
Creating and Deploying a private catalog on the Rancher Platform

With the announcement of private catalog support in the Rancher platform we decided to create a custom catalog for our application stack.

If you are looking for a complete overview and a step-by-step guide on how to create a private catalog, the best place to start is the January online meetup, Building a Docker Application Catalog.

In this post I'll guide you through the steps I took to create a catalog for our application, the Aytra Cloud.

The main concepts behind a Catalog on Rancher are:

  • Expose the docker-compose and rancher-compose formats to define a catalog blueprint
  • Store templates in a git repo
  • Allow configuration and deployment of stacks
  • Support versioning and upgrades

Creating the Catalog file structure

So, in a nutshell, you will need a git repo with the following directory structure:
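A minimal sketch of what that layout could look like (it follows the common templates/<app>/<version>/ convention used by Rancher catalogs; the "aytra" directory and the version folders are placeholders):

catalogs/
└── templates/
    └── aytra/
        ├── config.yml
        ├── 0/
        │   ├── docker-compose.yml
        │   └── rancher-compose.yml
        └── 1/
            ├── docker-compose.yml
            └── rancher-compose.yml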

In the config.yml file you can define the basic configuration for your catalog, e.g.:

name: Aytra  
description: |  
  Aytra Cloud enables our customers to fully realize the potential of cloud computing by allowing them to automatically migrate application workloads across public and private clouds to achieve an ideal balance of security, cost, and performance.
version: 7.5  
category: "Cloud Management Platform"  

To define different versions of the same catalog entry, Rancher uses the concept of numbered version directories, each containing its own docker-compose and rancher-compose templates.

One point to note about the rancher-compose file is that a .catalog section must be defined, along with a questions attribute; for example:

.catalog:
  name: Aytra
  version: 7.5
  description: |
    Aytra Cloud enables our customers to fully realize the potential of cloud computing by allowing them to automatically migrate application workloads across public and private clouds to achieve an ideal balance of security, cost, and performance.
  uuid: aytra-0
  questions:
    - variable: "exo-scale"
      description: "Number of exosphere nodes."
      label: "Number of Exosphere Nodes:"
      required: true
      default: 1
      type: "int"
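The answers to these questions can then be interpolated into the templates. As a hedged sketch (the exosphere service name is a placeholder, and this assumes the usual ${variable} interpolation used by Rancher catalog templates), the exo-scale answer could drive the service scale in the same rancher-compose.yml:

exosphere:
  scale: ${exo-scale}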

Adding the Catalog to Rancher

Now, with the catalog file structure in place, you can add your private git repo to Rancher under the Admin -> Settings tab.

A couple of tweaks you need to make to enable private git repos:

1. Add the username and password of your git user to the git URL: https://username:password@bitbucket.org/org/catalogs.git

2. Bind-mount the ssh keys for your git repo into the rancher server container (see the sketch below). This approach will only work after v0.56 is released, since currently the server container doesn't come with the openssh binaries installed.
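A minimal sketch of that bind mount when starting the server container (the host path, container path, and image tag here are assumptions, not taken from the official docs):

docker run -d -p 8080:8080 \
  -v /home/rancher/.ssh:/root/.ssh \
  rancher/server:latest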

Ideally you would be able to configure git users similarly to how the Jenkins git plugins do it, but that is not supported at the moment.

Viewing the Catalog

The rancher catalog service periodically polls the git repo for new changes to keep the catalogs up to date. So right after adding the git repo to Rancher, you should see your catalog listed under the Applications -> Catalogs menu.

Launching the Catalog

Now you should be able to launch your application catalog and get a running stack with your microservices.

Installing OpenStack, Quantum problems

During the following weeks we plan to expand more on the subject of setting up an OpenStack cloud using Quantum.
For now we have been experimenting with different Quantum functionality and settings.
At first Quantum might look like a black box, not due to its complexity, but because it deals with several different plugins and protocols; if you are not very familiar with them, it becomes hard to understand why Quantum is there in the first place.

In a nutshell, Quantum's role is to provide an interface for configuring the networking of multiple VMs in a cluster.

In the last few years the lines between system, network, and virtualization admins have become really blurry.
The classical unix admin is pretty much nonexistent nowadays, since most services are offered in the cloud, in virtualized environments.
And since everything seems to be migrating to the cloud, some networking principles that applied to physical networks in the past sometimes don’t translate very well to virtualized networks.

Later we’ll have some posts explaining what technologies and techniques underlie the network configuration of a cloud, in our case focusing specifically on OpenStack and Quantum.

With that being said, below are a few errors that came up during the configuration of Quantum:

1. ERROR [quantum.agent.dhcp_agent] Unable to sync network state.

This error is most likely caused by a misconfiguration of the rabbitmq server.
A few ways to debug the issue:
Check if the file /etc/quantum/quantum.conf on the controller node (where the quantum server is installed) has the proper rabbit credentials.
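For reference, the relevant settings in the [DEFAULT] section look roughly like this (the values below are placeholders; use the credentials of your own rabbitmq setup):

[sourcecode]
# /etc/quantum/quantum.conf (values are placeholders)
rabbit_host = 192.168.0.11
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
[/sourcecode]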

By default rabbitmq runs on port 5672, so run:

[sourcecode]
netstat -an | grep 5672
[/sourcecode]

and check if the rabbitmq server is up and running.

On the network node (where the quantum agents are installed), also check that /etc/quantum/quantum.conf has the proper rabbit credentials (the same settings shown above).

If you are running a multihost setup, make sure the rabbit_host var points to the IP where the rabbitmq server is located.

Just to be safe, check that you have connectivity on the management network by pinging all the hosts in the cluster, and restart the quantum server and the rabbitmq server as well as the quantum agents.

2. ERROR [quantum.agent.l3agent] Error running l3nat daemon_loop

This error requires a very simple fix; however, it was very difficult to find information about the problem online.
Luckily, I found one thread on the mailing list of the Fedora project explaining the problem in more detail.

This error is due to the fact that keystone authentication is not working.
A quick explanation – the l3 agent makes use of the quantum http client to interface with the quantum service.
This requires keystone authentication. If this fails then the l3 agent will not be able to communicate with the service.

To debug this problem, check if the quantum server is up and running.
By default the server runs on port 9696:

[sourcecode]
root@folsom-controller:/home/senecacd# netstat -an | grep 9696
tcp 0 0 0.0.0.0:9696 0.0.0.0:* LISTEN
tcp 0 0 192.168.0.11:9696 192.168.0.12:40887 ESTABLISHED
[/sourcecode]

If nothing shows up, it is because the quantum server is down; try restarting the service to see if the problem goes away:

[sourcecode]
service quantum-server restart
[/sourcecode]

You can also try to ping the quantum server from the network node (in a multihost scenario):

[sourcecode]
root@folsom-network:/home/senecacd# nmap -p 9696 192.168.0.11

Starting Nmap 5.21 ( http://nmap.org ) at 2013-01-28 08:07 PST
Nmap scan report for folsom-controller (192.168.0.11)
Host is up (0.00038s latency).
PORT STATE SERVICE
9696/tcp open unknown
MAC Address: 00:0C:29:0C:F0:8C (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
[/sourcecode]

3. ERROR [quantum.agent.l3agent] Error running l3nat daemon_loop – rootwrap error

I didn’t come across this bug myself, but I found a few people running into this issue.
Kieran already wrote a good blog post explaining the problem and how to fix it.

You can check the bug discussion here.

4. Bad floating ip request: Cannot create floating IP and bind it to Port , since that port is owned by a different tenant.

This is just a problem of mixed credentials.
Kieran documented the solution for the issue here.

There is also a post on the OpenStack wiki talking about the problem.
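As a hedged illustration of what “mixed credentials” means in practice, make sure the environment variables used by the quantum client belong to the same tenant that owns the port (the values below are placeholders):

[sourcecode]
export OS_USERNAME=demo
export OS_PASSWORD=secret
export OS_TENANT_NAME=demo   # must match the tenant that owns the port
export OS_AUTH_URL=http://192.168.0.11:5000/v2.0/
[/sourcecode]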

Conclusion

This should help fix the problems that might arise with a Quantum installation.
If anybody knows about any other issues with Quantum or has any suggestions about the problems listed above please let us know!

Also check the official guide for other common errors and fixes.

OpenStack Overview

Recently we started moving away from our original setup plan, which was to have an OpenNebula cloud running on CentOS boxes, to an OpenStack cloud running on Ubuntu boxes.

I’ll try to give a quick overview of what OpenStack is and talk a little bit about some of its main components.

Before I get started, I really like this diagram that demonstrates what problem OpenStack is trying to solve.

This is a basic diagram demonstrating the typical usage of, and need for, a cloud platform.
OpenStack works on all levels of the cloud setup, from the low-level libraries that interact with the hypervisors to the web dashboard that allows users to interact with the cloud.
In the end OpenStack aggregates different solutions to offer a simple way to build public and private cloud platforms.

Some OpenStack history

Back in 2010 NASA and Rackspace started the OpenStack project with the goal of having an open standard for both hardware and software cloud solutions.
Currently there are more than 150 companies involved with the project, such as Intel, IBM, Cisco and HP to name a few.

In terms of activity, OpenStack seems to be very active, with two major releases per year.
Their release history:

  1. Austin – 21 October 2010
  2. Bexar – 3 February 2011
  3. Cactus – 15 April 2011
  4. Diablo – 22 September 2011
  5. Essex – 5 April 2012
  6. Folsom – 27 September 2012
  7. Grizzly – 4 April 2013

OpenStack is divided into the following categories:

Computing

  • Nova – OpenStack Compute
  • Glance – OpenStack Image Service

Storage

  • Swift – OpenStack Object Storage
  • Cinder – OpenStack Block Storage

Common Projects

  • Keystone – OpenStack Identity
  • Horizon – OpenStack Dashboard

Network

  • Quantum – OpenStack Networking

Now let’s look in more detail at the components listed above:

Nova – Compute

Nova can be summarized as a cloud controller.
It centralizes the management of a cloud and handles the communication between all the different components; here is a good diagram that illustrates its role:

The main components of Nova are:

  1. API server: handles requests from the user and relays them to the cloud controller.
  2. Cloud controller: handles the communication between the compute nodes, the networking controllers, the API server and the scheduler.
  3. Scheduler: selects a host to run a command.
  4. Compute worker: manages computing instances: launch/terminate instances, attach/detach volumes.
  5. Network controller: manages networking resources: allocate fixed IP addresses, configure VLANs.
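To make that flow concrete, here is a hedged sketch of booting an instance with the nova CLI (the image id, flavor id, and instance name are placeholders): the request goes through the API server, the scheduler picks a host, and a compute worker launches the VM.

[sourcecode]
nova boot --image <image-id> --flavor <flavor-id> test-instance
nova list
[/sourcecode]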

List of features:

  • Manage virtualized commodity server resources
  • Manage Local Area Networks (LAN)
  • API with rate limiting and authentication
  • Distributed and asynchronous architecture
  • Virtual Machine (VM) image management
  • Live VM management
  • Floating IP addresses
  • Security Groups
  • Role Based Access Control (RBAC)
  • Projects & Quotas
  • VNC Proxy through web browser
  • Store and Manage files programmatically via API
  • Least privileged access design
  • Dashboard with fully integrated support for self-service provisioning
  • VM Image Caching on compute nodes

Glance – Image Service

Glance takes care of managing virtual machine images.
Some of its functionality:

  • List Available Images
  • Retrieve Image Metadata
  • Retrieve Raw Image Data
  • Add a New Image
  • Update an Image
  • List Image Memberships
  • List Shared Images
  • Add a Member to an Image
  • Remove a Member from an Image
  • Replace a Membership List for an Image

Glance is only responsible for creating and managing virtual machine images; the deployment and maintenance of the VMs is taken care of by Nova.
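As a hedged example of a couple of the operations above using the glance CLI (the image name and file are placeholders, and the exact flags may vary between client versions):

[sourcecode]
glance image-create --name "ubuntu-12.04" --disk-format qcow2 \
       --container-format bare --is-public True < ubuntu-12.04.img
glance image-list
[/sourcecode]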

Swift – Object Storage

The Swift component is a container/object storage system.
Different from SAN and NAS file systems, Swift cannot be mounted, since it is not a file system itself.
You can think of Swift as a distributed storage system for static data, such as images, videos, archive data, etc.
All the files are exposed through an HTTP interface, typically with a REST API.

A good analogy is to think of containers as folders in a regular OS such as Windows.
Objects are chunks of data living inside containers.
So when you upload a picture to Swift it will be wrapped in an Object and placed inside a container.
The object can contain some metadata in the form of key value pairs.
No encryption or compression is performed on objects upon storage.
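A hedged sketch of what that REST interface looks like in practice (the host, account, token, and names below are placeholders): create a container with a PUT, then PUT an object into it.

[sourcecode]
# create the container
curl -X PUT -H "X-Auth-Token: $OS_AUTH_TOKEN" \
     http://swift.example.com:8080/v1/AUTH_demo/photos

# upload an object into the container
curl -X PUT -T photo.jpg -H "X-Auth-Token: $OS_AUTH_TOKEN" \
     http://swift.example.com:8080/v1/AUTH_demo/photos/photo.jpg
[/sourcecode]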
List of features:

  • Leverages commodity hardware
  • HDD/node failure agnostic
  • Unlimited storage
  • Multi-dimensional scalability (scale out architecture)
  • Account/Container/Object structure
  • Built-in replication
  • Easily add capacity unlike RAID resize
  • No central database
  • RAID not required
  • Built-in management utilities
  • Drive auditing
  • Expiring objects
  • Direct object access
  • Realtime visibility into client requests
  • Supports S3 API
  • Restrict containers per account
  • Support for NetApp, Nexenta, SolidFire
  • Snapshot and backup API for block volumes
  • Standalone volume API available
  • Integration with Compute

Keystone – Identity

To summarize what Keystone does in one sentence:

keep track of users and what they are permitted to do

Keystone handles the user management of the cloud system, allowing different permissions to be configured for different users.
One of the keywords they use when talking about Keystone is tenants.
Their explanation of a tenant: something that can be thought of as a project, group, or organization.
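As a hedged sketch of how that maps to actual commands (using the legacy keystone CLI of that era; the names, password, and ids are placeholders):

[sourcecode]
keystone tenant-create --name demo
keystone user-create --name alice --pass secret --tenant-id <demo-tenant-id>
keystone user-role-add --user-id <alice-id> --role-id <member-role-id> --tenant-id <demo-tenant-id>
[/sourcecode]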

A diagram demonstrating the basic flow of Keystone:

Horizon – Dashboard

Horizon is the web interface available for OpenStack; it’s very much like what Sunstone is for OpenNebula.

Quantum – Networking

Quantum is a new addition to OpenStack.
Up to the Essex release, networking was built into the Nova component. However, to provide more functionality in terms of network management, Quantum was created as a separate project focused on managing the network of an OpenStack deployment.

Conclusion

With all that being said, and based on the image from the beginning of the blog post, we can have a better idea of where the OpenStack components fit in the overall cloud setup:

(Figure: OpenStack logical architecture – Folsom release)

Thoughts

As somebody who is just getting started with cloud computing, I would say that between OpenNebula and OpenStack the latter is a better option. Even though OpenNebula is an older project, OpenStack seems more mature.
A few things I noticed:
There is a lot more up-to-date information about OpenStack, plus it is very easy to navigate through their website and find information about the different solutions they provide.
The community appears to be a lot more active with OpenStack than with OpenNebula.
Maybe it is just me, but after a few minutes reading about OpenStack I was led to their official repo on GitHub where all the related projects are hosted.
I really like that, since it is an easy way to stay in close touch with the project and follow the latest additions; it also provides a channel of communication between the users and the developers and makes it a lot easier for people to contribute back to the project.
I didn’t have the same feeling when it came to OpenNebula; the project didn’t feel as open as OpenStack, and I felt there were a lot more hurdles to get started with it.

Speaking strictly technically, it’s hard for me to say which one is better.
Reading about them both, I could definitely see a lot of areas where they overlap; for example, they both use noVNC for the visualization of VMs and libvirt to communicate with the hypervisors.
They both have a web interface to manage the cloud, and they both have options to configure VM templates.

A big difference is that OpenNebula is written in Ruby while OpenStack is written in Python.
I’m sure they must differ in other architectural decisions as well, but for the most part they are very similar.

For more information about OpenStack:
OpenStack wiki
Keystone basic concepts
Underlying technologies
Architecture overview
Documentation
OpenStack Storage
Very good blog about python & openstack
OpenStack Wikipedia
RackSpace Wikipedia
AMQP

Queue & Cyclic Queue visualization with ProcessingJS

Previous posts:

Updates

I finished refactoring the code and loading all the scripts using requireJS.
RequireJS not only helped modularize the code but it also helped me find/fix a few bugs.

I knew about modular script loaders from working with nodeJS and commonJS.
However, I had never used a script loader on the client side, and after playing with requireJS I can’t see myself not using one anymore.
On one of my last projects we didn’t anticipate the amount of client-side code we would need to write, and thus didn’t plan and design a proper architecture. The code was mostly composed of event handlers and some logic needed to determine the flow of the app, and it tried to mimic the functionality provided by BackboneJS and SpineJS.
With that being said, once the app grew, the event handling code started to get bloated and hard to maintain, and some logic ended up being repeated across files.
RequireJS more than anything forces you to think in a modular way when creating web apps, and that helps maintain and also scale the app as it grows.

Here is an overview of the structure so far:

The Element object is the tile that is used by the processing canvas to represent an element of a data structure.
Since the Element can be reused between different data structures, it doesn’t contain any logic regarding how it should behave. It just knows where to position itself and how to animate its entry and exit from the canvas.
That has worked so far, since all three data structures are somewhat similar. However, the idea is to have an abstract Element and derive separate elements for each data structure/algorithm. This would give more flexibility to each element once more examples are added.

Entry Point

RequireJS allows you to create an entry point for the application, where you can initialize any objects you need.

main.js

  
require(
  [
    "./stack/stack",
    "./queue/queue",
    "./cyclic-queue/cyclic-queue"
  ],
  function () {

    // Init Stack
    var Stack = require("./stack/stack");
    var stack = new Stack();
    stack.init();

    // Init Queue
    var Queue = require("./queue/queue");
    var queue = new Queue();
    queue.init();

    // Init CyclicQueue
    var CyclicQueue = require("./cyclic-queue/cyclic-queue");
    var cyclicQueue = new CyclicQueue();
    cyclicQueue.init();
  }
);

The first argument passed to require is a list of scripts to load before running the code.
The second argument is the code that will run once all the scripts specified in the first argument are loaded.

In the example above, the stack, queue and cyclic-queue scripts are loaded; then the objects are instantiated and initialized.
All the logic of how they need to be initialized and how they interact with the browser events is defined within the modules themselves, so to add a new data structure/algorithm the only thing that needs to be added to main.js is the instantiation and initialization of the new object.
*An option for a future iteration is to create a factory to instantiate the different objects, moving the responsibility of creating objects away from main.js.
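For context, each example is defined as its own requireJS module. A minimal sketch of what such a module might look like (the module path, element id, and Element dependency below are assumptions for illustration; the real code in the repo may differ):

// stack/stack.js – hedged sketch, not the actual implementation
define(["../element/element"], function (Element) {

  function Stack() {
    this.elements = [];
  }

  // Called once from main.js: wire up the browser events for this example.
  Stack.prototype.init = function () {
    var self = this;
    var button = document.getElementById("stack-push"); // hypothetical element id
    if (button) {
      button.addEventListener("click", function () {
        self.push(self.elements.length + 1);
      });
    }
  };

  Stack.prototype.push = function (value) {
    this.elements.push(new Element(value));
  };

  return Stack;
});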

Processing canvas

Since each example has its own canvas, a different instance of processing needs to be created.
However, not only does the specific code for each example need a reference to the processing canvas, but so does the Element object, and right now all examples use the same Element.
Problem: the Element object wasn’t referencing the right instance of processingJS.

Solution:
Use requireJS to modularize the instantiation of the processingJS objects.

So all the Elements will always reference the right canvas, and their code can be reused across the examples.
*A factory would probably be useful here to centralize the instantiation of the processing objects.
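A hedged sketch of what that modularization could look like: a requireJS module that creates (or reuses) one Processing instance per canvas, which the Element and example modules can then depend on. The module path, canvas ids, and caching approach are assumptions for illustration; it also assumes processing.js has already been loaded and exposes the global Processing constructor.

// processing/canvas.js – hedged sketch
define(function () {
  var instances = {};

  return {
    // Return the Processing instance bound to a given canvas,
    // creating it on first use with the supplied sketch function.
    get: function (canvasId, sketch) {
      if (!instances[canvasId]) {
        var canvas = document.getElementById(canvasId);
        instances[canvasId] = new Processing(canvas, sketch);
      }
      return instances[canvasId];
    }
  };
});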

Some screen-shots:

You can find the code here and a live example here.

What’s Next?

There is still a bunch of stuff to be done and improved.
In the next iteration I plan to create a visualization for hash tables.
I also want to refactor the code a little bit more and use factories to create the processingJS canvases and to initialize the examples.

References:

Visualizing algorithms using Processing.js, part II

A quick update on the project:

So far I’ve created examples for:

  • Stack
  • Queue
  • Cyclic Queue

Before I move on to other algorithms, I started refactoring the existing code and reviewing the logic.
One of the tools I started to use was requireJS.
I refactored and restructured the stack example to load all its dependencies with requireJS.
The idea is to reuse some code across all the examples, and requireJS will definitely help with that.

You can check a live example here
GitHub Repo
What’s next?

  • Refactor Queue and Cyclic Queue
  • Use requireJS to load files for Queue and Cyclic Queue
  • Blog :)
