
Golang Reflection and Interfaces

Before diving into interfaces and reflection I want to give a bit of background on the use case I had to apply them to.

Managing struct metadata as JSON components

In most data designs, data is still relational. Non-relational databases have their use cases, but when dealing with account/user management, RBAC, and other relational data models, a relational database is still the best tool for the job.
In an iterative development process all the columns of a table might not be known beforehand, so having a framework to iterate on quickly becomes very handy. For example, instead of adding new columns to a table, the concept of a JSON component column can be used: data points that are not searched on are stored in a JSON string, which allows the data model to be defined at the application level. Projects like OpenStack and Rancher already follow that strategy.
UPDATE: MySQL version 5.7.8 introduced native support for JSON: https://dev.mysql.com/doc/refman/5.7/en/json.html

Implementing JSON components in Go

A StructTag can be used to define which attributes should be stored in the JSON component.
When persisting the struct to the database, the component attributes are serialized into the Components JSON string and only that string is persisted to the database.

type Stack struct {
    StringComponent string  `json:"stringComponent,omitempty" genesis:"component"`
    IntComponent    int     `json:"intComponent,omitempty" genesis:"component"`
    BoolComponent   bool    `json:"boolComponent,omitempty" genesis:"component"`
    FloatComponent  float64 `json:"floatComponent,omitempty" genesis:"component"`
    Components      string
}

First attempt at Reflection

At first I created a method on the Stack type that iterates over all its attributes and builds up a JSON string from the attributes tagged with the component StructTag:

func (s *Stack) prep() {
    components := map[string]interface{}{}
    fields := reflect.TypeOf(s).Elem()
    values := reflect.ValueOf(s).Elem()
    for i := 0; i < fields.NumField(); i++ {
        field := fields.Field(i)
        value := values.Field(i)
        if isComponent(field) {
            components[field.Name] = value.Interface()
        }
    }
    c, err := json.Marshal(components)
    if err != nil {
        fmt.Printf("Error creating components JSON object: %v\n", err)
        return
    }
    s.Components = string(c)
}

Go Playground sample
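
The isComponent helper referenced in prep isn't shown in the post; a minimal sketch of what it could look like, assuming the genesis:"component" tag convention from the struct definition above:

// isComponent reports whether a struct field is tagged as a JSON component,
// i.e. carries the `genesis:"component"` StructTag.
func isComponent(field reflect.StructField) bool {
    return field.Tag.Get("genesis") == "component"
}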

The main problem with the approach above is that the prep method is tied to the Stack struct, so other structs can't reuse it.

Second attempt at Reflection

Instead of defining prep as a method on the struct, Prep can be made a public function: any struct pointer can then be passed as an argument and the function takes care of building the JSON component string via reflection.

func Prep(obj interface{}) {
    components := map[string]interface{}{}
    fields := reflect.TypeOf(obj).Elem()
    values := reflect.ValueOf(obj).Elem()
    for i := 0; i < fields.NumField(); i++ {
        field := fields.Field(i)
        value := values.Field(i)
        if isComponent(field) {
            components[field.Name] = value.Interface()
        }
    }
    c, err := json.Marshal(components)
    if err != nil {
        fmt.Printf("Error creating components JSON object: %v\n", err)
        return
    }
    values.FieldByName("Components").SetString(string(c))
}

Go Playground sample
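
A quick usage sketch (the field values are made up for illustration; note that a pointer must be passed so the Components field can be set):

s := &Stack{StringComponent: "foo", IntComponent: 42}
Prep(s)
fmt.Println(s.Components)
// {"BoolComponent":false,"FloatComponent":0,"IntComponent":42,"StringComponent":"foo"}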

Some important points to keep in mind:

  • Go has static and underlying types
  • Difference between Types and Kinds:
    • A Kind represents the specific kind of type that a Type represents.
    • In other words, a Kind is the underlying type and the Type is the static type
    • Example: http://play.golang.org/p/jAczBqbx0a (a similar sketch follows below)
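
A small illustration of that difference, along the lines of the playground example above:

s := &Stack{}
v := reflect.ValueOf(s)
fmt.Println(v.Kind())        // ptr         (the underlying kind)
fmt.Println(v.Type())        // *main.Stack (the static type)
fmt.Println(v.Elem().Kind()) // struct
fmt.Println(v.Elem().Type()) // main.Stack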

Even though the obj argument is an empty interface, its Kind at runtime will be Ptr and its Type will be *Stack (or a pointer to whatever struct is passed in).
That is important to understand, since to manipulate struct fields with reflection you need a Value whose Kind is Struct.
For example, either of the following statements would panic:

reflect.ValueOf(obj).NumField()  

or

reflect.ValueOf(obj).Field(0)  
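
One way to guard against that panic is to check the Kind before touching any fields; a minimal sketch with a hypothetical numFields helper:

// numFields safely returns the number of fields of a struct or struct pointer.
func numFields(obj interface{}) (int, error) {
    v := reflect.ValueOf(obj)
    if v.Kind() == reflect.Ptr {
        v = v.Elem() // dereference the pointer
    }
    if v.Kind() != reflect.Struct {
        return 0, fmt.Errorf("expected a struct, got %s", v.Kind())
    }
    return v.NumField(), nil // safe: the Kind is Struct here
}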

Accessing a Struct from a Pointer

Initially this might seem a bit alien; however, I like to relate it to how you can dereference pointers in C.
In C you can use the -> notation to access the values a pointer points to:

#include <stdio.h>

struct Stack {
  int x;
};

int main(void) {
  struct Stack s;
  struct Stack *p;
  p = &s;
  s.x = 1;
  printf("s.x %d\n", s.x);   // 1
  printf("&s %p\n", &s);     // address of s
  printf("p %p\n", p);       // address of s
  printf("&p %p\n", &p);     // address of the pointer itself
  printf("p->x %d\n", p->x); // 1
  return 0;
}

In Golang the concept is similar but the syntax is a bit different. You can use the Elem() method, which returns the value the interface contains; in the case of a pointer it returns the value being pointed to.

reflect.ValueOf(obj).Elem()  

and since the pointer points to a struct, the field methods can be used:

reflect.ValueOf(obj).Elem().NumField()  
reflect.ValueOf(obj).Elem().Field(0)
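
One related caveat worth noting (a common pitfall, not covered above): fields can only be set through reflection if the value is addressable, which is why Prep receives a pointer. A quick sketch:

s := Stack{}
// Passing the value: the copy is not addressable, so fields cannot be set.
fmt.Println(reflect.ValueOf(s).FieldByName("Components").CanSet()) // false
// Passing a pointer and dereferencing with Elem(): the field is settable.
fmt.Println(reflect.ValueOf(&s).Elem().FieldByName("Components").CanSet()) // true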

Installing MySQL on CentOS 7

In CentOS 7 MySQL has been replaced in the official repositories and the database of choice is MariaDB.

What's MariaDB?

MariaDB is a fork of MySQL created by the lead developers of MySQL after Oracle acquired Sun.
A detailed history of MariaDB can be found here.
As far as feature comparison goes, there is a good article on the MariaDB site showcasing the technical differences.

Installing MySQL

First, add the community MySQL repo to CentOS:

rpm -Uvh http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm  

Second, install MySQL:

yum install -y mysql-server  

Finally, enable start on boot:

chkconfig mysqld on  
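
To start the service right away without waiting for a reboot (assuming the SysV-style service wrapper still available on CentOS 7):

service mysqld start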

OpenStack High Availability Features

Up to this point we have been researching possible solutions to give an OpenStack cloud deployment as many high availability features as possible.

Before the Folsom release, H.A. features were not built into the OpenStack service components.
Following a large number of requests from the OpenStack community, H.A. is being addressed as part of the project starting with the Folsom release. The features are still being introduced and are in a test phase, and there aren't many production deployments out there yet, but with the help and feedback of the community the OpenStack developers believe that by the time the next version (Grizzly) is released, OpenStack H.A. features will be automated and production-ready from the get-go.

Getting into the details of the H.A. features available in Folsom:
Instead of reinventing the wheel, OpenStack decided to go with a proven and robust H.A. solution already available in the market: Pacemaker. With more than half a decade of production deployments, Pacemaker is a proven solution when it comes to providing H.A. features to a vast range of services.

Specifically looking at the technologies involved with OpenStack, the role of H.A would be to prevent:

  • System downtime — the unavailability of a user-facing service beyond a specified maximum amount of time, and
  • Data loss — the accidental deletion or destruction of data.

In the end the focus is to eliminate Single Points of Failure in the cluster architecture.
A few examples:

  • Redundancy of network components, such as switches and routers,
  • Redundancy of applications and automatic service migration,
  • Redundancy of storage components,
  • Redundancy of facility services such as power, air conditioning, fire protection, and others.

Pacemaker relies on the Corosync project for reliable cluster communications. Corosync implements the Totem single-ring ordering and membership protocol and provides UDP and InfiniBand based messaging, quorum, and cluster membership to Pacemaker.
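
As a concrete illustration, the ring Corosync uses for cluster messaging is described in /etc/corosync/corosync.conf; a rough sketch (the network addresses below are made-up placeholders):

totem {
  version: 2
  secauth: off
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.42.0   # network the ring binds to
    mcastaddr: 226.94.1.1       # multicast group for cluster messaging
    mcastport: 5405
  }
}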

An OpenStack high-availability configuration uses existing native Pacemaker RAs (such as those managing MySQL databases or virtual IP addresses), existing third-party RAs (such as for RabbitMQ), and native OpenStack RAs (such as those managing the OpenStack Identity and Image Services).

Even though high availability features exist for native OpenStack components and external services, they are not yet automated in the project, so whatever H.A. features are needed in the cloud deployment must be installed and configured manually.

A quick summary of how a Pacemaker setup would look:
[diagram: Pacemaker cluster]

Pacemaker creates a cluster of nodes and uses Corosync to establish communication between them.

Besides working with RabbitMQ, Pacemaker can also bring H.A. features to a MySQL cluster; the steps would be (a configuration sketch follows the list):

  • configuring a DRBD (Distributed Replicated Block Device) device for use by MySQL,
  • configuring MySQL to use a data directory residing on that DRBD device,
  • selecting and assigning a virtual IP address (VIP) that can freely float between cluster nodes,
  • configuring MySQL to listen on that IP address,
  • managing all resources, including the MySQL daemon itself, with the Pacemaker cluster manager.
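
A rough sketch of what the corresponding Pacemaker (crm shell) configuration might look like; the resource names, device path, and VIP below are made-up placeholders loosely following this approach:

primitive p_drbd_mysql ocf:linbit:drbd \
    params drbd_resource="mysql" op monitor interval="15s"
ms ms_drbd_mysql p_drbd_mysql \
    meta master-max="1" clone-max="2" notify="true"
primitive p_fs_mysql ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/var/lib/mysql" fstype="ext4"
primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
    params ip="192.168.42.101" cidr_netmask="24"
primitive p_mysql ocf:heartbeat:mysql \
    params datadir="/var/lib/mysql"
group g_mysql p_fs_mysql p_ip_mysql p_mysql
# MySQL must run where the DRBD device is primary, and only after promotion
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start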

More information can be found at:
DRBD
RabbitMQ
Towards a highly available (HA) open cloud: an introduction to production OpenStack
Stone-IT
Corosync

OpenNebula setup, Part 1

This will be the first of many blog posts to come outlining the process of setting up OpenNebula on a CentOS box and managing a small cluster of VMs.

First, a simple and good definition for OpenNebula:

…the open-source industry standard for data center virtualization, offering the most feature-rich, flexible solution for the comprehensive management of virtualized data centers to enable on-premise IaaS clouds

This diagram clearly shows the role of OpenNebula as far as cloud infrastructure goes:

Even though the OpenNebula website is very complete and provides lots of good documentation and reference guides, a lot of that information is still out of context for me.
I haven't dealt with cloud infrastructure before, so it is hard to put all these features and solutions that OpenNebula provides into context.
I guess once we have a small cloud set up, the problems OpenNebula was designed to solve will start becoming clear.

With all that being said, let's get started!

I’ll be following the official tutorial posted on opennebula.org:

There are a few ways to install OpenNebula: there are prebuilt packages for a few Linux distros, and the source code is also provided for whoever wants to build from source.
Later I plan to build from source; for now I'll be installing from one of their pre-built packages.

Download page

I selected the option:
OpenNebula 3.8.1 Download Source, RHEL/CentOS, Debian, openSUSE and Ubuntu Binary Packages Now!
Which took me to another download page where I selected the distro I was interested in, in my case CentOS.

The downloaded tar file came with three packages and the src code:

opennebula-3.8.1-1.x86_64.rpm
opennebula-java-3.8.1-1.x86_64.rpm
opennebula-sunstone-3.8.1-1.x86_64.rpm
src/

At first I tried to install the rpm packages by typing:

[sourcecode language="bash"]
[diogogmt@localhost opennebula-3.8.1]$ sudo rpm -Uvh opennebula-3.8.1-1.x86_64.rpm
error: Failed dependencies:
    libxmlrpc++.so.4()(64bit) is needed by opennebula-3.8.1-1.x86_64
    libxmlrpc_client++.so.4()(64bit) is needed by opennebula-3.8.1-1.x86_64
    libxmlrpc_server++.so.4()(64bit) is needed by opennebula-3.8.1-1.x86_64
    libxmlrpc_server_abyss++.so.4()(64bit) is needed by opennebula-3.8.1-1.x86_64
    rubygem-json is needed by opennebula-3.8.1-1.x86_64
    rubygem-nokogiri is needed by opennebula-3.8.1-1.x86_64
    rubygem-rack is needed by opennebula-3.8.1-1.x86_64
    rubygem-sequel is needed by opennebula-3.8.1-1.x86_64
    rubygem-sinatra is needed by opennebula-3.8.1-1.x86_64
    rubygem-sqlite3-ruby is needed by opennebula-3.8.1-1.x86_64
    rubygem-thin is needed by opennebula-3.8.1-1.x86_64
    rubygem-uuidtools is needed by opennebula-3.8.1-1.x86_64
    rubygems is needed by opennebula-3.8.1-1.x86_64
[/sourcecode]

As shown above, the install failed because some dependencies were missing.

Instead of installing each dependency by hand, yum has a nice feature that resolves and installs all the dependencies automatically:

[sourcecode language="bash"]
[diogogmt@localhost opennebula-3.8.1]$ sudo yum localinstall opennebula-3.8.1-1.x86_64.rpm
[/sourcecode]

After installing all three packages, the next step was to install the required Ruby gems.

[sourcecode language="bash"]
sudo /usr/share/one/install_gems
[/sourcecode]

install_gems is nothing more than a Ruby script.
A snippet of the script:

[sourcecode language="ruby"]
DISTRIBUTIONS={
    :debian => {
        :id => ['Ubuntu', 'Debian'],
        :dependencies => {
            SQLITE => ['gcc', 'libsqlite3-dev'],
            'mysql' => ['gcc', 'libmysqlclient-dev'],
            'curb' => ['gcc', 'libcurl4-openssl-dev'],
            'nokogiri' => %w{gcc rake libxml2-dev libxslt1-dev},
            'xmlparser' => ['gcc', 'libexpat1-dev'],
            'thin' => ['g++'],
            'json' => ['make', 'gcc']
        },
        :install_command => 'apt-get install',
        :gem_env => {
            'rake' => '/usr/bin/rake'
        }
    },
    :redhat => {
        :id => ['CentOS', /^RedHat/],
        :dependencies => {
            SQLITE => ['gcc', 'sqlite-devel'],
            'mysql' => ['gcc', 'mysql-devel'],
            'curb' => ['gcc', 'curl-devel'],
            'nokogiri' => %w{gcc rubygem-rake libxml2-devel libxslt-devel},
            'xmlparser' => ['gcc', 'expat-devel'],
            'thin' => ['gcc-c++'],
            'json' => ['make', 'gcc']
        },
        :install_command => 'yum install'
    },
    :suse => {
        :id => [/^SUSE/],
        :dependencies => {
            SQLITE => ['gcc', 'sqlite3-devel'],
            'mysql' => ['gcc', 'libmysqlclient-devel'],
            'curb' => ['gcc', 'libcurl-devel'],
            'nokogiri' => %w{rubygem-rake gcc rubygem-rake libxml2-devel libxslt-devel},
            'xmlparser' => ['gcc', 'libexpat-devel'],
            'thin' => ['rubygem-rake', 'gcc-c++'],
            'json' => ['make', 'gcc']
        },
        :install_command => 'zypper install'
    }
}
[/sourcecode]

It checks which distro you are running and then installs the correct packages for it.

When I tried to run the script I bumped into two problems:

[sourcecode language="bash"]
[diogogmt@localhost opennebula-3.8.1]$ sudo /usr/share/one/install_gems
mkmf.rb can't find header files for ruby at /usr/lib/ruby/ruby.h
ruby development package is needed to install gems

[diogogmt@localhost opennebula-3.8.1]$ sudo /usr/share/one/install_gems
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31: command not found: lsb_release -a
lsb_release command not found. If you are using a RedHat based
distribution install redhat-lsb
[/sourcecode]

To fix those problems I installed the following packages:

  • ruby-devel
  • redhat-lsb

This time, running the install_gems script installed all the dependencies without errors.

The next sections of the official OpenNebula tutorial explained how to configure the oneadmin user on the Front-End and the Hosts.

They refer to the Front-End as the machine that has OpenNebula installed, and to Hosts as the machines belonging to the cloud setup.

An important point to mention is that OpenNebula only needs to be installed on the Front-End; the hosts only need an SSH server, a hypervisor, and Ruby installed.

For the oneadmin user configuration I followed the steps listed in the tutorial, but I didn't have a chance to look deeper into what is actually happening.

I’ll go over that configuration steps again once I configure a host computer.
Since right now there are no other computers with hypervisors installed in the network it is hard to test if the oneamdin is properly configured.

The last part of the tutorial was to actually start OpenNebula and verify everything was installed:

Note: all interaction with OpenNebula needs to be done via the oneadmin user.

So before running the commands below I needed to switch the terminal session to the oneadmin user:

[sourcecode language="bash"]
su oneadmin
[/sourcecode]

First, set the credentials for the oneadmin user:

[sourcecode language="bash"]
$ mkdir ~/.one
$ echo "oneadmin:password" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth
[/sourcecode]

To start OpenNebula:

[sourcecode language="bash"]
one start
[/sourcecode]

However, OpenNebula didn't start; I got an error instead:

[sourcecode language="bash"]
[oneadmin@localhost ~]$ one start
Could not open database.
oned failed to start
[/sourcecode]

Well, it turns out I couldn't even start MySQL, so no wonder OpenNebula wasn't able to open the database:

[sourcecode language="bash"]
[diogogmt@localhost opennebula-3.8.1]$ mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
[/sourcecode]

Searching for the error above, I found out that MySQL wasn't fully installed on my machine; I wasn't able to start it as a service.

I thought OpenNebula had installed MySQL earlier, but apparently it hadn't. Either way, I just installed the MySQL packages again:

[sourcecode language="bash"]
yum install mysql-server mysql mysql-client
[/sourcecode]

Then I configured MySQL to start on boot and started the service:

[sourcecode language="bash"]
chkconfig --levels 235 mysqld on
service mysqld start
[/sourcecode]

After running those commands I was able to start mysql.

I then tried one more time to start OpenNebula:

[sourcecode language="bash"]
[oneadmin@localhost ~]$ one start
Could not open database.
oned failed to start
[/sourcecode]

But again I got the same error.

I started looking online for possible solutions to the error but didn't have any luck.

I remembered reading at the beginning of the tutorial that all the OpenNebula logs are saved.
They have a very good diagram explaining all the directories used by OpenNebula:

I took a look at the /var/log/one/oned.log file, which provided some very good information:

[sourcecode]
----------------------------------------
OpenNebula Configuration File
----------------------------------------
AUTH_MAD=AUTHN=ssh,x509,ldap,server_cipher,server_x509,EXECUTABLE=one_auth_mad
DATASTORE_LOCATION=/var/lib/one//datastores
DATASTORE_MAD=ARGUMENTS=-t 15 -d fs,vmware,vmfs,iscsi,lvm,EXECUTABLE=one_datastore
DB=BACKEND=sqlite
[/sourcecode]

The DB backend was set to sqlite.
I didn't have sqlite installed, so no wonder OpenNebula wasn't able to open the DB.

I went to the configuration file where all the settings for OpenNebula are defined:

[sourcecode]
/etc/one/oned.conf
[/sourcecode]

Indeed the DB was set to sqlite:

[sourcecode]
DB = [ backend = "sqlite" ]
# Sample configuration for MySQL
# DB = [ backend = "mysql",
# server = "localhost",
# port = 0,
# user = "oneadmin",
# passwd = "oneadmin",
# db_name = "opennebula" ]
[/sourcecode]

I uncommented the configuration for MySQL and commented out the one for sqlite.

The last thing left to do was to create the database and user for mysql:

[sourcecode language="mysql"]
mysql> CREATE DATABASE opennebula;

mysql> GRANT ALL ON opennebula.* TO oneadmin@localhost IDENTIFIED BY 'oneadmin';
[/sourcecode]

This time when I tried to run OpenNebula everything worked as expected!

[sourcecode language="bash"]
[oneadmin@localhost diogogmt]$ one start
[oneadmin@localhost diogogmt]$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
[/sourcecode]

The next step is to understand how the SSH configuration works between the Front-End and the Hosts, and to try to set up a small cluster of hypervisors managed by OpenNebula.