
Installing OpenStack Python clients on Mac OS X Yosemite

Up to Mavericks I had all my OpenStack Python clients installed and working on my Mac. However, after the upgrade to Yosemite I started to get errors about missing libraries.
After spending more time than I would have liked, I managed to get the OpenStack Python clients working again.

Remove older versions of Python

  • rm -rf /Library/Frameworks/Python.framework/Versions/2.7
  • rm -rf "/Applications/Python 2.7"
  • rm /usr/bin/python
  • brew rm python

Update the Python version

The latest available version at the time of writing was 3.4.

After the installation completes, make sure to update the Current symlink:

  • ln -s /Library/Frameworks/Python.framework/Versions/3.4 /Library/Frameworks/Python.framework/Versions/Current

and the python symlink:

  • ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 /usr/bin/python

You also need to update the symlinks for pip and easy_install:

  • ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/pip3.4 /usr/bin/pip
  • ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/pip3.4 /usr/local/bin/pip
  • ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/easy_install-3.4 /usr/bin/easy_install
  • ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/easy_install-3.4 /usr/local/bin/easy_install
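
A quick sanity check that the new symlinks resolve to the right binaries:

[sourcecode language="bash"]
# Both should resolve to the new symlinks created above
which python pip

# And report the expected versions
python --version   # Python 3.4.x
pip --version
[/sourcecode]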

Install libffi

Libffi is a dependency of python-glanceclient. If the lib is not present you might get an error like this:

[sourcecode]
/usr/bin/clang -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch i386 -arch x86_64 -g -I/usr/include/ffi -I/usr/include/libffi -I/Library/Frameworks/Python.framework/Versions/3.4/include/python3.4m -c c/_cffi_backend.c -o build/temp.macosx-10.6-intel-3.4/c/_cffi_backend.o

c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found
#include <ffi.h>
         ^
1 error generated.

    Note: will not use '__thread' in the C code
    The above error message can be safely ignored

error: command '/usr/bin/clang' failed with exit status 1
[/sourcecode]

To install the libffi I used brew:

  • brew install libffi
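
brew installs libffi keg-only, meaning it is not linked into /usr/local. Note where it ends up, since you will need its pkgconfig directory later (the exact Cellar path depends on your libffi version):

[sourcecode language="bash"]
# Print the install prefix of the brewed libffi
brew --prefix libffi

# The .pc file that pkg-config needs lives under lib/pkgconfig
ls "$(brew --prefix libffi)/lib/pkgconfig"
[/sourcecode]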

Install pbr

PBR is another dependency of python-glanceclient; without it you might get an error message like the following:

[sourcecode]
$ glance --help
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/bin/glance", line 6, in <module>
    from glanceclient.shell import main
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/__init__.py", line 24, in <module>
    import pbr.version
ImportError: No module named 'pbr'
[/sourcecode]

To install PBR:

  • pip install pbr

Finally, install the clients

Don't forget to export the PKGCONFIGPATH which will include the dir of the libffi installed with bew:
- export PKGCONFIGPATH when running the install command sudo PKGCONFIGPATH=/usr/local/Cellar/libffi/3.0.13/lib/pkgconfig/ python setup.py install
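
Putting it all together, here is a sketch of installing one of the clients from source (the repository URL is an assumption; any of the python-*client repos works the same way):

[sourcecode language="bash"]
git clone https://github.com/openstack/python-glanceclient.git
cd python-glanceclient

# Point pkg-config at the brewed libffi so cffi can find ffi.h
sudo PKG_CONFIG_PATH="$(brew --prefix libffi)/lib/pkgconfig" python setup.py install

# Smoke test
glance --help
[/sourcecode]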

Enabling CORS on a node.js server, Same Origin Policy issue

Recently we faced the famous “XMLHttpRequest doesn’t allow Cross-Origin Resource Sharing” error.

To overcome the problem a very simple solution was needed.

Below I’ll try to give a quick overview of what is CORS and how we managed to work around the issue.

Cross-Origin Resource Sharing – CORS

In a nutshell, CORS is the mechanism that allows a domain to request resources from another domain. Without it, if a page on http://websiteAAA tries to request resources from http://websiteBBB, the browser won’t allow it due to Same Origin Policy restrictions.

The reason for having Same Origin Policy rules applied in the browser is to prevent unauthorized websites from accessing content they don’t have permission for.

I found a great example that emphasizes the need to have Same Origin Policies enforced by the browser: Say you log in to a service, like Google for example, then while logged in you go and visit a shady website that’s running some malware on it. Without Same Origin Policy rules, the shady website would be able to query Google with the authentication cookies saved in the browser from your session, which of course is a huge vulnerability.

Since HTTP is a stateless protocol, the Same Origin Policy rules allow the browser to establish a connection using session cookies and still keep each cookie private to the domain that made the request, encapsulating the privileges of each “service” running in the browser.

With that being said, imagine a situation where you, as a developer, need to communicate with an API sitting on a different domain. In this scenario you don’t want to hit the Same Origin Policy restrictions.

Workaround 1 – Request resources from a server

The most common way to get around this problem is to make the API request from your own server, where Same Origin Policy rules are not applied, and then provide the data back to the browser. However, this can be exploited:

Last semester I created an example of how an attacker would be able to spoof whole websites and mount a phishing attack, circumventing Same Origin Policy restrictions.
The attack structure was very similar to how ARP poisoning is done.

A very brief overview of the attack:

  1. The user would land on an infected page
  2. The page would load a legitimate website by making a request from the attacker’s server, where Same Origin Policies are not applied.
  3. The attacker would inject some code into the response to monitor the victim’s activity
  4. After the victim’s credentials were stolen, the attacker would stop the attack and redirect the user to the originally requested page.

Spoofing the victim’s DNS would make the attack even harder to detect, but even without DNS spoofing this approach would still catch some careless users.

All the code for the example is available on GitHub.
The attack was built on top of a node.js server and Socket.IO.
The presentation slides for the attack can also be found here.

Workaround 2 – JSONP

Another way to circumvent the problem is by using JSONP (JSON with Padding). The Wikipedia article summarizes in a clear and simple way how JSONP works.

The magic of JSONP is to use a script tag to load the data and provide a callback to run when it finishes loading.

An example of using JSONP with jquery:

[sourcecode language="javascript"]
$.ajax({
  url: "http://website.com/file.json",
  dataType: "jsonp",
  success: function (data) {
    // Manipulate the response here
  }
});
[/sourcecode]

Even though making requests from your own server or using JSONP can get around Same Origin Policy restrictions, neither is the best solution, which is why browser vendors started implementing CORS.

With CORS a server can set the HTTP headers of the response with the information indicating if the resources can or can’t be loaded from a different origin.
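
You can see these headers from the command line as well; a quick sketch with curl (the URLs below are placeholders):

[sourcecode language="bash"]
# Send a request with an Origin header and dump only the response headers
curl -s -D - -o /dev/null -H "Origin: http://websiteAAA" http://websiteBBB/some/resource

# If the server allows cross-origin access, the response will include
# something like:
#   Access-Control-Allow-Origin: *
[/sourcecode]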

If you are curious and want to snoop around the HTTP response headers of a page, one way to do that is with the developer tools that come with WebKit.
Below is a screenshot of all the resources loaded by the Stack Overflow home page.

As you can see in the screenshot, the script loaded from careers.stackoverflow.com/gethired/js had the following HTTP header options appended to its response:

  • Access-Control-Allow-Headers
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Origin

That means that if you want to make an ajax call to careers.stackoverflow.com/gethired/js from your own page, the browser will not apply Same Origin Policy restrictions, since the careers.stackoverflow server has indicated that the script is allowed to be loaded from different domains.
*An important note: only http://careers.stackoverflow.com/gethired/js has the Same Origin rules turned off; the careers.stackoverflow.com domain still has them enabled on other pages.

This means you can enable the header options on a response level, making sure a few API calls are open to the public without putting your whole server in danger of being exploited.

This leads us to our problem.

The Problem

In the setup we currently have, one computer plays the role of the API server, and we were trying to query that API asynchronously from the browser, with the page being served from a different domain.

The result, as expected, was that the call was blocked by the browser.

Solution

Instead of hacking around and trying to make the requests from a different server or using JSONP techniques we simply added the proper header options to the responses of the API server.

We are building our API on a node.js server, and adding extra header options to the response could not have been easier.

First we added the response headers to one of the API calls and it worked perfectly. However, we wanted to add the options to all our API calls. The solution: use a middleware.

We created a middleware which sets the response header options and passes execution to the next registered function; the code looks like this:

[sourcecode language="javascript"]
// CORS middleware
var allowCrossDomain = function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  next();
}

app.configure(function () {
  app.set('port', config.interfaceServerPort);
  app.set('views', __dirname + '/views');
  app.set('view engine', 'jade');
  app.use(express.favicon());
  app.use(express.logger('dev'));
  app.use(express.bodyParser());
  app.use(express.methodOverride());
  app.use(allowCrossDomain);
  app.use(app.router);
  app.use(express.static(path.join(__dirname, 'public')));
});

app.configure('development', function(){
  app.use(express.errorHandler());
});

// API routes
app.get('/list/vms', routes.listGroup);
app.get('/list/vms/:ip', routes.listSingle);
app.get('/list/daemons', routes.listDaemons);
[/sourcecode]
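
To confirm the middleware is doing its job, you can hit one of the routes with curl (the port below is just a placeholder; use whatever config.interfaceServerPort is set to):

[sourcecode language="bash"]
curl -i http://localhost:3000/list/vms

# The response headers should now include:
#   Access-Control-Allow-Origin: *
#   Access-Control-Allow-Headers: X-Requested-With
[/sourcecode]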

That’s it for CORS. Later we’ll cover another cool header option: X-Frame-Options.

If you are interested in finding out more about Same Origin Policy or CORS, check out these links:
http://en.wikipedia.org/wiki/JSONP
http://geekswithblogs.net/codesailor/archive/2012/11/02/151160.aspx
https://blog.mozilla.org/services/2013/02/04/implementing-cross-origin-resource-sharing-cors-for-cornice/
https://developers.google.com/storage/docs/cross-origin
http://www.tsheffler.com/blog/?p=428
http://techblog.hybris.com/2012/05/22/cors-cross-origin-resource-sharing/
http://security.stackexchange.com/questions/8264/why-is-the-same-origin-policy-so-important
http://www.w3.org/TR/cors/
https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS
https://developer.mozilla.org/en-US/docs/Server-Side_Access_Control
http://www.bennadel.com/blog/2327-Cross-Origin-Resource-Sharing-CORS-AJAX-Requests-Between-jQuery-And-Node-js.htm

Installing OpenStack, Quantum problems

During the following weeks we plan to expand more on the subject of setting up an OpenStack cloud using Quantum.
For now we have been experimenting with different Quantum functionality and settings.
At first Quantum might look like a black box, not because of its complexity, but because it deals with several different plugins and protocols; if you are not familiar with them, it becomes hard to understand why Quantum is there in the first place.

In a nutshell, Quantum's role is to provide an interface for configuring the network of multiple VMs in a cluster.

In the last few years the lines between system, network, and virtualization admins have become really blurry.
The classical Unix admin is pretty much nonexistent nowadays, since most services are offered in the cloud, in virtualized environments.
And since everything seems to be migrating over to the cloud, some network principles that applied to physical networks in the past sometimes don't translate very well to virtualized networks.

Later we’ll have some posts explaining what technologies and techniques underlie the network configuration of a cloud, in our case focusing specifically on OpenStack and Quantum.

With that being said, below are a few errors that came up during the configuration of Quantum:

1. ERROR [quantum.agent.dhcp_agent] Unable to sync network state.

This error is most likely caused by a misconfiguration of the RabbitMQ server.
A few ways to debug the issue:
Check if the file /etc/quantum/quantum.conf on the controller node (where the quantum server is installed) has the proper rabbit credentials.

By default RabbitMQ runs on port 5672, so run:

[sourcecode]
netstat -an | grep 5672
[/sourcecode]

and check if the RabbitMQ server is up and running.

On the network node (where the quantum agents are installed), also check that /etc/quantum/quantum.conf has the proper rabbit credentials.

If you are running a multihost setup, make sure the rabbit_host var points to the IP where the rabbit server is located.

Just to be safe, check that you have a connection on the management network by pinging all the hosts in the cluster, and restart the quantum server and agents as well as the RabbitMQ server.
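
A minimal sketch of those checks (the rabbit_* option names are the standard Folsom settings; the IP is a placeholder for your controller's management address):

[sourcecode language="bash"]
# Is the RabbitMQ broker listening?
netstat -an | grep 5672

# Do the controller and network nodes agree on the broker location and credentials?
grep -E "^rabbit_(host|port|userid|password)" /etc/quantum/quantum.conf

# Is the management network reachable?
ping -c 3 192.168.0.11
[/sourcecode]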

2. ERROR [quantum.agent.l3_agent] Error running l3_nat daemon_loop

This error requires a very simple fix; however, it was very difficult to find information about the problem online.
Luckily, I found one thread on the Fedora project mailing list explaining the problem in more detail.

This error is due to the fact that keystone authentication is not working.
A quick explanation: the l3 agent makes use of the quantum HTTP client to interface with the quantum service.
This requires keystone authentication. If it fails, the l3 agent will not be able to communicate with the service.
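
A sketch of what to check (the file path and option names are the usual Folsom defaults, and the credentials and IP below are placeholders; adjust to your install):

[sourcecode language="bash"]
# Credentials the l3 agent uses to authenticate against keystone
grep -E "^(auth_url|admin_tenant_name|admin_user|admin_password)" /etc/quantum/l3_agent.ini

# Test the same credentials directly against keystone
keystone --os-username admin --os-password secret \
         --os-tenant-name admin \
         --os-auth-url http://192.168.0.11:5000/v2.0 token-get
[/sourcecode]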

Next, check that the quantum server is up and running.
By default the server runs on port 9696:

[sourcecode]
root@folsom-controller:/home/senecacd# netstat -an | grep 9696
tcp 0 0 0.0.0.0:9696 0.0.0.0:* LISTEN
tcp 0 0 192.168.0.11:9696 192.168.0.12:40887 ESTABLISHED
[/sourcecode]

If nothing shows up, the quantum server is down; try restarting the service to see if the problem goes away:

[sourcecode]
service quantum-server restart
[/sourcecode]

You can also probe the quantum server port from the network node (in a multihost scenario):

[sourcecode]
root@folsom-network:/home/senecacd# nmap -p 9696 192.168.0.11

Starting Nmap 5.21 ( http://nmap.org ) at 2013-01-28 08:07 PST
Nmap scan report for folsom-controller (192.168.0.11)
Host is up (0.00038s latency).
PORT STATE SERVICE
9696/tcp open unknown
MAC Address: 00:0C:29:0C:F0:8C (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
[/sourcecode]

3. ERROR [quantum.agent.l3_agent] Error running l3_nat daemon_loop – rootwrap error

I didn’t come across this bug myself, but I found a few people running into this issue.
Kieran already wrote a good blog post explaining the problem and how to fix it.

You can check the bug discussion here.

4. Bad floating ip request: Cannot create floating IP and bind it to Port , since that port is owned by a different tenant.

This is just a problem of mixed credentials.
Kieran documented the solution for the issue here.

There is also a post on the OpenStack wiki talking about the problem.

Conclusion

This should help fix the problems that might arise with a Quantum installation.
If anybody knows about any other issues with Quantum or has any suggestions about the problems listed above, please let us know!

Also check the official guide for other common errors and fixes.

Creating a Virtual Machine on Linux with KVM, QEMU and Virt

Nowadays there are quite a few different options available for virtualization on Linux; this blog post will be talking about KVM.

A quick summary of which software will be covered here:

KVM – Kernel Virtual Machine
QEMU – Quick Emulator
Virt – The virtualization API

When KVM and QEMU are used in conjunction, KVM takes care of virtualizing the CPU and memory management, while QEMU emulates all the other hardware resources, such as hard drives, video, CD-ROM, and peripherals.

Virt is built on top of libvirt; it provides a set of features to manage virtual machines.

1. Checking for support

Before installing any of the software listed above, you first need to check if your hardware supports virtualization.

[sourcecode language="bash"]
egrep '(vmx|svm)' --color=always /proc/cpuinfo
[/sourcecode]

That should output a list of flags if virtualization is enabled on your hardware:

[sourcecode]
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
[/sourcecode]

If the vmx flag is present your CPU is Intel; svm means AMD.
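
If you just want a yes/no answer, count the matches instead (0 means no VT extensions were detected):

[sourcecode language="bash"]
egrep -c '(vmx|svm)' /proc/cpuinfo
[/sourcecode]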

2. Installing KVM

After checking that the processor supports virtualization, you can start by installing KVM:

[sourcecode language="bash"]
yum install kvm kmod-kvm
[/sourcecode]

There are several versions of KVM; here is a list explaining which version is suitable for which need.

3. Installing QEMU

QEMU is not available in the default repositories enabled on CentOS; you need to enable the rpmforge-extras repository to get access to the QEMU package with yum.

To enable the repository:

[sourcecode language="bash"]
wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.i686.rpm
rpm -Uhv rpmforge*
[/sourcecode]

Then modify the file:

[sourcecode language="bash"]
sudo vim /etc/yum.repos.d/rpmforge.repo
[/sourcecode]

Set the enabled key for [rpmforge-extras] to 1:

[sourcecode language="bash"]
### Name: RPMforge RPM Repository for RHEL 6 - dag
### URL: http://rpmforge.net/
[rpmforge]
name = RHEL $releasever - RPMforge.net - dag
baseurl = http://apt.sw.be/redhat/el6/en/$basearch/rpmforge
mirrorlist = http://apt.sw.be/redhat/el6/en/mirrors-rpmforge
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1

[rpmforge-extras]
name = RHEL $releasever - RPMforge.net - extras
baseurl = http://apt.sw.be/redhat/el6/en/$basearch/extras
mirrorlist = http://apt.sw.be/redhat/el6/en/mirrors-rpmforge-extras
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge-extras
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1
[/sourcecode]

Now you should be able to run:

[sourcecode language="bash"]
sudo yum install qemu qemu-kvm
[/sourcecode]

And QEMU should be installed.

4. Loading the module

With KVM and QEMU installed, it is time to load the kvm module to start playing with the virtualization tools:

[sourcecode language="bash"]
sudo modprobe kvm-intel
[/sourcecode]

You might get the error:

ERROR:

FATAL: Error inserting kvm_intel (/lib/modules/2.6.32-279.11.1.el6.x86_64/kernel/arch/x86/kvm/kvm-intel.ko): Operation not supported

Well, but we checked before and the CPU supports virtualization, right?
Actually, most of the time the BIOS disables virtualization by default, so you need to modify the BIOS settings yourself.
Enabling virtualization is very simple; here is a good tutorial explaining the steps.

**Just restarting the computer didn’t work for me.
I had to shut down my computer and wait a few minutes for the new BIOS settings to take effect.

After enabling VT for your CPU, you can go ahead and load the module again:

[sourcecode language="bash"]
sudo modprobe kvm-intel
[/sourcecode]

To check if they were successfully loaded:

[sourcecode language="bash"]
lsmod | grep kvm
[/sourcecode]

You should see something like:

[sourcecode]
kvm_intel              52890  0
kvm                   314739  1 kvm_intel
[/sourcecode]

5. Adding your user to the KVM group

There are two ways to add your user to the KVM group.
The simplest and fastest:

[sourcecode language="bash"]
sudo usermod -G kvm -a diogogmt
[/sourcecode]

Or, if you prefer, you can add your user to the kvm group manually by editing /etc/group.
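
Either way, you can verify the membership afterwards (you may need to log out and back in for it to take effect; the username is from the example above):

[sourcecode language="bash"]
id diogogmt | grep kvm
[/sourcecode]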

Side Note:
To check the script that will automatically load the KVM module every time the computer is booted:
[sourcecode language="bash"]
cat /etc/sysconfig/modules/kvm.modules
[/sourcecode]

The contents of the file:
[sourcecode language="bash"]
#!/bin/sh

if [ $(grep -c vmx /proc/cpuinfo) -ne 0 ]; then
    modprobe -b kvm-intel >/dev/null 2>&1
fi

if [ $(grep -c svm /proc/cpuinfo) -ne 0 ]; then
    modprobe -b kvm-amd >/dev/null 2>&1
fi

modprobe -b vhost_net >/dev/null 2>&1

exit 0
[/sourcecode]

As you can see it checks if the CPU is Intel or AMD and then loads the appropriate module.

6. Installing Virt

The last step is to install Virt, the software that will allow us to manipulate and configure virtual machines from a nice, feature-rich API.

[sourcecode language="bash"]
sudo yum install libvirt python-virtinst virt-manager virt-viewer
[/sourcecode]

After installing the packages above you can restart the computer so the changes take effect, or start the libvirt service yourself:

[sourcecode language="bash"]
sudo service libvirtd start
[/sourcecode]

7. Running a VM

Now that everything is installed you can test it out by creating a Virtual Machine.
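
As a minimal sketch with virt-install (the name, sizes, and ISO path below are placeholders; adjust them to your environment):

[sourcecode language="bash"]
sudo virt-install \
  --name testvm \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=8 \
  --cdrom /path/to/CentOS-6.3-x86_64-minimal.iso \
  --graphics vnc
[/sourcecode]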

References:
https://fedoraproject.org/wiki/Getting_started_with_virtualization?rd=Virtualization_Quick_Start
http://linux.die.net/man/8/modprobe
http://www.sysprobs.com/disable-enable-virtualization-technology-bios
https://wiki.ubuntu.com/kvm
http://www.campworld.net/thewiki/pmwiki.php/LinuxServersCentOS/Cent6BaseServer
http://en.wikipedia.org/wiki/QEMU
http://www.linux-kvm.org/page/Guest_Support_Status

Taking screenshots on CentOS, gnome-screenshot util

By default, when CentOS is installed not all the GNOME utils are loaded on the system.
The screenshot util is one of the ones that is not loaded.
So trying to take a screenshot would fail:

ERROR:

There was an error running gnome-screenshot: Failed to execute child process “gnome-screenshot”

Only the utilities below were available:

  • gnome-about
  • gnome-about-me
  • gnome-appearance-properties
  • gnome-at-properties
  • gnome-at-visual
  • gnome-audio-profiles-properties
  • gnome-character-map
  • gnome-control-center
  • gnome-default-applications-properties
  • gnome-desktop-item-edit
  • gnome-display-properties
  • gnome-font-viewer
  • gnome-help
  • gnome-keybinding-properties
  • gnome-keyboard-properties
  • gnome-keyring
  • gnome-keyring-daemon
  • gnome-mouse-properties
  • gnome-network-properties
  • gnome-open
  • gnome-panel
  • gnome-power-bugreport.sh
  • gnome-power-manager
  • gnome-power-preferences
  • gnome-screensaver
  • gnome-screensaver-command
  • gnome-screensaver-preferences
  • gnome-session
  • gnome-session-properties
  • gnome-session-save
  • gnome-terminal
  • gnome-text-editor
  • gnome-thumbnail-font
  • gnome-typing-monitor
  • gnomevfs-cat
  • gnomevfs-copy
  • gnomevfs-df
  • gnomevfs-info
  • gnomevfs-ls
  • gnomevfs-mkdir
  • gnomevfs-monitor
  • gnomevfs-mv
  • gnomevfs-rm
  • gnome-volume-control
  • gnome-volume-control-applet
  • gnome-wacom-properties
  • gnome-window-properties
  • gnome-wm

As you can see, gnome-screenshot wasn’t there.

To install the gnome-screenshot util:

[sourcecode language="bash"]
sudo yum install gnome-utils
[/sourcecode]
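
Once installed, you can also invoke it from a terminal; a couple of common flags (assuming the version shipped with gnome-utils supports them):

[sourcecode language="bash"]
gnome-screenshot        # capture the whole screen
gnome-screenshot -w     # capture the current window only
gnome-screenshot -d 5   # wait 5 seconds before capturing
[/sourcecode]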

That should fix the problem and you should be able to take screenshots as you would normally expect.

The official webpage for the gnome-utils project