Wednesday, May 18, 2011

OpenStack Deployment on Fedora using Kickstart

Overview


In this article, we discuss our approach to performing an OpenStack installation on Fedora using our RPM repository and Kickstart. When we first started working with OpenStack, the most popular platform for deploying it was Ubuntu, which seemed like a viable option for us: packages are available for it, along with plenty of documentation. However, because our internal infrastructure runs on Fedora, we decided to make OpenStack Fedora-friendly instead of migrating the whole infrastructure to Ubuntu. The challenge with Fedora is that there are no packages, nor is there much documentation available. Details of how we worked around these limitations are discussed below.


OpenStack RPM Repository



Of course, installing everything from source and bypassing the system's package manager is always an option, but this approach has some limitations:

  • OpenStack has a lot of dependencies, so it's hard to track them all
  • Installations that bypass the system's package manager take quite some time (compared to running a single yum install)
  • When some packages are installed from repositories, and some are installed from sources, managing upgrades can become quite tricky


Because of these limitations, we decided to create RPMs for Fedora. To avoid reinventing the wheel, we based these RPMs on the RHEL6 OpenStack packages, as RHEL6 and Fedora are fairly similar. Two sets of packages are available, covering different OpenStack versions:

  • Cactus - the latest official release
  • Hourly - hourly builds from trunk


There are two key metapackages:

  • node-full: installs a complete cloud controller infrastructure, including RabbitMQ, dnsmasq, etc.
  • node-compute: installs only the compute node services

To use the repository, just install the RPM:
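A minimal sketch of this step follows; the package URL is a placeholder for the actual location of the repository RPM in the Cactus or hourly repository:

    # Install the repository definition RPM (placeholder URL)
    rpm -Uvh http://example.com/openstack/openstack-repo.noarch.rpm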




In addition to installing everything with a single "yum install" command, we also need to perform the configuration. For a bare-metal installation, we've created a Kickstart script. Kickstart by itself is a set of answers for the automated installation of a Fedora distribution. We use it for automated host provisioning with PXE. The post-installation part of the Kickstart script was extended to include the OpenStack installation and configuration procedures.


Cloud Controller



To begin with, you can find the post-installation part of the Kickstart file for deploying a cloud controller below.
There are a few basic settings you will need to change. In our case, we are using a MySQL database.
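A sketch of such a settings block is shown below; all of the variable names and values are placeholders to adapt to your environment:

    # --- Basic settings (placeholders; adjust to your environment) ---
    CC_IP=10.0.0.1            # cloud controller IP address
    DB_NAME=nova              # MySQL database for nova
    DB_USER=nova              # MySQL user for nova
    DB_PASS=nova_password     # MySQL password for nova
    MYSQL_ROOT_PASS=root_password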




Your server must be accessible by hostname, because RabbitMQ uses "node@host" identification. OpenStack also uses hostnames to register services, so if you want to change the hostname, you must stop all nova services and RabbitMQ, then start them again after making the change. Make sure you set a resolvable hostname.
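For example, the %post section might pin the hostname explicitly (the name below is a placeholder):

    # Make sure the hostname resolves (placeholder name)
    HOSTNAME=cloud-controller.example.com
    echo "$CC_IP $HOSTNAME" >> /etc/hosts
    hostname "$HOSTNAME"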

Add the required repos and install the cloud controller.
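In the Kickstart %post section this boils down to something like the following (the repository URL is again a placeholder):

    # Add the OpenStack repository and pull in the full controller stack
    rpm -Uvh http://example.com/openstack/openstack-repo.noarch.rpm
    yum install -y openstack-nova-node-full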




qemu 0.14+ is needed to support creating custom images.
(Update: the Fedora 15 release already ships qemu 0.14.0 in its repository.)
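On Fedora releases before 15, you need to pull in a suitably new qemu yourself; a minimal sketch (the package name assumes stock Fedora repositories):

    # Ensure qemu >= 0.14 for custom image support (unnecessary on Fedora 15+)
    yum install -y qemu-kvm
    rpm -q qemu-kvm   # verify the installed version is at least 0.14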




If you're running nova under a non-privileged user ("nova" in this case), the libvirt configs should be changed to give the nova services access to the libvirtd unix socket. Access over TCP is required for live migration, so all of our nodes should have read/write access to the TCP socket.
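A sketch of the corresponding changes is below; the exact values are illustrative, and auth_tcp = "none" in particular should only be used on a trusted network:

    # Give the "nova" user access to the libvirtd unix socket
    sed -i 's/^#\?unix_sock_group.*/unix_sock_group = "nova"/' /etc/libvirt/libvirtd.conf
    sed -i 's/^#\?unix_sock_rw_perms.*/unix_sock_rw_perms = "0770"/' /etc/libvirt/libvirtd.conf
    # Enable the TCP socket for live migration (trusted network only)
    sed -i 's/^#\?listen_tcp.*/listen_tcp = 1/' /etc/libvirt/libvirtd.conf
    echo 'auth_tcp = "none"' >> /etc/libvirt/libvirtd.conf
    # libvirtd must be started with --listen for the TCP socket to appear
    echo 'LIBVIRTD_ARGS="--listen"' >> /etc/sysconfig/libvirtd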




Now we can apply our db credentials to the nova config and generate the root certificate.
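A sketch, assuming the Cactus-style flagfile format of /etc/nova/nova.conf and a CA helper script shipped with the packages (the CA path is an assumption):

    # Point nova at our MySQL database (Cactus flagfile syntax)
    echo "--sql_connection=mysql://$DB_USER:$DB_PASS@$CC_IP/$DB_NAME" >> /etc/nova/nova.conf
    # Generate the root certificate (the path depends on the packaging)
    (cd /usr/share/nova/CA && ./genrootca.sh)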




And finally, we add the services to "autostart", prepare the database, and run the migration. Don't forget to set the root password for the MySQL server.
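A sketch of this final step; the service names come from our packaging and may differ in yours, so treat them as illustrative:

    # Enable services at boot (illustrative service names)
    for svc in mysqld rabbitmq-server libvirtd openstack-nova-api \
               openstack-nova-scheduler openstack-nova-network openstack-nova-objectstore; do
        chkconfig "$svc" on
    done
    # Prepare the database and run the schema migration
    service mysqld start
    mysqladmin -u root password "$MYSQL_ROOT_PASS"
    mysql -u root -p"$MYSQL_ROOT_PASS" -e "CREATE DATABASE $DB_NAME;"
    mysql -u root -p"$MYSQL_ROOT_PASS" -e \
        "GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'%' IDENTIFIED BY '$DB_PASS';"
    nova-manage db sync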





Compute Node


The Compute Node script is much simpler:
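A sketch of the compute node settings block (all names and values are placeholders):

    # --- Compute node settings (placeholders) ---
    CC_IP=10.0.0.1            # cloud controller address (MySQL, RabbitMQ, APIs)
    DB_NAME=nova
    DB_USER=nova
    DB_PASS=nova_password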




The config section differs very little: there is a cloud controller IP variable, which points to the full nova infrastructure and other supporting services, such as MySQL and RabbitMQ.
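A sketch of the installation step (the repository URL is a placeholder):

    # Add the OpenStack repository and install only the compute services
    rpm -Uvh http://example.com/openstack/openstack-repo.noarch.rpm
    yum install -y openstack-nova-node-compute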




That code is very similar to the cloud controller script, except that it installs the openstack-nova-node-compute package instead of node-full.
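The compute node's nova.conf then points its services at the controller; a sketch in the Cactus flagfile style (the flag set shown is illustrative):

    # Point this compute node at the cloud controller
    echo "--sql_connection=mysql://$DB_USER:$DB_PASS@$CC_IP/$DB_NAME" >> /etc/nova/nova.conf
    echo "--rabbit_host=$CC_IP" >> /etc/nova/nova.conf
    echo "--s3_host=$CC_IP" >> /etc/nova/nova.conf
    chkconfig libvirtd on
    chkconfig openstack-nova-compute on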




You must change the Cloud Controller IP address (the CC_IP variable) for the Compute Node installation.

IMPORTANT NOTE: All of your compute nodes must keep their clocks synchronized with the cloud controller, since heartbeat control depends on it.
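One way to keep clocks in sync is to run ntpd against the controller; a sketch (package and service names assume stock Fedora):

    # Sync time with the cloud controller via NTP
    yum install -y ntp
    echo "server $CC_IP" >> /etc/ntp.conf
    chkconfig ntpd on
    service ntpd start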