RDO & packstack – the Knight of the chain saw

Oh well. This project called RDO, where you’d use packstack to install OpenStack on a bunch of your CentOS systems easily and repeatably, well … if you don’t do an --allinone installation, it is the best excuse to just go and pay Piston or Mirantis to set up your gear for you.

I’m used to some oddities in bleeding-edge stuff. Not that I considered OpenStack Havana bleeding edge before this week … After quite some tweaking and reading I got Ceph up and running. Then I manually applied the official patch for openstack-packstack bug 1031167, which had arrived only the day before yesterday, to my openstack-packstack-2013.2.1-0.17.dev876.el6.noarch installation:

--- /usr/lib/python2.6/site-packages/packstack/puppet/modules/vswitch/lib/puppet/provider/vs_bridge/ovs_redhat.rb.orig  2013-12-03 14:53:30.000000000 +0100
+++ /usr/lib/python2.6/site-packages/packstack/puppet/modules/vswitch/lib/puppet/provider/vs_bridge/ovs_redhat.rb       2013-12-19 09:49:02.637749599 +0100
@@ -19,7 +19,6 @@
 
   def create
     vsctl("add-br", @resource[:name])
-    set_resiliency
     ip("link", "set", @resource[:name], "up")
     external_ids = @resource[:external_ids] if @resource[:external_ids]
   end
@@ -28,16 +27,6 @@
     vsctl("del-br", @resource[:name])
   end
 
-  private
-
-  def set_resiliency
-
-  end  
-
-  def _split(string, splitter=",")
-    return Hash[string.split(splitter).map{|i| i.split("=")}]
-  end
-
   def external_ids
     result = vsctl("br-get-external-id", @resource[:name])
     return result.split("\n").join(",")
@@ -53,4 +42,10 @@
       end
     end
   end
+
+  private
+
+  def _split(string, splitter=",")
+    return Hash[string.split(splitter).map{|i| i.split("=")}]
+  end
 end
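For the record, applying the fix by hand is unspectacular; something like this should do (the path to the saved diff is my choice for illustration):

# apply the saved diff directly to the file named in its header;
# "patch <file> <patchfile>" targets the file explicitly, -b keeps a backup
cd /usr/lib/python2.6/site-packages/packstack/puppet/modules/vswitch/lib/puppet/provider/vs_bridge
patch -b ovs_redhat.rb /tmp/bug1031167.patch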

With the patch in place, packstack happily announced full success. Well, of course it didn’t work out. Nagios was happy with everything but Cinder, and indeed, the Volumes tab in the Dashboard greeted me with errors like “Unable to retrieve default quota values”. Sure enough, the Cinder node’s cinder logs cried murder:

2013-12-19 23:19:02.596 9426 TRACE cinder.api.middleware.fault ProgrammingError: (ProgrammingError) (1146, "Table 'cinder.volumes' doesn't exist") 'SELECT volumes.created_at AS volumes_created_at, volumes.updated_at AS volumes_updated_at, …

MySQL confirmed it: there was a database called cinder, but it was otherwise empty:

mysql> connect cinder
Connection id:    736
Current database: cinder

mysql> show tables;
Empty set (0.00 sec)

I did steps 3 and 4 of this guide to fix it; a cinder user had already been created in Keystone. (How I hate having to guess what scripts did and what they failed to do.)
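From what I can reconstruct, the missing part amounted to creating the tables; on an RDO box, something along these lines should do it (a sketch, not a transcript of my shell history):

# populate the empty cinder database with the current schema
cinder-manage db sync
# restart the Cinder services so they pick up the now-existing tables
for svc in api scheduler volume; do service openstack-cinder-$svc restart; done

Now cinder seems to be happy: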

[root@node01 ~(keystone_admin)]# cinder service-list
+------------------+----------+------+---------+-------+----------------------------+
|      Binary      |   Host   | Zone |  Status | State |         Updated_at         |
+------------------+----------+------+---------+-------+----------------------------+
|  cinder-backup   | node03   | nova | enabled |   up  | 2013-12-19T23:01:04.000000 |
| cinder-scheduler | node03   | nova | enabled |   up  | 2013-12-19T23:01:06.000000 |
|  cinder-volume   | node03   | nova | enabled |   up  | 2013-12-19T23:01:03.000000 |
+------------------+----------+------+---------+-------+----------------------------+

Next station: uploading images to Glance (which, I think, would use Cinder). Don’t hold your breath; it failed, of course:

2013-12-20 00:09:03.310 11147 TRACE glance.notifier.notify_qpid ConnectionError: connection-forced: Connection must be encrypted.(320)

The Big G didn’t help, but editing /etc/qpidd.conf, setting “require-encryption” to “no”, and running “/etc/init.d/qpidd restart” did.
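For the impatient, the same edit as a one-liner (reconstructed, not copied from my history):

# drop qpid's encryption requirement and restart the broker
sed -i 's/^require-encryption=.*/require-encryption=no/' /etc/qpidd.conf
/etc/init.d/qpidd restart

With that, the upload went through: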

[root@node01 ~(keystone_admin)]# glance image-create --name 'CentOS 6.4 x86_64' --disk-format qcow2 --container-format bare --is-public true <c6-x86_64-20130910-1.qcow2.bz2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 9d6cbaf6fe7a1f3c2384ea40075ffc76     |
| container_format | bare                                 |
| created_at       | 2013-12-19T23:14:01                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 6827212a-1753-4b10-9329-53cde60afec5 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | CentOS 6.4 x86_64                    |
| owner            | b6dd0b4d858b4159aab665f08ee5b635     |
| protected        | False                                |
| size             | 228441501                            |
| status           | active                               |
| updated_at       | 2013-12-19T23:14:03                  |
+------------------+--------------------------------------+

Finally, I was able to boot a VM, but I’m currently locked out of it: it sits on a separate tenant network (like a VPC on AWS), and I do not yet understand how to make it accessible …

[root@node01 ~(keystone_admin)]# nova show node-4
ERROR: No server with a name or ID of 'node-4' exists.
[root@node01 ~(keystone_admin)]# nova show 0e7e73ea-2d1e-4624-8ffa-29fcdccd9d13
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-19T23:46:21Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | node03                                                   |
| key_name                             | ksiering                                                 |
| image                                | CentOS 6.4 x86_64 (6827212a-1753-4b10-9329-53cde60afec5) |
| hostId                               | 27152a121573ae9ef295a45434f434a7a0dc9e6fb1f729b80c0d5d73 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000004                                        |
| OS-SRV-USG:launched_at               | 2013-12-19T23:46:21.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node03.some.funny.doma.in                                |
| flavor                               | m1.large (4)                                             |
| net-1 network                        | 10.128.0.2                                               |
| id                                   | 0e7e73ea-2d1e-4624-8ffa-29fcdccd9d13                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 4e9203e193c2409cad0c09b98b2186fc                         |
| name                                 | node-4                                                   |
| created                              | 2013-12-19T23:46:09Z                                     |
| tenant_id                            | cf009c826d4846e6b3ac43c920b8f164                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | []                                                       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+
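Getting back to the locked-out VM: if I read the Neutron docs right, the usual way in is a security-group rule for SSH plus the floating-IP dance. Untested on my setup, and the external network name (“ext-net”) is an assumption:

# allow SSH to instances in the default security group
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# allocate a floating IP from the external network (name assumed) ...
neutron floatingip-create ext-net
# ... and bind it to the instance (novaclient syntax of the day)
nova add-floating-ip 0e7e73ea-2d1e-4624-8ffa-29fcdccd9d13 <the-allocated-address>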

To put it in a nutshell: RDO/OpenStack via packstack is, in theory, on the Right Way(tm) — as is Ceph with ceph-deploy. Unfortunately for today’s users, it feels like they have just started their marathon while others have already completed half of it and are gaining speed … If you think you can run OpenStack without getting to know the nitty-gritty details of how it works (*snip, snip*) … I’m afraid RDO/packstack isn’t your tool of choice.

The Piston people, on the other hand, have networking expectations I first have to check with the networking guys (you do not want to accidentally take over a DC ;)); but from the blueprint it looks like they run the hypervisor (HV) part just as an initrd, which really is appealing. The next thing to try out, though, will be Mirantis’ Fuel …