Testing OpenStack using tempest: all is packaged, try it yourself

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another led me to package it fully

After packaging a release of OpenStack, it’s kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already deployed OpenStack installation. I used to do that on real hardware, provided by my employer. But since I’ve lost my job (I’m still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list, asking for such hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net. And packet.net gave me an account on their system. I am amazed how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I had planned for a few weeks to give a big thanks to the above people (they really deserve it!), this isn’t the only goal of this post. The goal is to introduce how to run your own tempest CI on your own machine. Since I have been in the situation where my CI had to move twice, I decided to industrialize it, and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all, and uploaded it to the archive. Here’s how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting up a full OpenStack environment takes a lot of time, mostly spent on disk I/O. And on a live system, everything runs on a RAM disk, so installing under this environment is the fastest way one can do it. This is what I did when working with Mirantis: I had a real baremetal server, which I was PXE booting on a Debian Live system. However nice, this requires access to 2 servers: one for running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 NICs, one on the internet, and one for booting the 2nd server that will run the Live system. It was not possible to have such a specific setup at Packet, so I decided to replicate this using KVM, so it would become portable. And since the servers at packet.net are very fast, not running on baremetal isn’t much of an issue anymore.

Anyway, let’s dive into setting up all of this.

Network topology

We’ll assume that one of your interfaces has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with 192.168.100.1, and the Debian Live KVM will use 192.168.100.2. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to see the parameters).

Configure the host machine

First install the openstack-tempest-ci-live-booter package. It depends at runtime on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and everything needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both, more on iPXE later). So, let’s do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module, run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set 192.168.100.1 for the bridge IP, bridge the tempestnic0 and tempesttap
  • configure tftpd-hpa to listen on 192.168.100.1
  • configure isc-dhcp-server to offer 192.168.100.2 on tempestbr, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup though.

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick, and manually configure your 2nd interface).
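
For reference, here is a rough manual equivalent of the dummy NIC and bridge setup described above. This is only a sketch: the actual script may use different commands, and the tempesttap interface only gets attached to the bridge once the KVM machine is started.

modprobe dummy
ip link set name tempestnic0 dev dummy0
ip link add tempestbr type bridge
ip addr add 192.168.100.1/24 dev tempestbr
ip link set tempestnic0 master tempestbr
ip link set tempestnic0 up
ip link set tempestbr up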

Also, openstack-tempest-ci-live-booter provides a /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change 192.168.100.1/24 to something else. The script will pick up the interface that is connected to the default gateway by itself.
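
If you are curious, the NAT part boils down to something like this minimal sketch, assuming eth0 carries the default route (the init script detects the interface by itself, and may of course do more than this):

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE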

The dhcp server is configured to support both legacy PXE and the new iPXE standard. I had to support iPXE, because that’s what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It’s rather easy to set up once you understand how it works.
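
The dhcpd.conf trick usually looks something like the sketch below (the file names here are illustrative, not necessarily what the package ships):

if exists user-class and option user-class = "iPXE" {
    # the client is already iPXE (like the KVM ROM): hand it the iPXE script
    filename "http://192.168.100.1/boot.ipxe";
} else {
    # legacy PXE firmware: boot the regular way over TFTP
    filename "lpxelinux.0";
}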

Build the live image

Now that the PXE server is configured, it’s time to build the Debian live image. Simply do this to build the image, and copy its resulting files in the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to log into that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make it so that this key-pair can later be used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub into the ~/.ssh folder).
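
For example, something like this (a hedged sketch of the two steps just described, generating a fresh key-pair if you don’t already have one to drop in):

ssh-keygen -t rsa -N '' -f id_rsa      # or drop an existing id_rsa / id_rsa.pub here
cp -n id_rsa id_rsa.pub ~/.ssh/        # -n: don't overwrite an existing key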

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and finally run the whole tempest suite. There are two ways to run it: either install the openstack-tempest-ci package, optionally configure it (in /etc/default/openstack-tempest-ci), and simply run the “openstack-tempest-ci” command. Or, you can skip the installation of the package, and simply run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all the scripts from source into the Debian Live machine before using them. The reason it does that is that we want to avoid the situation where a modification needs to be uploaded to Debian before being able to test it, and it was also needed to be able to run the openstack-tempest-ci script without installing a package (which would need root access that I don’t have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitely, feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory, and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.
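
For example, something like this (a sketch only: the suite name here is my assumption, based on the fact that the script upgrades the Live system to sid):

cat > sources.list <<EOF
deb http://ftp.nl.debian.org/debian sid main
EOF
./openstack-tempest-ci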

There’s cloud, and it can even be YOURS on YOUR computer

Each time I see the FSFE picture, just like on Daniel’s last post to planet.d.o, where it says:

“There is NO CLOUD, just other people’s computers”

it makes me so frustrated. There’s such a thing as a private cloud, set up on your own set of servers. I’ve been working on delivering OpenStack to Debian for the last six and a half years, motivated exactly to fix this issue: I refuse that the only cloud people could use would be a closed source solution like GCE, AWS or Azure. The FSFE (and the FSF) completely dismissing this work is more than annoying: it is counterproductive. Not only should the FSFE not pull anyone away from the cloud, it should push the public to choose cloud providers using free software like OpenStack.

The openstack.org marketplace lists 23 public cloud providers using OpenStack, so there is now no excuse to use any other type of cloud: for sure, there’s one where you need it. If you use a free software solution like OpenStack, then the question of whether you’re running on your own hardware, on some rented hardware (on which you deployed OpenStack yourself), or on someone else’s OpenStack deployment is just a practical one, from which you can always move away quickly. That’s one of the very reasons why one should deploy on the cloud: so that it’s possible to redeploy quickly on another cloud provider, or even on your own private cloud. This gives you more freedom than you ever had, because it makes you no longer dependent on the hosting company you’ve selected: switching provider is just a matter of launching a script. The reality is that neither the FSFE nor RMS understands all of this. Please don’t buy into the FSFE’s very wrong message.

Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

OpenStack Newton is released, and uploaded to Sid

OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn’t disrupt Sid users too much, but 38 packages wouldn’t build without it. Thanks to Santiago Vila for pointing at the issue here.

As of writing, a lot of the Newton packages haven’t migrated to Testing yet. They have been migrating in a very messy way. I’d love to improve this process, but I’m not sure how, short of filing RC bugs against 250 packages (which would be painful to do) so they would migrate at once. Suggestions welcome.

Bye bye Jenkins

For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Then, nearly a year and a half ago, we started a project to build the packages within the OpenStack infrastructure, and use the CI/CD the same way OpenStack upstream does. This is done, and Jenkins is gone, as of OpenStack Newton.

Current status

As of August, almost all of the packages’ Git repositories were uploaded to OpenStack Gerrit, and the build now happens in the OpenStack infrastructure. We’ve been able to build all of the OpenStack Newton Debian packages using this system. The resulting non-official jessie backports repository has also been validated using Tempest.

Goodies from Gerrit and upstream CI/CD

It is very nice to have it built this way, so we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from third-party plugin vendors (networking driver vendors, for example), or from upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome.

The upstream infra: nodepool, zuul and friends

The OpenStack infrastructure has already been described on planet.debian.org, by Ian Wienand. So I won’t describe it again, he did a better job than I ever would.

How it works

All source packages are stored in Gerrit with the “deb-” prefix. This is in order to avoid conflicts with upstream code, and to easily locate packaging repositories. For example, you’ll find the Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of each repository exists on every cloud where we have compute resources): one for the actual deb-* builds, under “jessie-newton”, and one for the automatic backports, maintained in the deb-auto-backports gerrit repository.

We’re using a “git tag” based workflow. Every Gerrit repository contains all of the upstream branches, plus a “debian/newton” branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using “git archive”, then used by sbuild to produce binaries. To package a new upstream release, one simply needs to “git merge -X theirs FOO” (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do “git commit -a --amend”, and simply “git review”. At this point, the OpenStack CI will build the package. If it builds correctly, then a core reviewer can approve the “merge commit”, the patch is merged, then the package is built and the binary package published on the OpenStack Debian package repository.
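
Spelled out, the new-upstream-release flow looks like this (a sketch: FOO stands for the upstream tag, and the changelog edit is shown here with dch from devscripts, but any editor does):

git merge -X theirs FOO
dch -v FOO-1 "New upstream release."   # make the Debian version match the tag
git commit -a --amend
git review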

Maintaining backports automatically

The automatic backports are maintained through a Gerrit repository called “deb-auto-backports”, containing a “packages-list” file that simply lists the source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and dpkg --compare-versions magic, the packages-list is used to compare what’s in the Debian archive and what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn’t exist, then a build is triggered. There is the possibility to backport from any Debian release (using the -d flag in the “packages-list” file), and we can even use jessie-backports to just rebuild the package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we’ll try to never use it, and rebuild packages when possible).
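
The comparison boils down to something like this for each entry of packages-list (illustrative only: the variable names and the trigger_build helper are hypothetical):

if dpkg --compare-versions "$version_in_debian" gt "$version_in_our_repo"; then
    trigger_build "$source_package"
fi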

The nice thing with this system, is that we don’t need to care much about maintaining packages up-to-date: the script does that for us.

Upstream Debian repositories are NOT for production

The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. It is the only reason why such Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience failures to download (hash or file size mismatches and such). Also, the functional tests aren’t yet wired into the CI/CD in OpenStack infra, and therefore, we cannot yet guarantee that the packages are usable.

Improving the build infrastructure

There’s a bunch of things which we could do to improve the build process. Let me give a list of things we want to do.

  • Get sbuild pre-set-up in the Jessie VM images, so we can save 3 minutes per build. This means writing a diskimage-builder element for sbuild.
  • Have the infrastructure use a state-of-the-art Debian ftpsync mirror, instead of the current reprepro mirroring which produces an unsigned repository, which we can’t use for sbuild-createchroot. This will improve things a lot, as currently, there are lots of build failures because of httpredir.debian.org mirror inconsistencies (and these are a very frustrating waste of time).
  • For each packaging change, there are 3 builds: the check job, the gate job, and the POST job. This is a waste of time and resources, as we only need to build a package once. It will hopefully be possible to fix this when the OpenStack infra team deploys Zuul 3.

Generalizing to Debian

During Debconf 16, I had very interesting talks with the DSA (Debian System Administrator) about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC is there now, within OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with “git review”.
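
To give an idea, proposing a packaging patch roughly looks like this (a hedged sketch; the repository name is the one from the “How it works” section above, and you need git-review installed):

git clone https://git.openstack.org/openstack/deb-nova
cd deb-nova
git checkout debian/newton
# hack on the packaging, commit, then:
git review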

This system, however, currently only fits the “git tag” based packaging workflow. We’d have to do a little bit more work to make it possible to use pristine-tar (basically, allow pushing to the upstream and pristine-tar branches without any CI job connected to the push).

Dear DSA team, as we now have a nice PoC that is working well, on which the OpenStack PKG team is maintaining hundreds of packages, shall we try to generalize it and provide such an infrastructure for every packaging team and DD?

Announcing validated Debian packages for Mitaka

Greetings! This is a (4 days late) copy of the announcement I made on the openstack-dev@lists.openstack.org list on the 8th of April 2016.

I am overjoyed, thrilled and delighted to announce the release of the Debian packages for Mitaka.

All of the DefCore packages were validated successfully this morning through our package-only-based Tempest CI.

Content of this release
This release includes the following 23 services:
aodh 2.0.0
barbican 2.0.0
ceilometer 6.0.0
cinder 8.0.0
congress 3.0.0+dfsg1
designate 2.0.0
glance 12.0.0
gnocchi 2.0.2
heat 6.0.0
horizon 9.0.0
ironic 5.1.0
keystone 9.0.0
magnum 2.0.0
manila 2.0.0
mistral 2.0.0
murano 2.0.0
neutron 8.0.0
nova 13.0.0
trove 5.0.0
sahara 4.0.0
senlin 1.0.0
swift 2.7.0
zaqar 2.0.0

Where to find these packages
1/ Sid
All of Mitaka was uploaded to Debian Sid this week. You can use Debian Sid directly to use them.

2/ Official jessie-backports
As soon as everything migrates to Debian Testing (currently aka: Stretch), in 5 days if no RC bug is reported, it will be possible to upload all of Mitaka to the Debian official jessie-backports.

3/ Non-official Jessie and Trusty backports
In the meantime, the packages are available through Mirantis Jenkins automatic Debian Jessie backport repository. The full sources.list is available here:

http://mitaka-jessie.pkgs.mirantis.com/

You can use the Trusty backports as well:

http://mitaka-trusty.pkgs.mirantis.com/

To use these repositories, simply add the described sources.list to (for example) /etc/apt/sources.list.d/openstack.list, and run apt-get update. If you want to install the GPG key of the repositories, you can either install the mitaka-jessie-archive-keyring or mitaka-trusty-archive-keyring package (depending on your distribution of choice). Alternatively “apt-key add” the public key available at /debian/dists/pukey.gpg in these repositories.
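
For example, enabling the Jessie repository can look like this (a hedged sketch: the deb line shown is the one from the Mitaka b1 announcement further down this page, so check the URL above for the authoritative sources.list; apt may warn about an untrusted repository until the keyring package or the key is installed):

cat > /etc/apt/sources.list.d/openstack.list <<EOF
deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
EOF
apt-get update
apt-get install mitaka-jessie-archive-keyring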

As a reminder, the URLs above contain the word “Mirantis” only because the service is sponsored by my employer. These repositories are “straight” backports from what is available in Debian Sid, without any modification.

Remember that the packages listed below are maintained separately in Debian and Ubuntu, and therefore, packages are different in these distributions:
aodh, barbican, ceilometer, cinder, designate, glance, heat, horizon, ironic, keystone, manila, neutron, nova, trove, swift.

All other packages (including all OpenStack libraries like Oslo and python-*clients) are maintained in Debian, with the contribution of Canonical, and then synced to Ubuntu, so they are the exact same packages (or at least, with a minimal difference). I hope we can further improve collaboration between Debian and Canonical during the Newton cycle.

Bug reporting
As always, bug reports are welcome, and considered as high value contributions. Please follow the instructions available at https://www.debian.org/Bugs/Reporting to report bugs to the Debian BTS.

Moving forward with higher QA and the Packaging-deb project in Newton
Currently, DefCore packages are tested through a package-only (ie: no puppet, chef, you-name-it… system management involved) Tempest CI. Results can be seen at:
https://mitaka-jessie.pkgs.mirantis.com/job/openstack-tempest-ci/

Not all packages are included in this CI, though. It is my intention, during the Newton cycle, to also include services like Designate, Trove, Barbican, Congress, etc. in this CI. The individual upstream teams for these services are more than welcome to approach us to make this happen quicker.

Also, as we’re slowly starting to get the Packaging-Deb project going (ie: packaging using upstream OpenStack gerrit and gating), it is also in the pipeline to use the above mentioned tempest CI system as a gate for the packaging. Hopefully, this will lead us to a full CI/CD working from trunk. We also hope to be able to use these packages to help the Puppet team test packaged OpenStack from trunk.

Greetings
On each release, I ask myself who I should thank. This time, I would like to thank everyone, because this release overall went very nicely and everything worked well. The whole OpenStack community is always very helpful and understands the requirements of downstream distributions. Guys, you’re awesome, I love my work, and I love working with you all!

Cheers,

Moby

Just a quick reply to Rhonda about Moby. You can’t introduce him without mentioning Go, which is the title that made him famous, very early in the age of electronic music (November 1990, according to Wikipedia). Many have attempted to remix this song (including Moby himself), but nothing’s as good as the original version.

Django upgrades are always a pain

For a few years now, I have been maintaining some python-django-* packages, as part of the maintenance of the OpenStack dashboard, Horizon. Currently, this consists of: python-django-appconf, python-django-babel, python-django-bootstrap-form, python-django-compressor, python-django-discover-runner, python-django-formtools, python-django-openstack-auth, python-django-overextends, python-django-pyscss.

By far, Django has been one of the biggest pain points. It moves too fast, deprecating its own API from one minor version to the next, at the rate of one minor release every 6 months.

As Django 1.9 was uploaded to Sid, a bunch of problems appeared. The Django 1.9 release notes explain it all: a large chunk of its API gets removed (look for “Features removed” in that page). I had to fix a few issues: the last one I fixed was #807346 (in django-openstack-auth), which needed 2 patches. Amusingly, the patch I wrote looks the same as what is currently under review by one of the upstream authors. I still have #807355 to fix though, and that one is more complex. To fix it, I have to package the latest commit of django-compressor, and:

  • Upgrade python-coffin to the latest version
  • Package and upload django-overextends
  • Get django-sekizai upgraded (I already interacted with the BTS to inform its package maintainer)
  • Add hand-crafted patches (ie: not from upstream) to stop using django.utils.unittest and django.utils.importlib

Even after doing all of this, django-compressor still doesn’t build (unit test failures) with lots of errors ending with this:

File "some-path/build-area/python-django-compressor-1.6+2015.12.15.git.66feba0db5/compressor/management/commands/compress.py", line 162, in compress
    followlinks=options.get('followlinks', False)):
  File "/usr/lib/python2.7/os.py", line 278, in walk
    names = listdir(top)
TypeError: coercing to Unicode: need string or buffer, Origin found

I tried fixing this last one, but have failed so far (if anyone can help, please do…). This was just the upgrade from 1.8 to 1.9, and it doesn’t include some of the issues fixed earlier (when Django 1.9 was only in Experimental and easy fixes were written). All this to say: Django upgrades are always painful.

As I always say: the Linux kernel is so much more complex than these kinds of Python modules, and yet, they don’t allow themselves to break the userland API. Why do most Python developers believe that it’s OK to do so? It isn’t possible to separate private and public API clearly in Python (like it is with the kernel). So it isn’t uncommon that library users start using non-public functions, classes or methods. For that, it is understandable that there are breakages (when someone uses something that isn’t meant to be used by library users). But that’s the only case where it is, and there’s no excuse to break a known, used public API. Django upstream authors, if you read me, please stop breaking the world every 6 months! And no, your deprecation messages are not an excuse. If you made a design mistake in the past, that’s no excuse either. Too bad… you’ll have to live with it until the end of time and find a work-around.

OpenStack: Mitaka beta 1 packages available, Liberty uploaded to Jessie Backports

OpenStack Mitaka beta 1 Debian packages available

I didn’t find the time to announce it until today, though I finished packaging Mitaka Beta 1 last Friday. It is available, as usual, on the Jenkins server Debian repository:

deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
deb-src http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports-nochange main
deb-src http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports-nochange main

Not all of the updated packages available above have been uploaded to Debian Experimental; mostly, those needing to pass the FTP masters’ NEW queue did. I will upload the rest as I find enough time to do so, which unfortunately may not happen before Mitaka b2 (which will be in the middle of January).

OpenStack Liberty uploaded to jessie-backports

Also, python-repoze.who 2.x could finally migrate to Debian testing (after the dependencies filed for removal got removed by the FTP masters), then python-pysaml2 3.0, and then Keystone did as well. So this week-end, all of OpenStack Liberty reached testing. And today, I could finally upload all of OpenStack Liberty to the official jessie-backports repository. This is 165 packages that I uploaded, out of which 135 are going through the backports NEW queue. I’m sorry to give that much work to the FTP masters, but most OpenStack users do want to use the latest release of OpenStack on top of the latest stable distribution. So this upload really is what OpenStack Debian users will prefer (until we have PPA^Wbikesheds for Debian).

OpenStack Liberty and Debian

Long overdue post

It’s been a long time since I last wrote here. And lots of things have happened in the OpenStack planet. As a full time employee with the mission to package OpenStack in Debian, it feels like it is kind of my duty to tell everyone about what’s going on.

Liberty is out, uploaded to Debian

Since my last post, OpenStack Liberty, the 12th release of OpenStack, was released. In late August, Debian was the first platform which included Liberty, as I proudly outran both RDO and Canonical. So I was the first to make the announcement that Liberty passed most of the Tempest tests with the beta 3 release of Liberty (the Beta 3 is always kind of the first pre-release, as this is when feature freeze happens). Though I never made the announcement that Liberty final was uploaded to Debian, it was done just a single day after the official release.

Before the release, all of Liberty was living in Debian Experimental. Following the upload of the final packages in Experimental, I uploaded all of it to Sid. This represented 102 packages, so it took me about 3 days to do it all.

Tokyo summit

I had the pleasure to be in Tokyo for the Mitaka summit. I was very pleased with the cross-project sessions during the first day. Lots of these sessions were very interesting for me. In fact, I wish I could have attended them all, but of course, I can’t split myself in 3 to follow all of the 3 tracks.

Then there were the 2 sessions about Debian packaging on upstream OpenStack infra. The goal is to set up the OpenStack upstream infrastructure to allow packaging using Gerrit, and gating each git commit using the usual tools: building the package and checking there’s no FTBFS, running checks like lintian, piuparts and such. I already knew the overview of what was needed to make it happen. What I didn’t know was the implementation details, which I hoped we could figure out during the 1:30 slot. Unfortunately, this didn’t happen as I expected, and we discussed more general things than I wished. I was told that just reading the docs from the infra team would be enough, but in reality, it was not. What currently needs to happen is building a Debian based image, using diskimage-builder, which would include the usual tools to build packages: git-buildpackage, sbuild, and so on. I’m still stuck at this stage, which would be trivial if I knew a bit more about how upstream infra works, since I already know how to set up all of that on a local machine.
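
For what it’s worth, such an image build would probably look something like this with diskimage-builder (a sketch under assumptions: the debian and vm elements exist, but the packaging tools would still need a custom element, which is exactly the part that is missing):

export DIB_RELEASE=jessie
disk-image-create -o debian-pkg-builder debian vm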

I’ve been told by Monty Taylor that he would help. Though he’s always a very busy man, and to date, he still hasn’t found enough time to give me a hand. Nobody replied to my request for help on the openstack-dev list either. Hopefully, with a bit of insistence, someone will help.

Keystone migration to Testing (aka: Debian Stretch) blocked by python-repoze.who

Absolutely all of OpenStack Liberty, as of today, has migrated to Stretch. All? No. Keystone is blocked by a dependency chain. Keystone depends on python-pysaml2, itself blocked by python-repoze.who. The latter I upgraded to version 2.2. Though python-repoze.what depends on version <= 1.9, which is blocking the migration. Since python-repoze.who-plugins, python-repoze.what and python-repoze.what-plugins aren’t used by any package anymore, I asked for them to be removed from Debian (see #805407). Until this request is processed by the FTP masters, Keystone, which is the most important piece of OpenStack (it does the authentication), will be blocked from migrating to Stretch.

New OpenStack server packages available

In my presentation at DebConf 15, I quickly introduced new services which were released upstream. I have since packaged them all:

  • Barbican (Key management as a Service)
  • Congress (Policy as a Service)
  • Magnum (Container as a Service)
  • Manila (Filesystem share as a Service)
  • Mistral (Workflow as a Service)
  • Zaqar (Queuing as a Service)

Congress, unfortunately, was not accepted to Sid yet, because of some licensing issues, especially with the doc of python-pulp. I will correct this (remove the non-free files) and reattempt an upload.

I hope to make them all available in jessie-backports (see below). For the previous release of OpenStack (ie: Kilo), I skipped the uploads of services which I thought were not really critical (like Ironic, Designate and more). But from the feedback of users, they would really like to have them all available. So this time, I will upload them all to the official jessie-backports repository.

Keystone v3 support

For those who don’t know about it, Keystone API v3 means that, on top of users and tenants, there’s a new entity called a “domain”. All of Liberty now comes with Keystone v3 support. This includes the automated Keystone catalog registration done using debconf for all *-api packages. As far as I could tell by running tempest on my CI, everything still works pretty well. In fact, Liberty is, in my experience, the first release of OpenStack to support Keystone API v3.
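
On the client side, the new domain entity shows up in the credentials. A hedged example (all values are placeholders):

export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=secret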

Uploading Liberty to jessie-backports

I have rebuilt all of Liberty for jessie-backports on my laptop using sbuild. This is more than 150 packages (166 packages currently). It took me about 3 days to rebuild them all, including unit tests run at build time. As soon as #805407 is closed by the FTP masters, all that’s remaining (mostly Keystone) will be available in Stretch, and the upload will be possible. As there will be a lot of NEW packages (from the point of view of backports), I do expect that the approval will take some time. Also, I have to warn the original maintainers of the packages that I don’t maintain (for example, those maintained within the DPMT) that, because of the large number of packages, I will not be able to go through the usual communication to tell them I’m uploading to backports. If you see a package that you maintain, and that you wish to upload the backport of yourself, please let me know. Here’s the, hopefully exhaustive, list of packages that I will upload to jessie-backports, and that I don’t maintain myself:

alabaster contextlib2 kazoo python-cachetools python-cffi python-cliff python-crank python-ddt python-docker python-eventlet python-git python-gitdb python-hypothesis python-ldap3 python-mock python-mysqldb python-pathlib python-repoze.who python-setuptools python-smmap python-unicodecsv python-urllib3 requests routes ryu sphinx sqlalchemy turbogears2 unittest2 zzzeeksphinx.

More than ever, I wish I could just upload these to a PPA^W Bikeshed, to minimize the disruption for both the backports FTP masters, other maintainers, and our OpenStack users. Hopefully, Bikesheds will be available soon. I am sorry to give that much approval work to the backports FTP masters, however, using the latest stable system with the latest release, is what most OpenStack users really want to do. All other major distributions have specific repositories too (ie: RDO for CentOS / Red Hat, and cloud archive for Ubuntu), and stable-backports is currently the only place where I can upload support for the Stable release.

Debian listed as supported distribution on openstack.org

Good news! If you go to http://www.openstack.org/marketplace/distros/ you will see a list of supported distributions. I am proud to be able to tell that, after 6 months of lobbying from my side, Debian is also listed there. The process of having Debian there included talking with folks from the OpenStack foundation, and having Bdale sign an agreement so that the Debian logo could be reproduced on openstack.org. Thanks to Bdale Garbee, Neil McGovern, Jonathan Brice, and Danny Carreno, without whom this wouldn’t have happened.

There are a lot of things I’d like to blog about: the last version of OpenStack, the OpenStack Liberty design summit, Kilo in the official jessie-backports repositories, etc. Maybe the most interesting part of this blog post is the last bit at the end, about a major change in the packaging workflow for OpenStack in Debian. Please read on…

OpenStack release names reminder
Just a reminder to make it easier for the average Debian reader who may know Debian well, but not OpenStack.

OpenStack 2014.1 is Icehouse, and is the version in Jessie. 2014.2 is Juno, and was released right before the freeze of Jessie. 2015.1.0 is what was released just after Jessie, on the 30th of April. Liberty, which will probably be called 12 (as this will be the 12th release of OpenStack), and not 2015.2 (this was discussed in Vancouver), will be released in about 5 months from now.

The last summit, in Vancouver, BC, Canada, was the Liberty summit, as the OpenStack conventions are always named after the next release (since we are discussing what we will be doing during the next development cycle).

OpenStack 2015.1.0, aka Kilo, release in Debian
5 days after the release of Jessie, OpenStack 2015.1.0, aka Kilo, was released. Since I couldn’t upload to unstable during the freeze, I was holding a lot of packages, and when I did upload them, there were about 20 packages of mine in the FTP masters’ NEW queue. Though, since the DSA want to use OpenStack for the Debian infrastructure, the 20 packages were fast-tracked into Sid, thanks to the work of Paultag (thanks man!).

OpenStack Kilo in the official Jessie backports
Previously, I was only uploading OpenStack packages to Debian unstable, and maintaining a non-official Debian repository for backports to Debian stable. However, for multiple reasons, this wasn’t satisfactory.

Then, after packages migrated to Stretch, I started to upload to Debian backports. And right before the summit, almost everything went in. Only python-pysaml2 was missing (as I discovered too late that version 2.0.0 breaks Keystone which needs version 2.4.0). In fact, the last bits of the Kilo release reached jessie-backports in the middle of the OpenStack Liberty summit.

Removal of the Debian install-guide from the official site
As there was not enough effort going into the documentation, unfortunately, the link to the Debian install-guide has been removed from docs.openstack.org. IMO, this is mostly due to bad communication between myself and the doc team, and also because one person who promised to work on the Debian side of the install-guide failed to warn everyone that he ultimately couldn’t (as his managers assigned him to something else).

I hope this will soon be reverted. During the Vancouver summit, I had the opportunity to discuss with the doc team about re-inclusion of the Debian install-guide. Unfortunately, as they are moving away from the XML source format to a more standard RST-based system, the current documentation is frozen, so it seems more realistic to hold off until all of the install-guide is switched to RST.

OpenStack Debian image listed on apps.openstack.org
There’s a new area on openstack.org where images and apps for OpenStack are listed. Under the “glance image” tab, you will see that both the Jessie and the weekly testing images are listed. There’s also a nice, easily identifiable Debian logo linking to these images.

Also, as there are trademark problems with the Ubuntu images which make them harder to redistribute, the Murano project (which ships a system to automatically install apps within a few clicks on an OpenStack cloud) decided to switch to Debian for their base image.

Debian listed in the OpenStack market place
On the openstack.org site, there’s a section called Marketplace. In there, vendors supporting OpenStack are listed. To get there, a vendor needs to 1/ have a defined set of OpenStack projects supported by the distribution (Debian already supports way more than the required set), 2/ sign some kind of agreement with the OpenStack foundation, and 3/ pay some sponsoring money. During the summit, I discussed this with Jonathan Bryce, from the OpenStack foundation, and he agreed that Debian would not have to pay for this (since we aren’t a big company with big money).

I have put Jonathan and Neil (our Debian Project Leader) in touch so that signing the document may happen, though since we were all busy with the summit, I do not expect Jonathan to send the documents right away. Hopefully, this will be fixed before the end of this month of May 2015.

Debian (and Ubuntu) packages collaboratively maintained upstream
Since about forever (forever is 5 years in the OpenStack world…), I have pushed for more collaboration on OpenStack packaging between Debian package maintainers and Canonical. However, for some reasons which I do not wish to expand on in this blog post, it has been socially hard to do so. Also, Canonical always used BZR, which wasn’t to everyone’s taste.

But during the Liberty summit, some very good things happened. First of all, Launchpad is now able to support Git (it has for a few weeks now, in fact). Even though it will take a bit of time before the Canonical server team switches to it, we can consider that this problem is already out of the way.

Then it looks like Canonical are now more open than before to collaboration with Debian on the OpenStack packaging. Note that we actually did some work together already, but now we both would like a full alignment of *all* of our packages.

I have discussed this with James Page, who is the head of Canonical’s server team. We will first start to do so on the dependencies: this includes all of the python-*client libraries, but also all of python-oslo.* (the Oslo libs are used by all of the projects and kind of unify them), plus all the third party dependencies the projects rely on. James already pushed new versions of some Oslo libraries to Experimental (in order to not overwrite Kilo), which add transition packages needed for Ubuntu. We won’t need those in Debian, but we want to welcome them to keep the same source packages.

We will then later try to merge the core projects if we can. Unfortunately, since the packaging of the core projects (ie: Nova, Neutron, Cinder, Glance, etc.) was forked, merging will probably be a bit painful. We will have to make some decisions on how this happens. I am however confident that it will be done during the Liberty release cycle.

Move of the packaging to upstream Gerrit
A few weeks after the summit, I wrote a proposal to the upstream OpenStack dev list, with the subject: “Adding packaging as an OpenStack project”. What it means is that I have proposed to have the Debian/Ubuntu packaging happen in upstream infrastructure, using Gerrit, and building packages using the upstream cloud. We will add all the tests we can, like building with unit tests, lintian, piuparts, adequate, but also maybe a full installation of the packages with functional tests.

My proposal is here:
http://lists.openstack.org/pipermail/openstack-dev/2015-May/064848.html

As everything, this translates into a Gerrit review process:
https://review.openstack.org/#/c/185187/

As you can read in the above thread, Fedora/RDO people, who have used a Gerrit workflow for a long time already, would also like to join. But it looks like we’ll be doing 2 teams: one for RPMs and one for debs.

The proposal is currently under review by the OpenStack technical committee, which will accept (or not) that the packaging project can be fully considered as an OpenStack project. I expect a final answer next Tuesday. Note that if they decline, we can still use the stackforge namespace instead; their decision is just about the TC blessing the project as being OpenStack or not.

What’s very nice about this is that not only will we have better collaboration between Debian & Ubuntu, and better automated testing and Q/A, this also opens contributions to potentially anyone. Especially, we welcome operations people, those who are doing actual big deployments.

Sure, it was possible before, but I often had the feedback that many were scared to break anything when trying to contribute. Thanks to the CI/CD from upstream infra, and the Gerrit peer review process, it won’t be a problem anymore. So we do expect operations people to contribute more. I will also push more “upstream packaging” within Mirantis, so that MOS (Mirantis OpenStack) aligns fully with Debian & Ubuntu as well.

Another good thing is that it will be easier for the puppet team to support Debian (they historically were more Ubuntu oriented), and it’s going to be super easy for them to request packaging fixes. I hope we will be able to work hand in hand with them, adding puppet deployment checks in the packaging repo, and packaged deployments within the puppet Gerrit review process.

@Erich Schubert: why not try packaging Hadoop in Debian?

Erich,

 

This is a follow-up on your blog post, where you complain about the state of Hadoop. First, I couldn’t agree more with all you wrote. All of it! But why not try to get Hadoop into Debian, rather than only complaining about the state of things?

 

I have recently packaged and uploaded Sahara, which is OpenStack’s big data as a service (in other words: running Hadoop as a service on an OpenStack cloud). It’s working well, though it was a bit frustrating to discover exactly what you complained about: the operating system cloud image needed to run within Sahara can only be downloaded as a pre-built image, which is impossible to check. It would have been so much work to package Hadoop that I just gave up (and frankly, packaging all of OpenStack in Debian is enough work for a single person doing the job… so no, I don’t have time to do it myself).

OpenStack Sahara already provides the reproducible deployment system which you seem to wish for. We “only” need Hadoop itself.