OpenRC running on Debian / kFreeBSD

I have always claimed that porting OpenRC to Debian GNU/kFreeBSD would be very easy. Well, now I can stop claiming and show it:

http://youtu.be/zoNoi8BgQjs

This was done within a few hours of working with upstream.

Next up: Debian GNU/Hurd. I hope porters will volunteer to do the work.

OpenStack Havana 2013.2 Debian packages available

OpenStack was released upstream today. Thanks to the release team, and a big up to TTX for his work.

By the time you read this, all of my uploads will probably have reached your local Debian mirror.

Please try Havana using either Sid from any Debian mirror, or the Wheezy backports available here:

deb http://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main

Yes, you will need *both* repositories. This is unofficial, though these are the exact same packages as in Sid, just rebuilt for Wheezy.

On the package side, here’s what is new:

– All packages that need it can now be configured through debconf for the RabbitMQ settings. This comes on top of what was already available for Grizzly: automated configuration of the Keystone auth token, the database, the API endpoints and much more. (Remember: this is fully optional, you can always use the non-interactive mode…)

– All Quantum plugin packages have been removed; everything is now self-contained in the neutron-common package. The plugin to use is selected directly with the core_plugin= directive in /etc/neutron/neutron.conf. This also controls the init.d script of neutron-server, so that it loads the corresponding ini file in /etc/neutron/plugins. The plugin selection is done through debconf, so that users don’t have to write the full path of the plugin class, which is (for most of them) very cryptic (am I the only one who thinks that writing neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 in a configuration file is not exactly user friendly?).

– All of the package descriptions and debconf templates have been reviewed by the Debian internationalization team, and most strings are translated into Czech, Danish, French, Italian, Japanese and Russian (sometimes more) for almost all packages (thanks everyone!).
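To make the plugin selection above concrete, here is roughly what it ends up looking like in the configuration (an illustrative excerpt, not the full file shipped by the package):

```ini
# /etc/neutron/neutron.conf (excerpt): the plugin class selected through
# debconf; the neutron-server init.d script uses this value to load the
# matching ini file from /etc/neutron/plugins.
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
```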

I’d like to publicly thank eNovance for sponsoring my packaging work, and Mehdi Abaakouk for his work on our CI with the Tempest tests.

Happy Havana release testing,
Please report bugs through the Debian BTS.

Jenkins remote build trigger (eg: from git push) tokens

After upgrading the Sid virtual machine hosting my Jenkins, builds triggered by git push stopped working. This is because version 1.503 and above require an auth token to trigger a build remotely. Since it took me some time to find this on the web, I’ve decided to blog about it to save other Jenkins users some time.

Under each project’s configuration screen, in the “Build Triggers” section, tick the “Trigger builds remotely (e.g., from scripts)” option, then enter a random token (I used a password generator for it). Then, in your post-receive hook, use the line below:

wget -q --no-check-certificate https://<jenkins-url>/job/heat/build?token=<your-token> -O -
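Wrapped into a complete post-receive hook, it looks something like this (the URL and token below are placeholder values; use your own Jenkins host, job name and the token you configured):

```shell
#!/bin/sh
# Sketch of a post-receive hook triggering a Jenkins build. The host,
# job name and token are placeholders: adjust them to your setup.
JENKINS_URL="https://jenkins.example.com"
JOB="heat"
TOKEN="some-random-token"

TRIGGER_URL="${JENKINS_URL}/job/${JOB}/build?token=${TOKEN}"
echo "Triggering ${TRIGGER_URL}"
# --no-check-certificate is only needed with a self-signed certificate.
wget -q --no-check-certificate "${TRIGGER_URL}" -O /dev/null \
    || echo "could not reach Jenkins"
```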

OpenStack 2013.2~rc1, aka Havana, fully available in Debian Experimental

Announcement

After a lot of work over the course of 4 months, I have finished packaging the first RC1 of OpenStack Havana. This comes right on time, just 9 days before the official Havana release. Please do try this RC1 before the official 2013.2, code name Havana, is released, and hopefully uploaded to Debian. All of the packages are available from Debian Experimental, keeping Grizzly in Sid. There are also some private repositories that I maintain, holding Wheezy backports:

deb http://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main

The first repository holds the packages maintained within the Alioth group; these are built directly by my Jenkins machine on each git push. The second repository holds backports from Sid to Wheezy of the packages that I don’t actively maintain (though a lot of them are in the Python modules team, in which I do a lot of packaging and updates as well).

A few numbers

A few numbers about all this. I had to work on 145 source packages: at the very least, backport them to Wheezy and push them to the GPLHost archive repository above. This generates 360 binary packages. Out of these, I maintain 77 source packages within the Alioth OpenStack group, generating 209 .deb files. That’s a lot of stuff to deal with (and I sometimes feel a bit dizzy about it). While OpenStack is a big jigsaw puzzle for its users to solve, it is even more so for someone who has to deal with all the (sometimes buried in the code) Python dependencies. I hope others will join me in this packaging effort, since over time there is more and more work to be done as the project grows. Note that most of the work unfortunately goes into packaging (and updating) the Python dependencies; working on the OpenStack packages themselves comes last, at the end of the cycle.

Other things not packaged (yet)

Before the release (and the forthcoming Hong Kong summit on the 5th of November), I hope to be able to finish packaging TripleO. TripleO is in fact OpenStack on OpenStack, which works with nova-baremetal. I have no idea yet how to test or install it, though it sounds like a lot of fun. There are 6 source packages that need to be done. Also pending in the FTP masters’ NEW queue is Trove: Database as a Service. I hope this one can get through soon. There is also Marconi, an incubated project for a new message queuing service, which will probably replace RabbitMQ (I’m not sure yet what it does, and I will be happy to hear about it at the summit). Lastly, there’s Ironic, which will at some point replace nova-baremetal: that is, it does cloud computing on bare metal, without virtualization.

All of these new projects are still in an incubation stage, and are not part of the official release yet. Though, I have learned over the course of this past year that with OpenStack, it’s never too early to start the packaging work.

Thanks to my sponsor!

Please note that none of this would be possible without eNovance sponsoring my packaging work. A big up to all of them for supporting and loving Debian! You guys rox. Also a special thanks to Mehdi / Sileht, for his work testing everything with the Tempest functional tests and the CI platform.

My old 1024 bits key is dead, please use 0xAC6B43FE

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi,

I am not using my old GPG key, 0x98EF9A49 anymore. My new key, using
4096 SHA2 256,
with fingerprint:

A0B1 A9F3 5089 5613 0E7A  425C D416 AD15 AC6B 43FE

has replaced the old one in the Debian keyring. Please don't encrypt
message to me using the old key anymore.

Since the idea is that we shouldn't trust 1024 bits keys anymore, I'm
not signing this message with the old key, but only with the new one,
which has gathered enough signatures from Debian Developers (more than a
dozen).

Thomas Goirand (zigo)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQIcBAEBCAAGBQJSVC02AAoJENQWrRWsa0P+3wAP/i2ORGgXMoQVtjoUNX+x/Ovz
yoNSLztmih4pOLw9+qHJfM+OkBKUPwrkyjgBWkwD2IxoM2WRgNZaY5q/jBEaMVgq
psegqAm99zkX0XJTIYfqwOZFA1JLWMi1uLJQO71j0tkJWPzBSa6Jhai81X89HKgq
PqQXver+WbORHkYGIWwBvwj+VbPZ+ssY7sjbdWTaiMcaYjzLQR4s994FOFfTWH8G
5zLdwj+lD/+tBH90qcB9ETlbSE1WG4zBwz5f4++FcPYVUfBPosE/hcyhIp6p3SPK
8F6B51pUvqwRe52unZcoA30gEtlz+VNHGQ3yF3T1/HPlfkyysAypnZOw0md6CFv8
oIgsT+JBXVavfxxAJtemogyAQ/DPBEGuYmr72SSav+05BluBcK8Oevt3tIKnf7Q5
lPTs7lxGBKI0kSxKttm+JcDNkm70+Olh6bwh2KUPBSyVw0Sf6fmQdJt97tC4q7ky
945l42IGTOSY0rqdmOgCRu8Q5W1Ela9EDZN2jPmPu4P6nzqIRHUw3gS+YBeF1i+H
/2jw4yXSXSYQ+fVWJqNb5R2raR37ytNWcZvZvt4gDxBWRqnaK+UTN6tdF323HKmr
V/67+ewIhFtH6a9W9mPakyfiHqoK6QOyOhdjQIzL+g26QMrjJdOEWkqzvuIboGsw
OnyYVaKsZSFoKBs0kOFw
=qjaO
-----END PGP SIGNATURE-----

Why use Mailman when MLMMJ is available?

Daniel Pocock just wrote a blog post about how to set up Mailman for virtual hosting. Well, it strikes me that Mailman is a bad solution, for many reasons. First, it forces you to use @lists.example.com addresses for your lists instead of @example.com. I’m not sure that is mandatory, but it is how mostly everyone does the setup, and the only kind of Mailman setup I’ve seen. I think that’s really ugly: any mailbox should be fine, IMO.

What I find particularly lame about Mailman is that these issues (plus the ones Daniel listed) have been known for YEARS, yet nobody came up with a patch to fix them. And it’s really not hard. How do I know? Well, because I’ve been using MLMMJ for years without any of these troubles. The current situation, where everyone uses Mailman, is really LAME.

Not only is MLMMJ better because it is easier to install and supports virtual hosting out of the box, it is also written in C and much faster than Mailman. MLMMJ has been used on high-traffic lists such as those of SUSE and Gentoo. The fact that some major sites decided to switch isn’t proof that MLMMJ is perfect, but it is a good indication that it at least works well without too much trouble.

Also, with Mailman, you have to use the subject line to control your list subscriptions and send commands to it. No need for that with MLMMJ, because everything is controlled with the mailbox extension. For example, mylist+subscribe@example.com can be used to subscribe (instead of writing to mylist-requests@lists.example.com and filling in the subject line, as with Mailman).

So, if you don’t like some of the (bad) limitations of Mailman, and would like to try something faster and easier to set up, give MLMMJ a try (see mlmmj.org for more details, and the README.Debian inside my package).

OpenStack Havana b2 available, openstack-debian-images approved

I have finished preparing the beta 2 of the next release of OpenStack. It is currently only available from our Git on Alioth (in /git/openstack), and directly from my Jenkins repository, which builds Wheezy backports of it:

deb ftp://havana.pkgs.enovance.com/debian havana main
deb http://archive.gplhost.com/debian havana-backports main

As with every OpenStack release, a large number of Python modules needed to be packaged and are waiting in the FTP masters’ NEW queue to be approved: oslo-sphinx, python-django-discover-runner, python-hacking, python-jsonrpclib, python-lesscpy, python-neutronclient, python-nosehtmloutput, python-requestbuilder, python-termcolor, sphinxcontrib-httpdomain and sphinxcontrib-pecanwsme. Let’s hope they will be approved before the next beta release in September (when OpenStack Havana will be in feature freeze). As for the total number of packages maintained by the OpenStack team (of which I am really the only active maintainer for the moment…), there are 53 packages, plus these 11 waiting in the NEW queue. That’s a big number of packages, and I wouldn’t mind some help…

One thing that annoyed the whole community is that Quantum, the OpenStack network virtualization component, had to be renamed to Neutron because of a trademark held by Quantum (you probably remember the Quantum Fireball hard drives? Well, it’s the same company…).

Another piece of good news is that my openstack-debian-images package has just been approved and landed in Sid. With it, you can automate the process of building a Debian image for OpenStack with a simple shell command (there’s a man page that I wrote for it: read it if you need to build images). It is made of a single shell script which builds the image using kpartx, parted, mbr, debootstrap, extlinux and friends. I tried to keep it simple, without involving a huge number of components.
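In practice, building an image boils down to one command. A sketch (the --release flag is an assumption on my part; check the man page shipped with the package for the real options):

```shell
# Sketch only: run the image builder if it is installed. The --release
# flag is an assumption; see the openstack-debian-images man page for
# the actual options.
if command -v build-openstack-debian-image >/dev/null 2>&1; then
    sudo build-openstack-debian-image --release wheezy
else
    echo "openstack-debian-images is not installed, nothing to do"
fi
```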

With the release of cloud-init 0.7.2-3, I have fixed a few bugs (3 important bugs, out of which 2 were RC bugs), thanks to contributions on the debian-cloud@lists.debian.org mailing list. This includes adding new init.d scripts, so we now have support for user data. This doesn’t only benefit OpenStack images, but anyone willing to start virtual machines in the cloud (nowadays, every cloud implementation needs cloud-init installed in the virtual images). It means you can include a script in the metadata of the virtual machine you start, and it will be executed at startup. If everything goes as planned (that is, no new RC bug), I will upload an update of cloud-init to backports in 5 days (there is already a version there, but it doesn’t have the init.d scripts necessary to execute user data scripts), and openstack-debian-images in 9 days. Then it will be possible to build OpenStack images with absolutely all the tools available from Wheezy (and backports). I hope to be able to discuss this during DebConf 13.
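As an example, this is the kind of user data script cloud-init will pick up and run on the first boot of the instance (the marker path is just for illustration):

```shell
#!/bin/sh
# Hypothetical user-data script: cloud-init runs anything starting
# with "#!" as a shell script on the first boot of the instance.
echo "configured on first boot: $(date -u)" > /tmp/first-boot-marker
```

You would typically pass such a file when starting the instance, for example with nova boot’s --user-data option.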

The “v” sickness is spreading

It seems to be a new fashion. Instead of tagging software with a normal version number, many upstreams add a one-letter prefix: instead of version 0.1.2, it becomes version v0.1.2.

This sickness has spread all around GitHub (to mention only the biggest host), from one repository to the next, from one author to the next. It has consequences, because GitHub (and others) conveniently provide tarballs built from Git tags. The tarball name then becomes packagename-v0.1.2.tar.gz instead of packagename-0.1.2.tar.gz. I’ve even seen worse, like tags called packagename-0.1.2, so that the tarball becomes packagename-packagename-0.1.2.tar.gz. Consequently, we have to work around this with mangling in our debian/watch files and so on (and probably in debian/gbp.conf if you use that…). This is particularly true when upstream doesn’t make tarballs and only provides tags on GitHub (which is really fine by me, as long as the tags are made in a logical way). Worse: I’ve seen this v-prefixing disease given as an example in some howtos.
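For reference, here is the kind of workaround this forces into a debian/watch file (the GitHub account and package names are made up; the filenamemangle rewrites the downloaded tarball name without the v):

```
version=3
opts=filenamemangle=s/.+\/v?(\d\S*)\.tar\.gz/foo-$1\.tar\.gz/ \
  https://github.com/example/foo/tags .*/v?(\d\S*)\.tar\.gz
```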

What’s wrong with you guys? Where is this v sickness coming from? Have you guys watched too much of the V 2009 TV series, and become fans of the Visitors? How come a version number isn’t just made of numbers? Or does the “v” stand for the virus of prefixing release names with a “v”?

So, if you are an upstream author reading Planet Debian, with your software packaged in Debian, and you have caught the bad virus of prefixing your version numbers with a v, please give up on it. Adding a “v” to your tags is meaningless anyway, and it just annoys us downstream.

Edit: some people pointed me to some (IMO wrong) reasons for prefixing version numbers. My original post was only half serious, and responding with facts and common sense breaks the fun! :) Anyway, the silliest one is that Linus has been doing it. I won’t comment on that one; it’s obvious that it’s not a solid argument. The second one is tab completion. Well, if your bash-completion script is broken, fix it so that it does what you need, rather than working around the problem by polluting your tags. The 3rd argument was about merging 2 repositories. First, I have never had to merge 2 completely different repos, and I very much doubt that this is an operation you have to do often. Second, if you merge the repositories, the tags lose all meaning, and I don’t really think you will need them anyway. The last one was about working with submodules. I haven’t done that, and it might be the only case where it makes sense, though it has nothing to do with prefixing with “v” (you would need a much smarter approach, like prefixing with project names, which in that case makes sense). So I stand by my post: prefixing with “v” makes no sense.

Compute node with 256 GB of RAM and 2 CPUs with 6 cores each (24 threads total)

Will that be enough? Let’s load some VMs in that beast! :)


dtc-xentop: monitoring of CPU, I/O and network for your Xen VMs

What has always annoyed me with Xen is that xentop is… well… a piece of shit! It just displays cumulative totals: the number of sectors or network bytes read/written since the VM started. But as an administrator, what you care about is knowing which of your VMs is taking all the resources and making your whole server starve. The totals since the VM started are of very low importance; what you want is an idea of the current transfer rate. And the same applies to networking.

So, tonight, within a few hours, I hacked a small Python script using ncurses to do what I needed: show how much of each resource has been used over the last 5 seconds (and not since the VM started). This way, it is easy to spot which VM is killing your server.
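The whole trick boils down to sampling a cumulative counter twice, 5 seconds apart, and reporting the delta. A minimal sketch of the idea (using total CPU jiffies from /proc/stat as a stand-in for the real per-VM Xen counters):

```shell
# Sample a cumulative counter twice, 5 seconds apart, and print the
# difference: a rate over the last interval instead of a total since
# boot. /proc/stat is only a stand-in for the per-VM Xen statistics.
sample() { awk '/^cpu /{print $2+$3+$4+$5}' /proc/stat; }
a=$(sample)
sleep 5
b=$(sample)
echo "jiffies used over the last 5s: $((b - a))"
```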


The script is adapted to my own needs only, which means it works only for DTC-Xen VMs on GPLHost servers. In my case, each VM uses exactly 2 partitions, one for the filesystem and one for the swap, so that is exactly what I display. I’m sure it wouldn’t be hard to adapt it to work in all cases (which would mean finding out which devices a VM uses and getting the statistics from /sys using that information, instead of deriving it from the name of the VM). But I don’t need that, so the script will stay this way.

Before writing this tonight, I didn’t know ncurses. Well, it’s really not hard, especially in Python! It took me about 2 hours to write^Whack the script (cheating by reusing the dtc-xen SOAP server which I already had available).