
systemd backport of v230 available for Debian/jessie

July 28th, 2016

At DebConf 16 I was working on a systemd backport for Debian/jessie. Results are officially available via the Debian archive now.

In Debian jessie we have systemd v215 (which upstream-wise originally dates back to 2014-07-03, plus changes and fixes from the pkg-systemd folks, of course). Now, via Debian backports, you have the option to update systemd to a very recent version: v230. If you have jessie-backports enabled it’s just an `apt install systemd -t jessie-backports` away. For the upstream changes between v215 and v230 see upstream’s NEWS file.
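In case jessie-backports isn’t enabled yet, a minimal sketch of the setup looks like this (the mirror hostname is just an example, use your preferred Debian mirror):

```shell
# /etc/apt/sources.list.d/jessie-backports.list
deb http://ftp.debian.org/debian jessie-backports main

# then update and pull systemd from backports:
#   apt update
#   apt install systemd -t jessie-backports
```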

(Actually the systemd backport has been available since 2016-07-19 for amd64, arm64 + armhf; for mips, mipsel, powerpc, ppc64el + s390x we had to fight against GCC ICEs when compiling on/for Debian/jessie, and for the i386 architecture the systemd test-suite identified broken O_TMPFILE permission handling.)

Thanks to Alexander Wirt from the backports team for accepting my backport, thanks to intrigeri for the related apparmor backport, Guus Sliepen for the related ifupdown backport and Didier Raboud for the related usb-modeswitch/usb-modeswitch-data backports. Thanks to everyone testing my systemd backport and reporting feedback. Thanks a lot to Felipe Sateler and Martin Pitt for reviews, feedback and cooperation. And special thanks to Michael Biebl for all his feedback, reviews and help with the systemd backport from its very beginnings until the latest upload.

PS: I cannot stress enough how fantastic Debian’s pkg-systemd team is. Responsive, friendly, helpful, dedicated and skilled folks, thanks folks!

DebConf16 in Cape Town/South Africa: Lessons learnt

July 19th, 2016

DebConf 16 in Cape Town/South Africa was fantastic for many reasons.

My Cape Town/South Africa/Culture/Flight related lessons:

  • Avoid flying on Sundays (especially in/from Austria where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually turn back your seat on the flight when trying to sleep and not forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. “turn off the lights” signs at many places around the campus), several toilets flush all their water even for just small™ business, and two big lights in front of a main building seem to shine all day long for no apparent reason
  • There doesn’t seem to be a standard for the side of hot vs. cold water-taps
  • Soap pieces and towels on several toilets
  • For pedestrians the green phase at traffic lights is very short (~2-3 seconds); then blinking red lights indicate that you can continue walking across the street (but *should* not start walking) until it’s fully red again (not that many people seem to care about the rules anyway :))
  • Warning lights of cars are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese quacking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT, as well as in every other place I visited at UCT, worked excellently (measured ~22-26 Mbit/s downstream in my room, around 26 Mbit/s in the hacklab) (kudos!)
  • WLAN is available even on top of the Table Mountain (WLAN working and being free without any registration)
  • Number26 credit card is great to withdraw money from ATMs without any extra fees from common credit card companies (except for the fee the ATM itself charges but displays ahead on-site anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and the money beaming using the Number26 mobile app

My technical lessons from DebConf16:

  • ran into way too many yak-shaving situations, some of them might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch queue branches in git): very nice to be able to work with plain git and then generate patches from your changes. Keeping upstream patches (like cherry-picks) inside debian/patches/ and the Debian-specific changes inside debian/patches/debian/ is a lovely idea; this can be easily achieved via “Gbp-Pq: Topic debian” with gbp’s pq and is used e.g. in pkg-systemd. Thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc --download $URL_TO_DSC` more often
  • features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery --running $interface` to get the live configuration of a network interface, JSON support (`ifquery --format=json …`) and Mako templates support to generate configuration for plenty of interfaces
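The gbp-pq workflow from the list above can be sketched roughly like this (branch names are gbp’s defaults; treat this as an illustration, not a full reference):

```shell
# turn debian/patches/ into git commits on a patch-queue branch
gbp pq import            # creates and switches to patch-queue/master

# hack on the patches as plain git commits (rebase, amend, ...)

# write the commits back as files into debian/patches/ + series
gbp pq export

# a commit carrying "Gbp-Pq: Topic debian" in its message ends up
# in debian/patches/debian/ instead of the top-level debian/patches/
```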

BTW, thanks to the video team the recordings from the sessions are available online.

My talk at OSDC 2016: Continuous Integration in Data Centers – Further 3 Years Later

May 26th, 2016

Open Source Data Center Conference (OSDC) was a pleasure and a great event; Netways clearly knows how to run a conference.

This year at OSDC 2016 I gave a talk titled “Continuous Integration in Data Centers – Further 3 Years Later”. The slides from this talk are available online (PDF, 6.2MB). Thanks to the Netways folks a recording is available as well:

This embedded video doesn’t work for you? Try heading over to YouTube.

Note: my talk was kind of an update and extension for the (german) talk I gave at OSDC 2013. If you’re interested, the slides (PDF, 4.3MB) and the recording (YouTube) from my talk in 2013 are available online as well.

Event: DebConf 16

April 18th, 2016


Yes, I’m going to DebConf 16! This year DebConf – the Debian Developer Conference – will take place in Cape Town, South Africa.


2016-06-26 15:40 VIE -> 17:10 LHR BA0703
2016-06-26 21:30 LHR -> 09:55 CPT BA0059


2016-07-09 19:30 CPT -> 06:15 LHR BA0058
2016-07-10 07:55 LHR -> 11:05 VIE BA0696

Event: OSDC 2016

March 31st, 2016


Open Source Data Center Conference (OSDC) is a conference on open source software in data centers and huge IT environments and will take place in Berlin/Germany in April 2016. I will give a talk titled “Continuous Integration in Data Centers – Further 3 Years Later” there.

I gave a talk titled “Continuous Integration in data centers“ at OSDC in 2013, presenting ways to realize continuous integration/delivery with Jenkins and related tools. Three years later we have gained new tools in our continuous delivery pipeline, including Docker, Gerrit and Goss. Over the years we also had to deal with various problems caused by faster release cycles, a growing team and new projects. We therefore established code review in our pipeline, improved our test infrastructure and invested in our infrastructure automation. In this talk I will discuss the lessons we learned over the last years, demonstrate how a proper continuous delivery pipeline can improve your life, and show how open source tools like Jenkins, Docker and Gerrit can be leveraged to set up such an environment.

Hope to see you there!

Revisiting 2015

January 7th, 2016


  • Attended four conferences (FOSDEM, cfgmgmtcamp, Linuxdays Graz and Debconf)
  • Second year of business with SynPro Solutions, very happy with what we achieved
  • Very intense working year (Grml Solutions, SynPro Solutions, Grml-Forensic), I worked way more than anticipated and planned but am very glad that I have such wonderful business partners and colleagues

Technology / Open Source:


  • Another fork, welcome Johanna – she’s such a sunshine and I’m so incredibly proud of my two daughters
  • Played less badminton (skipped full winter term, meh) and table tennis than I would have liked to
  • Managed to get back to some kind of reasonable level on playing the drums (playing on my Roland TD-30KV drum-kit)

Conclusion: 2015 was a great though intense year for me, both business-wise and personally. Looking forward to 2016.

mur.strom: Podcast about Debian

January 7th, 2016

mur.strom is a podcast from Graz covering technology and society. On January 6th a new episode was published which is all about the Debian project. Sebastian Ramacher and I were the interview guests; enjoy listening to the ~1h 43 minutes:

DebConf15: “Continuous Delivery of Debian packages” talk

August 24th, 2015

At the Debian Conference 2015 I gave a talk about Continuous Delivery of Debian packages. My slides are available online (PDF, 753KB). Thanks to the fantastic video team there’s also a recording of the talk available: WebM (471MB) and on YouTube.

HAProxy with Debian/squeeze clients causing random “Hash Sum mismatch”

July 2nd, 2015

Update on 2015-07-02 22:15 UTC: as Petter Reinholdtsen noted in the comments:

“Try adding /etc/apt/apt.conf.d/90squid with content like this:

Acquire::http::Pipeline-Depth 0;

It turns off the feature in apt confusing proxies.”

This indeed avoids those “Hash Sum mismatch” failures with HAProxy as well. Thanks, Petter!

Many of you might know apt’s “Hash Sum mismatch” issue and there are plenty of bug reports about it (like #517874, #624122, #743298 + #762079).

Recently I saw the “Hash Sum mismatch” usually only when using “random” mirrors in apt’s sources.list; with a static mirror such issues usually don’t show up anymore. A customer of mine runs a Debian mirror and this issue wasn’t a problem there either, until recently:

Since the mirror also includes packages provided to customers and the mirror needs to be available 24/7 we decided to provide another instance of the mirror and put those systems behind HAProxy (version 1.5.8-3 as present in Debian/jessie). The HAProxy setup worked fine and we didn’t notice any issues in our tests, until the daily Q/A builds randomly started to report failures:

Failed to fetch Hash Sum mismatch

When repeating the download there was no problem though. The problem appeared only about once every 15-20 minutes, with random package files, and it affected only Debian/squeeze clients (wheezy and jessie weren’t affected at all). The problem also didn’t appear when directly accessing the mirrors behind HAProxy. We tried plenty of different options for apt (Acquire::http::No-Cache=true, Acquire::http::No-Partial=true,…) and also played with some HAProxy configurations; nothing really helped. With apt’s “Debug::Acquire::http=True” we saw that there really was a checksum failure, and HTTP status code 102 (‘Processing’, or in terms of apt: ‘Waiting for headers’) seems to be involved. The actual problem between apt on Debian/squeeze and HAProxy is still unknown to us though.
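To reproduce the debug output mentioned above, apt’s HTTP debugging can be enabled directly on the command line (a sketch):

```shell
# show the raw HTTP exchange (headers, status codes) during the fetch
apt-get -o Debug::Acquire::http=true update
```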

While digging deeper into this issue is still on my todo list, I found a way to avoid those “Hash Sum mismatch” failures: switch from http to https in sources.list. As soon as https is used the problem doesn’t appear anymore. I’m documenting it here just in case anyone else runs into it.
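The workaround boils down to a one-line change in sources.list; note that on older releases like squeeze the apt-transport-https package needs to be installed for https support (the mirror hostname below is a placeholder):

```shell
# apt-get install apt-transport-https

# /etc/apt/sources.list: switch the mirror entry from http to https
# before: deb http://mirror.example.com/debian squeeze main
deb https://mirror.example.com/debian squeeze main
```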

fork(), once again

May 15th, 2015

On 10th of May 2015 my lovely wife gave me a lovely present once again: welcome to our family, Johanna.

The #newinjessie game: tools related to RPM packages

May 4th, 2015

Continuing the #newinjessie game:

Bernhard Miklautz, contributor to jenkins-debian-glue and author of jenkins-package-builder (still in an early stage, but under active development to provide support for building RPMs, similar to what jenkins-debian-glue provides for building Debian/Ubuntu packages), pointed out that there are new tools related to RPM packaging available in Debian/jessie:

  • mock: build RPM packages inside a chroot (similar to what pbuilder/cowbuilder/sbuild/… do in the Debian world)
  • obs-build: scripts for building RPM/Debian packages for multiple distributions

GLT15: Slides of my “Debian 8 aka jessie, what’s new” talk

April 28th, 2015


I wasn’t sure whether I would make it to Linuxdays Graz (GLT15) this year, so I didn’t participate in its call for lectures. But when meeting folks on the evening before the main event I came up with the idea of giving a lightning talk as a special way of celebrating the Debian jessie release.

So I gave a lightning talk about "Debian 8 aka jessie, what’s new" on 25th of April (couldn’t have been a better date :)), and the slides of my talk turned out to be more useful than expected (>3000 downloads within the first 48 hours, and I received lots of great feedback), so maybe it’s worth mentioning them here as well: "Debian 8 aka jessie, what’s new" (PDF, 450KB)

PS: please join the #newinjessie game, also see #newinjessie on twitter

The #newinjessie game: new forensic packages in Debian/jessie

April 24th, 2015

Repeating what I did for the last Debian release with the #newinwheezy game it’s time for the #newinjessie game:

Debian/jessie AKA Debian 8.0 includes a bunch of packages for people interested in digital forensics. The following packages, maintained within the Debian Forensics team, are new in the Debian/jessie stable release as compared to Debian/wheezy (ignoring wheezy-backports):

  • ext4magic: recover deleted files from ext3 or ext4 partitions
  • libbfio1: library to provide basic input/output abstraction
  • lime-forensics-dkms: kernel module for acquiring memory dumps
  • mac-robber: collects data about allocated files in mounted filesystems
  • pff-tools: library to access various MS Outlook file formats and tools to export PAB, PST and OST files
  • ssdeep: recursive piecewise hashing tool (note: was present in squeeze but not in wheezy)
  • volatility: advanced memory forensics framework
  • yara: helps to identify and classify malware

Join the #newinjessie game and present packages which are new in Debian/jessie.

check-mk: monitor switches for GBit links

January 23rd, 2015

For one of our customers we are using the Open Monitoring Distribution, which includes Check_MK as its monitoring system. We’re monitoring the (Cisco) switches via SNMP. The switches as well as all the servers support GBit connections, though there are some systems in the wild which still operate at 100MBit (or, even worse, at 10MBit). Recently there have been some performance issues related to network access. To make sure it’s not the fault of a server or a service we decided to monitor the switch ports for their network speed. By default we assume all ports to be running at GBit speed. This can be configured either manually via:

cat etc/check_mk/conf.d/wato/
checkgroup_parameters.setdefault('if', [])

checkgroup_parameters['if'] = [
  ( {'speed': 1000000000}, [], ['switch1', 'switch2', 'switch3', 'switch4'], ALL_SERVICES, {'comment': u'GBit links should be used as default on all switches'} ),
] + checkgroup_parameters['if']

or by visiting Check_MK’s admin web-interface at ‘WATO Configuration’ -> ‘Host & Service Parameters’ -> ‘Parameters for Inventorized Checks’ -> ‘Networking’ -> ‘Network interfaces and switch ports’, creating a rule for the ‘Explicit hosts’ switch1, switch2, etc. and setting ‘Operating speed’ to ‘1 GBit/s’ there.

So far, so straightforward, and this works fine. Thanks to this setup we could identify several systems which used 100MBit and 10MBit links; definitely something to investigate on the affected systems and their auto-negotiation configuration. But to avoid flooding the monitoring system and its notifications we want to explicitly ignore those systems in the monitoring setup until the issues have been resolved.

First step: identify the checks and their format by either invoking `cmk -D switch2` or looking at var/check_mk/autochecks/

OMD[synpros]:~$ cat var/check_mk/autochecks/
  ("switch2", "cisco_cpu", None, cisco_cpu_default_levels),
  ("switch2", "cisco_fan", 'Switch#1, Fan#1', None),
  ("switch2", "cisco_mem", 'Driver text', cisco_mem_default_levels),
  ("switch2", "cisco_mem", 'I/O', cisco_mem_default_levels),
  ("switch2", "cisco_mem", 'Processor', cisco_mem_default_levels),
  ("switch2", "cisco_temp_perf", 'SW#1, Sensor#1, GREEN', None),
  ("switch2", "if64", '10101', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "if64", '10102', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "if64", '10103', {'state': ['1'], 'speed': 1000000000}),
  ("switch2", "snmp_info", None, None),
  ("switch2", "snmp_uptime", None, {}),

Second step: translate this into the corresponding format for usage in etc/check_mk/

checks = [
  ( 'switch2', 'if64', '10105', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:42:de:ad:be:af,  10MBit
  ( 'switch2', 'if64', '10107', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:23:de:ad:be:af, 100MBit
  ( 'switch2', 'if64', '10139', {'state': ['1'], 'errors': (0.01, 0.1), 'speed': None}), # MAC: 00:42:de:ad:be:af, 100MBit
]

Using this configuration we ignore the operating speed on ports 10105, 10107 and 10139 of switch2 for the if64 check. We kept the state setting untouched where sensible (‘1’ means the expected operational status of the interface is ‘up’). The errors setting specifies the error rates in percent for the warning (0.01%) and critical (0.1%) states. For further details refer to the online documentation or invoke ‘cmk -M if64’.

Final step: after modifying the checks’ configuration make sure to run `cmk -IIu switch2 ; cmk -R` to renew the inventory for switch2 and apply the changes. Do not forget to verify the resulting configuration by invoking ‘cmk -D switch2’:

Screenshot of 'cmk -D switch2' execution

Revisiting 2014

January 1st, 2015


  • Attended only four conferences (FOSDEM, Linuxdays Graz, DebConf and Flowcon) due to taking care of my child each Tuesday and Friday (so it wasn’t easy to attend conferences during the week, and weekends are usually family time for me nowadays)
  • Home office/remote work: this year was very remote work and home office focused because I wanted to see my daughter grow up as much as possible and it was totally worth it
  • New company: together with a friend I started a new company called SynPro Solutions, focusing on IT administration for local (as in Graz/Styria/Austria) industry (lesson learned: there’s quite some overhead involved if you’re not doing business alone, but it’s getting better over time and it’s working fine nowadays) [disclaimer: Grml Solutions and Grml-Forensic still exist and are active]

Technology / Open Source:

  • container virtualization docker: worked a lot with it, especially related to integration in Jenkins
  • code review system gerrit: deployed to production at a customer and still manage and support it (in combination with jenkins-debian-glue)
  • Amazon/EC2, GCE & CO: did a bunch of stuff related to auto scaling, automatic deployment and image/AMI generation
  • Ten years of Grml!


  • Used the Steiermark-Card to visit many nice places in Styria with my family (Ökopark Almenland is amongst our favorite places for children)
  • Bought myself a Roland TD-30KV drum-kit, allowing me to play drums whenever I want to (which is great if you don’t have to think of your family or neighbors, just use headphones and start rocking!)
  • Started training table tennis in summer with one of my neighbors (who’s a [semi-]professional), playing about weekly/bi-weekly since then and became quite proficient
  • Became Styrian academic champion in badminton, singles and doubles
  • After a year without much reading due to new constraints (AKA family time), I managed to start reading on a regular basis in the second half of the year and very much enjoyed that :) (though I didn’t read as much as I wanted to in Q4 due to lots of work)
  • The year ended with the 2nd birthday of my daughter. I’m so happy and can’t imagine my life without her anymore.

Conclusion: 2014 was a great year for me, both business-wise and personally. I would be more than happy if 2015 turned out to be similar.

Installing Debian in UEFI mode with some stunts

December 30th, 2014

For a recent customer setup of Debian/wheezy on an IBM x3630 M4 server we used my blog entry “State of the art Debian/wheezy deployments with GRUB and LVM/SW-RAID/Crypto” as a base. But this time we wanted to use (U)EFI instead of BIOS legacy boot.

As usual we went for installing via Grml and grml-debootstrap. We started by dd-ing the Grml ISO to a USB stick (‘dd if=grml64-full_2014.11.iso of=/dev/sdX bs=1M’). The IBM server couldn’t boot from it though; as far as we could identify, the problem seems to be that the server doesn’t properly recognize USB sticks registering themselves as mass storage devices instead of removable storage devices (you can check your device via the /sys/devices/…/removable setting). So we enabled Legacy Boot and USB Storage in the server’s boot manager to be able to boot Grml in BIOS/legacy mode from this specific USB stick.

To install the GRUB boot loader in (U)EFI mode you need to be able to execute ‘modprobe efivars’. But our system was booted via BIOS/legacy, and in that mode ‘modprobe efivars’ doesn’t work. We could have used a different USB device for booting Grml in UEFI mode, but because we are lazy sysadmins and wanted to save time we went a different route instead:

First of all we write the Grml 64bit ISO (which is (U)EFI capable out-of-the-box, also when dd-ing it) to the local RAID disk (being /dev/sdb in this example):

root@grml ~ # dd if=grml64-full_2014.11.iso of=/dev/sdb bs=1M

Now we should be able to boot in (U)EFI mode from the local RAID disk. To verify this before actually physically rebooting the system (and possibly getting into trouble) we can use qemu with OVMF:

root@grml ~ # apt-get update
root@grml ~ # apt-get install ovmf
root@grml ~ # qemu-system-x86_64 -bios /usr/share/qemu/OVMF.fd -hda /dev/sdb

The Grml boot splash came up as expected, perfect. Now we actually reboot the live system and boot the ISO from the local disks in (U)EFI mode. Then we put the running Grml live system into RAM so it no longer uses and blocks the local disks, since we want to install Debian there. This can be achieved not just via the toram boot option, but also by executing grml2ram on demand from user space:

root@grml ~ # grml2ram

Now, having the local disks available, we verify that we’re running in (U)EFI mode (an exit code of 0 means the efivars module loaded successfully):

root@grml ~ # modprobe efivars
root@grml ~ # echo $?

Great, so we can install the system in (U)EFI mode now, starting with the corresponding partitioning (/dev/sda being the local RAID disk here):

root@grml ~ # parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary fat16 2048s 4095s
(parted) name 1 "EFI System"
(parted) mkpart primary 4096s 100%
(parted) name 2 "Linux LVM"
(parted) print
Model: IBM ServeRAID M5110 (scsi)
Disk /dev/sda: 9000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

  Number  Start   End     Size    File system  Name        Flags
   1      1049kB  2097kB  1049kB  fat16        EFI System
   2      2097kB  9000GB  9000GB               Linux LVM

(parted) quit
Information: You may need to update /etc/fstab.

Then setting up LVM with a logical volume for the root fs and installing Debian via grml-debootstrap on it:

root@grml ~ # pvcreate /dev/sda2
  Physical volume "/dev/sda2" successfully created
root@grml ~ # vgcreate vg0 /dev/sda2
  Volume group "vg0" successfully created
root@grml ~ # lvcreate -n rootfs -L16G vg0
  Logical volume "rootfs" created
root@grml ~ # grml-debootstrap --target /dev/mapper/vg0-rootfs --password secret --hostname foobar --release wheezy

Now finally set up the (U)EFI partition and properly install GRUB in (U)EFI mode:

root@grml ~ # mkfs.fat -F 16 /dev/sda1
mkfs.fat 3.0.26 (2014-03-07)
WARNING: Not enough clusters for a 16 bit FAT! The filesystem will be
misinterpreted as having a 12 bit FAT without mount option "fat=16".

root@grml ~ # mount /dev/mapper/vg0-rootfs /mnt
root@grml ~ # grml-chroot /mnt /bin/bash
Writing /etc/debian_chroot ...
(foobar)root@grml:/# mkdir -p /boot/efi
(foobar)root@grml:/# mount /dev/sda1 /boot/efi
(foobar)root@grml:/# apt-get install grub-efi-amd64
(foobar)root@grml:/# grub-install /dev/sda
Timeout: 10 seconds
BootOrder: 0003,0000,0001,0002,0004,0005
Boot0000* CD/DVD Rom
Boot0001* Hard Disk 0
Boot0002* PXE Network
Boot0004* USB Storage
Boot0005* Legacy Only
Boot0003* debian
Installation finished. No error reported.
(foobar)root@grml:/# ls /boot/efi/EFI/debian/
(foobar)root@grml:/# update-grub
(foobar)root@grml:/# exit
root@grml ~ # umount /mnt/boot/efi
root@grml ~ # umount /mnt/
root@grml ~ # vgchange -an
  0 logical volume(s) in volume group "vg0" now active

That’s it. Now rebooting the system should bring you to your Debian installation running in (U)EFI mode. You can verify this before actually rebooting into the system by using the qemu/OVMF trick from above once again.

Ten years of Grml

December 22nd, 2014


On 22nd of October 2004 an event called OS04 took place in Seifenfabrik Graz/Austria and it marked the first official release of the Grml project.

Grml was initially started by myself in 2003 – I registered the domain on September 16, 2003 (so technically it would be 11 years already :)). It started with a boot-disk, first created by hand and then based on yard. On 4th of October 2004 we had a first presentation of “grml 0.09 Codename Bughunter” at Kunstlabor in Graz.

I managed to talk a good friend and fellow student – Martin Hecher – into joining me. Soon after Michael Gebetsroither and Andreas Gredler joined and throughout the upcoming years further team members (Nico Golde, Daniel K. Gebhart, Mario Lang, Gerfried Fuchs, Matthias Kopfermann, Wolfgang Scheicher, Julius Plenz, Tobias Klauser, Marcel Wichern, Alexander Wirt, Timo Boettcher, Ulrich Dangel, Frank Terbeck, Alexander Steinböck, Christian Hofstaedtler) and contributors (Hermann Thomas, Andreas Krennmair, Sven Guckes, Jogi Hofmüller, Moritz Augsburger,…) joined our efforts.

Back in those days most efforts went into hardware detection, loading and setting up the according drivers and configurations, packaging software and fighting bugs with lots of reboots (working on our custom /linuxrc for the initrd wasn’t always fun). Throughout the years virtualization became more broadly available, which is especially great for most of the testing you need to do when working on your own (meta) distribution. At some point udev became available and solved most of the hardware detection issues for us. Nowadays X.org doesn’t even need a xorg.conf file anymore (at least by default). We have to acknowledge that Linux grew up quite a bit over the years (and I’m wondering how we’ll look back at the systemd discussions in a few years).

By having Debian Developers within the team we managed to push quite some of our work back to Debian (the distribution Grml was, and still is, based on), years before the Debian Derivatives initiative appeared. We never stopped contributing to Debian, and we also still benefit from the Debian Derivatives initiative, like sharing issues and ideas at DebConf meetings. On 28th of May 2009 I myself became an official Debian Developer.

Over the years we moved from private self-hosted infrastructure to company-sponsored systems, and migrated from Subversion (brr) to Mercurial (2006) to Git (2008). Our Zsh-related work became widely known as grml-zshrc. Our Jenkins setup managed to become a continuous integration/deployment/delivery home e.g. for the dpkg, fai, initramfs-tools, screen and zsh Debian packages. The underlying software for creating Debian packages in a CI/CD way became its own project, known as jenkins-debian-glue, in August 2011. In 2006 I started grml-debootstrap, which grew into a reliable method for installing plain Debian (nowadays even supporting installation as a VM, and one of my customers does tens of deployments per day with grml-debootstrap in a fully automated fashion). So one of the biggest achievements of Grml is, from my point of view, that it managed to grow several active and successful sub-projects under its umbrella.

Nowadays the Grml team consists of 3 Debian Developers – Alexander Wirt (formorer), Evgeni Golov (Zhenech) and myself. We couldn’t talk Frank Terbeck (ft) into becoming a DM/DD (yet?), but he’s an active part of our Grml team nonetheless and does a terrific job with maintaining grml-zshrc as well as helping out in Debian’s Zsh packaging (and being a Zsh upstream committer at the same time makes all of that even better :)).

My personal conclusion for 10 years of Grml? Back in the days when I was a student, Grml was my main personal pet project and hobby. Grml grew into an open source project which wasn’t known just in Graz/Austria, but especially throughout the German system administration scene. Since 2008 I’ve been working self-employed, mainly on open source stuff, so I’m kind of living a dream I didn’t even have when I started with Grml in 2003. Nowadays, running my own business and having my own family, it’s getting harder for me to still consider it a hobby; instead it’s more integrated into and part of my business, which I personally consider both good and bad at the same time (for various reasons).

Thanks so much to anyone of you, who was (and possibly still is) part of the Grml journey! Let’s hope for another 10 successful years!

Thanks to Max Amanshauser and Christian Hofstaedtler for reading drafts of this.

Book Review: The Docker Book

July 23rd, 2014

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a Docker setup with Jenkins integration and a private docker-registry setup at a customer, and I pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James (he’s working for Docker Inc) released the first version of the book, and thanks to being on holidays I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0, and all the issues I found and reported to James have been fixed in the current version already, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach with “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
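The cache-busting trick mentioned above can be sketched in a Dockerfile like this (base image and installed package are just examples):

```dockerfile
FROM debian:wheezy

# bump this value to invalidate Docker's build cache from this
# instruction onwards, e.g. to force a fresh apt-get update below
ENV REFRESHED_AT 2014-07-23

RUN apt-get update && apt-get install -y --no-install-recommends curl
```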

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases; upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, being another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspiration on what folks from Docker Inc consider best practices.

kamailio-deb-jenkins: Open Source Jenkins setup for Debian Packaging

March 25th, 2014

Kamailio is an Open Source SIP Server. Since beginning of March 2014 a new setup for Kamailio‘s Debian packages is available. Development of this setup is sponsored by Sipwise and I am responsible for its infrastructure part (Jenkins, EC2, jenkins-debian-glue).

The setup includes support for building Debian packages for Debian 5 (lenny), 6 (squeeze), 7 (wheezy) and 8 (jessie) as well as Ubuntu 10.04 (lucid) and 12.04 (precise), each of them for the amd64 and i386 architectures.
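That build matrix can be written down as a simple shell loop — the echoed line is only a placeholder for whatever build invocation the Jenkins jobs actually run:

```shell
#!/bin/sh
# Enumerate the full build matrix described above:
# 6 distributions x 2 architectures = 12 package builds.
build_matrix() {
  for dist in lenny squeeze wheezy jessie lucid precise; do
    for arch in amd64 i386; do
      # Placeholder for the real cowbuilder/jenkins-debian-glue call:
      echo "building packages for $dist/$arch"
    done
  done
}
build_matrix
```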

My work is fully open sourced. Deployment instructions, scripts and configuration are available at kamailio-deb-jenkins, so if you’re interested in setting up your own infrastructure for Continuous Integration with Debian/Ubuntu packages that’s a very decent starting point.

NOTE: I’ll be giving a talk about Continuous Integration with Debian/Ubuntu packages at Linuxdays Graz/Austria on 5th of April. Besides kamailio-deb-jenkins I’ll also cover best practices, Debian packaging, EC2 autoscaling,…

Building Debian+Ubuntu packages on EC2

March 25th, 2014

In a project I recently worked on we wanted to provide a jenkins-debian-glue based setup on Amazon’s EC2 for building Debian and Ubuntu packages. The idea is to keep a not-so-strongly powered Jenkins master up and running 24×7, while stronger machines serving as Jenkins slaves are launched only as needed. The project setup in question is fully open sourced (more on that in a separate blog post). The Jenkins slaves do the actual build work and run piuparts (.deb package installation, upgrading, and removal testing tool) on the resulting binary packages. The Debian packages (source+binaries) are then provided back to the Jenkins master and put into a reprepro powered Debian repository for public usage.


The starting point was one of the official Debian AMIs (x86_64, paravirtual on EBS). We automatically deployed jenkins-debian-glue on the system which is used as Jenkins master (we chose a m1.small instance for our needs).

We started another instance, slightly adjusted it to already include jenkins-debian-glue related stuff out-of-the-box (more details in section “Reduce build time” below) and created an AMI out of it. This new AMI ID can be configured for usage inside Jenkins by using the Amazon EC2 Plugin (see screenshot below).

IAM policy

Before configuring EC2 in Jenkins, though, start by adding a new user (or group) in AWS’s IAM (Identity and Access Management) with a custom policy. This ensures that your EC2 user in Jenkins doesn’t have more permissions than really needed. The following policy should give you a starting point (we restrict the account to allow actions only in the EC2 region eu-west-1, YMMV):

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": ["ec2:*"],
        "Effect": "Allow",
        "Resource": "*",
        "Condition": {
          "StringEquals": {
            "ec2:Region": "eu-west-1"
          }
        }
      }
    ]
  }
Jenkins configuration

Configure EC2 access with “Access Key ID”, “Secret Access Key”, “Region” and “EC2 Key Pair’s Private Key” (for SSH login) inside Jenkins in the Cloud section on $YOUR_JENKINS_SERVER/configure. Finally, add an AMI in the “AMIs” part of the Amazon EC2 configuration section (adjust the security group as needed, SSH access is enough):

As you can see, the configuration also includes a launch script. This script ensures that slaves are set up as needed (providing all the packages and scripts that are required for building) and always get the latest configuration and scripts before starting to serve as a Jenkins slave.
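A minimal launch script along those lines might look like the following sketch. The package selection is only a guess based on the jenkins-debian-glue packages mentioned in this post, and a real script would also fetch the job scripts and configuration; it defaults to dry-run mode so it merely prints the commands it would execute:

```shell
#!/bin/sh
# Sketch of a Jenkins slave launch script: refresh the build
# environment before the node starts serving as a slave.
# DRY_RUN defaults to 1 (only print), set DRY_RUN=0 to execute.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}
run apt-get update
run apt-get -y install jenkins-debian-glue
run apt-get -y upgrade
```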

Now your setup should be ready for launching Jenkins slaves as needed:

NOTE: you can use the “Instance Cap” setting inside the advanced Amazon EC2 Jenkins configuration section to place an upper limit on the number of EC2 instances that Jenkins may launch. This can be useful for avoiding surprises on your AWS invoices. :) Note though that the cap is calculated across all your running EC2 instances, so if you have further machines running under your account you might want to, e.g., further restrict your IAM policy.

Reduce build time

Using a plain Debian AMI and automatically installing jenkins-debian-glue and the further jenkins-debian-glue-buildenv* packages on each slave startup would work, but it takes time. That’s why we created our own AMI, which is nothing else than an official Debian AMI with the script (referred to in the screenshot above) already executed. All the necessary packages are pre-installed and all the cowbuilder environments are already present. From time to time we start the instance again to apply (security) updates and execute the bootstrap script with its --update option to bring the cowbuilder systems up to date as well. Creating a new AMI is a no-brainer, and we can then use the updated system for our Jenkins slaves; if something breaks for whatever reason we can still fall back to an older known-to-be-good AMI.
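That periodic AMI refresh could be scripted roughly as follows. The instance ID, SSH host and bootstrap script name are made up for illustration, and the commands are only printed (executing them needs AWS credentials and the real hostnames):

```shell
#!/bin/sh
# Sketch of the AMI refresh workflow described above: boot the
# template instance, apply updates, re-run the bootstrap script
# with --update, then snapshot a fresh AMI from it.
INSTANCE_ID="i-0123456789abcdef0"   # example instance ID
refresh_ami() {
  echo "aws ec2 start-instances --instance-ids $INSTANCE_ID"
  echo "ssh admin@build-slave 'apt-get update && apt-get -y upgrade'"
  echo "ssh admin@build-slave './bootstrap.sh --update'"
  echo "aws ec2 create-image --instance-id $INSTANCE_ID --name jenkins-slave-$(date +%Y%m%d)"
}
refresh_ami
```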

Final words

How to set up your Jenkins jobs for optimal master/slave usage, multi-distribution support (Debian/Ubuntu) and further details about this setup are part of another blog post.

Thanks to Andreas Granig, Victor Seva and Bernhard Miklautz for reading drafts of this.