
Book Review: The Docker Book

July 23rd, 2014

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a Docker setup with Jenkins integration and a private docker-registry at a customer’s site, and I pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he works for Docker Inc – released the first version of the book, and thanks to being on holidays I already had a few hours to read it AND blog about it. :) (Note: I’ve read the Kindle version 1.0.0; all the issues I found and reported to James have already been fixed in the current version, yay.)

The book is very well written and covers all the basics to get familiar with Docker, and in my opinion it does a better job at that than the official user guide thanks to the way the book is structured. The book is also a more approachable way to learn some best practices and commonly used command lines than going through the official reference (though reading the reference after the book is still worthwhile).

I like James’ approach of using “ENV REFRESHED_AT $TIMESTAMP” to better control the cache behaviour, and I definitely consider using it in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I noted a few command-line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
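The cache-control idiom can be sketched in a minimal Dockerfile like this (base image, package and timestamp are illustrative, not taken from the book):

```shell
# Write a minimal Dockerfile demonstrating the ENV REFRESHED_AT idiom;
# bumping the timestamp invalidates Docker's build cache from that line on.
mkdir -p docker-demo
cat > docker-demo/Dockerfile <<'EOF'
FROM debian:wheezy
ENV REFRESHED_AT 2014-07-23
RUN apt-get update && apt-get -y install curl
EOF

# Build it locally, or straight from a Git repository:
#   docker build docker-demo/
#   docker build $git_repos_url
grep REFRESHED_AT docker-demo/Dockerfile
```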

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I’d recommend to have network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options, …). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which existing customers get for free, another plus).

Conclusion: I enjoyed reading The Docker Book and can recommend it, especially if you’re new to Docker or want to get further ideas and inspiration about what the folks from Docker Inc consider best practices.

kamailio-deb-jenkins: Open Source Jenkins setup for Debian Packaging

March 25th, 2014

Kamailio is an Open Source SIP Server. Since the beginning of March 2014 a new setup for Kamailio‘s Debian packages has been available. Development of this setup is sponsored by Sipwise and I am responsible for its infrastructure part (Jenkins, EC2, jenkins-debian-glue).

The setup includes support for building Debian packages for Debian 5 (lenny), 6 (squeeze), 7 (wheezy) and 8 (jessie) as well as Ubuntu 10.04 (lucid) and 12.04 (precise), each for the architectures amd64 and i386.
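The resulting distribution/architecture matrix can be sketched as a simple loop (illustrative only; the real jobs are driven by jenkins-debian-glue rather than a plain loop):

```shell
# Enumerate the build matrix covered by the setup:
# 6 distributions x 2 architectures = 12 build targets.
matrix=""
for dist in lenny squeeze wheezy jessie lucid precise ; do
  for arch in amd64 i386 ; do
    matrix="${matrix}${dist}/${arch}
"
  done
done
printf '%s' "$matrix"
```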

My work is fully open sourced. Deployment instructions, scripts and configuration are available at kamailio-deb-jenkins, so if you’re interested in setting up your own infrastructure for Continuous Integration with Debian/Ubuntu packages that’s a very decent starting point.

NOTE: I’ll be giving a talk about Continuous Integration with Debian/Ubuntu packages at Linuxdays Graz/Austria on 5th of April. Besides kamailio-deb-jenkins I’ll also cover best practices, Debian packaging, EC2 autoscaling,…

Building Debian+Ubuntu packages on EC2

March 25th, 2014

In a project I recently worked on we wanted to provide a jenkins-debian-glue based setup on Amazon’s EC2 for building Debian and Ubuntu packages. The idea is to keep a not-so-strongly powered Jenkins master up and running 24×7, while stronger machines serving as Jenkins slaves are launched only as needed. The project setup in question is fully open sourced. The slaves build the packages and run piuparts (the .deb package installation, upgrading, and removal testing tool) on the resulting binary packages. The Debian packages (source+binaries) are then provided back to the Jenkins master and put into a reprepro powered Debian repository for public usage.


The starting point was one of the official Debian AMIs (x86_64, paravirtual on EBS). We automatically deployed jenkins-debian-glue on the system which serves as Jenkins master (we chose an m1.small instance for our needs).

We started another instance, slightly adjusted it to already include jenkins-debian-glue related stuff out-of-the-box (more details in section “Reduce build time” below) and created an AMI out of it. This new AMI ID can be configured for usage inside Jenkins by using the Amazon EC2 Plugin (see screenshot below).

IAM policy

Before configuring EC2 in Jenkins, though, start by adding a new user (or group) in AWS’s IAM (Identity and Access Management) with a custom policy. This ensures that your EC2 user in Jenkins doesn’t have more permissions than really needed. The following policy should give you a starting point (we restrict the account to actions in the EC2 region eu-west-1 only, YMMV):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                …
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "eu-west-1"
                }
            }
        }
    ]
}

Jenkins configuration

Configure EC2 access with “Access Key ID”, “Secret Access Key”, “Region” and “EC2 Key Pair’s Private Key” (for SSH login) inside Jenkins in the Cloud section on $YOUR_JENKINS_SERVER/configure. Finally add an AMI in the AMIs Amazon EC2 configuration section (adjust security-group as needed, SSH access is enough):

As you can see the configuration also includes a launch script. This script ensures that slaves are set up as needed (provide all the packages and scripts that are required for building) and always get the latest configuration and scripts before starting to serve as Jenkins slave.
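Such a launch script might look roughly like the following sketch (package name, script path and repository URL are illustrative assumptions, not the project's actual script):

```shell
# Hypothetical EC2 slave launch script: install the build environment and
# refresh the build scripts before the node starts serving as Jenkins slave.
cat > slave-startup.sh <<'EOF'
#!/bin/sh
set -e
apt-get update
apt-get -y install jenkins-debian-glue-buildenv-slave
# always pull the latest configuration/scripts first:
if [ -d /srv/build-scripts ] ; then
  cd /srv/build-scripts && git pull
else
  git clone https://example.com/build-scripts.git /srv/build-scripts
fi
EOF
chmod +x slave-startup.sh
```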

Now your setup should be ready for launching Jenkins slaves as needed:

NOTE: you can use the “Instance Cap” setting inside the advanced Amazon EC2 Jenkins configuration section to place an upper limit on the number of EC2 instances that Jenkins may launch. This can be useful for avoiding surprises on your AWS invoices. :) Notice though that the cap is calculated against all your running EC2 instances, so if you have further machines running under your account you might want to e.g. further restrict your IAM policy.

Reduce build time

Using a plain Debian AMI and automatically installing jenkins-debian-glue and the jenkins-debian-glue-buildenv* packages on each slave startup would work, but it takes time. That’s why we created our own AMI, which is nothing other than an official Debian AMI with the script referred to in the screenshot above already executed. All the necessary packages are pre-installed and all the cowbuilder environments are already present. From time to time we start the instance again to apply (security) updates and execute the bootstrap script with its --update option to bring all the cowbuilder systems up to date as well. Creating a new AMI is a no-brainer, and we can then use the updated system for our Jenkins slaves; if something should break for whatever reason we can still fall back to an older known-to-be-good AMI.

Final words

How to set up your Jenkins jobs for optimal master/slave usage, multi-distribution support (Debian/Ubuntu) and further details about this setup are part of another blog post.

Thanks to Andreas Granig, Victor Seva and Bernhard Miklautz for reading drafts of this.

Jenkins on-demand slave selection through labels

March 1st, 2014

Problem description: One of my customers had a problem with their Selenium tests in the Jenkins continuous integration system. While Perl’s Test::WebDriver still worked just fine, the Selenium tests using Ruby’s selenium-webdriver suddenly reported failures. The problem was caused by Debian wheezy’s upgrade of the Iceweasel web browser: Debian originally shipped Iceweasel version 17.0.10esr-1~deb7u1 in wheezy, but a security update brought in version 24.3.0esr-1~deb7u1 through the wheezy-security channel. Because the Selenium tests run in an automated fashion in a quite large and long-running build pipeline, we immediately rolled back to Iceweasel version 17.0.10esr-1~deb7u1 so everything could continue as expected. Of course we wanted to get the new Iceweasel version up and running, but we didn’t want to break the existing workflow while working on it. This is where on-demand slave selection through labels comes in.
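One way to hold machines at the old version while working on the upgrade is APT pinning (a sketch only; the post doesn't describe the exact rollback mechanism, and the preferences file is written to the current directory here instead of /etc/apt/preferences.d/):

```shell
# Pin iceweasel to the known-good wheezy version so security updates
# don't pull in v24 behind our back (demo file; the real path would be
# /etc/apt/preferences.d/iceweasel).
cat > iceweasel.pref <<'EOF'
Package: iceweasel
Pin: version 17.0.10esr-1~deb7u1
Pin-Priority: 1001
EOF
cat iceweasel.pref
```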

Basics: As soon as you’re using Jenkins slaves you can instruct Jenkins to run a specific project on a particular (slave) node. By attaching labels to your slaves you can also use a label instead of a specific node name, which provides more flexibility and scalability (e.g. to avoid problems if a specific node is down, or to scale to more systems). Jenkins then decides which of the nodes carrying the according label should be considered for job execution. In the following screenshot a job uses the ‘selenium’ label to restrict its execution to the slaves providing selenium, and currently there are two nodes available carrying this label:

TIP 1: Visiting $JENKINS_SERVER/label/$label/ provides a list of the slaves carrying the given $label (as well as a list of the projects that use $label in their configuration), like:

TIP 2:
Execute the following script on $JENKINS_SERVER/script to get a list of available labels of your Jenkins system:

import hudson.model.*
labels = Hudson.instance.getLabels()
labels.each { label -> println label }

Solution: In the customer setup in question we’re using the swarm plugin (with automated Debian deployment through Grml’s netscript boot option, grml-debootstrap + Puppet) to automatically connect our Jenkins slaves to the Jenkins master without any manual intervention. The swarm plugin allows you to define the labels through the -labels command line option.
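A swarm client invocation using that option might look roughly like this (the jar path, master URL and executor count are illustrative assumptions, and the command is only assembled here, not executed):

```shell
# Sketch of a swarm client command line carrying the labels described below;
# FLAGS would normally be derived on the slave itself.
FLAGS="selenium iceweasel-17 selenium-2.40.0"
SWARM_CMD="java -jar /usr/share/jenkins/swarm-client.jar \
  -master http://jenkins.example.org:8080 \
  -labels \"$FLAGS\" -executors 1"
echo "$SWARM_CMD"
```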

By using the NodeLabel Parameter plugin we can configure additional parameters in Jenkins jobs: ‘node’ and ‘label’. The ‘label’ parameter allows us to execute the jobs on the nodes providing the requested label:

This is what we can use to gradually upgrade from the old Iceweasel version to the new one by keeping a given set of slaves at the old Iceweasel version while we’re upgrading other nodes to the new Iceweasel version (same for the selenium-server version which we want to also control). We can include the version number of the Iceweasel and selenium-server packages inside the labels we announce through the swarm slaves, with something like:

if [ -r /etc/init.d/selenium-server ] ; then
  FLAGS="selenium"
fi

# derive labels such as "iceweasel-24" (major version only):
ICEWEASEL_VERSION="$(dpkg-query --show --showformat='${Version}' iceweasel)"
if [ -n "$ICEWEASEL_VERSION" ] ; then
  FLAGS="$FLAGS iceweasel-${ICEWEASEL_VERSION%%.*}"
fi

# derive labels such as "selenium-2.40.0" (upstream version only):
SELENIUM_VERSION="$(dpkg-query --show --showformat='${Version}' selenium-server)"
if [ -n "$SELENIUM_VERSION" ] ; then
  FLAGS="$FLAGS selenium-${SELENIUM_VERSION%%-*}"
fi
Then by using -labels "$FLAGS EXTRA_FLAGS" in the swarm invocation script we end up with labels like ‘selenium iceweasel-24 selenium-2.40.0’ for the slaves providing the Iceweasel v24 and selenium v2.40.0 Debian packages, and ‘selenium iceweasel-17 selenium-2.40.0’ for the slaves providing Iceweasel v17 and selenium v2.40.0.

This is perfect for our needs, because instead of using the “selenium” label (which is still there) we can configure the selenium jobs that should continue to work as usual to default to the slaves with the iceweasel-17 label now. The development related jobs though can use label iceweasel-24 and fail as often as needed without interrupting the build pipeline used for production.

To illustrate this here we have slave selenium-client2 providing Iceweasel v17 with selenium-server v2.40. When triggering the production selenium job it will get executed on selenium-client2, because that’s the slave providing the requested labels:

Whereas the development selenium job can point to the slaves providing Iceweasel v24, so it will be executed on slave selenium-client1 here:

This setup allowed us to work on the selenium Ruby tests while not conflicting with any production build pipeline. By the time I’m writing about this setup we’ve already finished the migration to support Iceweasel v24 and the infrastructure is ready for further Iceweasel and selenium-server upgrades.

Full-Crypto setup with GRUB2

February 28th, 2014

Update on 2014-03-03: quoting Colin Watson from the comments:

Note that this is spelled GRUB_ENABLE_CRYPTODISK=y in GRUB 2.02 betas (matching the 2.00 documentation though not the implementation; not sure why Andrey chose to go with the docs).

Since several people asked me how to get such a setup, and since it’s poorly documented (as in: I found it in the GRUB sources), I decided to blog about it. When using GRUB >=2.00-22 (as of February 2014 available in Debian/jessie and Debian/unstable) it’s possible to boot from a full-crypto setup (this doesn’t mean it’s recommended, but it has worked fine in my test setups so far). This means not even an unencrypted /boot partition is needed.

Before executing the grub-install commands, execute these steps (inside the system/chroot of course; adjust GRUB_PRELOAD_MODULES for your setup as needed – I’ve used it in a setup with SW-RAID/LVM):

# echo GRUB_CRYPTODISK_ENABLE=y >> /etc/default/grub
# echo 'GRUB_PRELOAD_MODULES="lvm cryptodisk mdraid1x"' >> /etc/default/grub

This will result in the following dialog before getting to GRUB’s bootsplash:

State of the art Debian/wheezy deployments with GRUB and LVM/SW-RAID/Crypto

February 28th, 2014

Having moved from Lilo to GRUB and adopted LVM as a default over the last years, it was time to evaluate how well LVM works without a separate boot partition, possibly also on top of Software RAID. Big disks call for GPT partitioning; UEFI just isn’t my default yet, so I’m still defaulting to Legacy BIOS for Debian/wheezy (I expect this to change for Debian/jessie as the according hardware reaches my customers).

So what we have and want in this demonstration setup:

  • Debian 7 AKA wheezy
  • 4 hard-disks with Software RAID (on 8GB RAM), using GPT partitioning + GRUB2
  • using state-of-the-art features without too many workarounds like a separate /boot partition outside of LVM or mdadm with 0.9 metadata – just no (U)EFI yet
  • LVM on top of SW-RAID (RAID5) for /boot partition [SW-RAID->LVM]
  • Cryptsetup-LUKS on top of LVM on top of SW-RAID (RAID5) for data [SW-RAID->LVM->Crypto] (this gives us more flexibility about crypto yes/no and different cryptsetup options for the LVs compared to using it below RAID/LVM)
  • Rescue-system integration via grml-rescueboot (not limited to Grml, but Grml should work out-of-the-box)

System used for installation:

root@grml ~ # grml-version
grml64-full 2013.09 Release Codename Hefeknuddler [2013-09-27]

Partition setup:

root@grml ~ # parted /dev/sda
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 2048s 4095s
(parted) set 1 bios_grub on
(parted) name 1 "BIOS Boot Partition"
(parted) mkpart primary 4096s 100%
(parted) set 2 raid on
(parted) name 2 "SW-RAID / Linux"
(parted) quit
Information: You may need to update /etc/fstab.

Clone partition layout from sda to all the other disks:

root@grml ~ # for f in {b,c,d} ; sgdisk -R=/dev/sd$f /dev/sda
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.

Make sure each disk has its unique UUID:

root@grml ~ # for f in {b,c,d} ; sgdisk -G /dev/sd$f
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.

SW-RAID setup:

root@grml ~ # mdadm --create /dev/md0 --verbose --level=raid5 --raid-devices=4 /dev/sd{a,b,c,d}2
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 1465004544K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@grml ~ #

SW-RAID speedup (system dependent, YMMV):

root@grml ~ # cat /sys/block/md0/md/stripe_cache_size
root@grml ~ # echo 16384 > /sys/block/md0/md/stripe_cache_size # 16MB
root@grml ~ # blockdev --getra /dev/md0
root@grml ~ # blockdev --setra 65536 /dev/md0 # 32 MB
root@grml ~ # sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_max = 200000
root@grml ~ # sysctl -w dev.raid.speed_limit_max=9999999999
root@grml ~ # sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 1000
root@grml ~ # sysctl -w dev.raid.speed_limit_min=100000

LVM setup:

root@grml ~ # pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
root@grml ~ # vgcreate homesrv /dev/md0
  Volume group "homesrv" successfully created
root@grml ~ # lvcreate -n rootfs -L4G homesrv
  Logical volume "rootfs" created
root@grml ~ # lvcreate -n bootfs -L1G homesrv
  Logical volume "bootfs" created

Check partition setup + alignment:

root@grml ~ # parted -s /dev/sda print
Model: ATA WDC WD15EADS-00P (scsi)
Disk /dev/sda: 1500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name                 Flags
 1      1049kB  2097kB  1049kB               BIOS Boot Partition  bios_grub
 2      2097kB  1500GB  1500GB               SW-RAID / Linux      raid

root@grml ~ # parted -s /dev/sda unit s print
Model: ATA WDC WD15EADS-00P (scsi)
Disk /dev/sda: 2930277168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name                 Flags
 1      2048s  4095s        2048s                     BIOS Boot Partition  bios_grub
 2      4096s  2930276351s  2930272256s               SW-RAID / Linux      raid

root@grml ~ # gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 2930277168 sectors, 1.4 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 212E463A-A4E3-428B-B7E5-8D5785141564
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 2930277134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2797 sectors (1.4 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  BIOS Boot Partition
   2            4096      2930276351   1.4 TiB     FD00  SW-RAID / Linux

root@grml ~ # mdadm -E /dev/sda2
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 7ed3b741:0774d529:d5a71c1f:cf942f0a
           Name : grml:0  (local to host grml)
  Creation Time : Fri Jan 31 15:26:12 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 2928060416 (1396.21 GiB 1499.17 GB)
     Array Size : 4392089088 (4188.62 GiB 4497.50 GB)
  Used Dev Size : 2928059392 (1396.21 GiB 1499.17 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : c1e4213b:81822fd1:260df456:2c9926fb

    Update Time : Mon Feb  3 09:41:48 2014
       Checksum : b8af8f6 - correct
         Events : 72

         Layout : left-symmetric
     chunk size : 512k

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing)

root@grml ~ # pvs -o +pe_start
  PV         VG      Fmt  Attr PSize PFree 1st PE
  /dev/md0   homesrv lvm2 a--  4.09t 4.09t   1.50m
root@grml ~ # pvs --units s -o +pe_start
  PV         VG      Fmt  Attr PSize       PFree       1st PE
  /dev/md0   homesrv lvm2 a--  8784175104S 8773689344S   3072S
root@grml ~ # vgs -o +pe_start
  VG      #PV #LV #SN Attr   VSize VFree 1st PE
  homesrv   1   2   0 wz--n- 4.09t 4.09t   1.50m
root@grml ~ # vgs --units s -o +pe_start
  VG      #PV #LV #SN Attr   VSize       VFree       1st PE
  homesrv   1   2   0 wz--n- 8784175104S 8773689344S   3072S


Crypto setup:

root@grml ~ # echo cryptsetup >> /etc/debootstrap/packages
root@grml ~ # cryptsetup luksFormat -c aes-xts-plain64 -s 256 /dev/mapper/homesrv-rootfs

This will overwrite data on /dev/mapper/homesrv-rootfs irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:
root@grml ~ # cryptsetup luksOpen /dev/mapper/homesrv-rootfs cryptorootfs
Enter passphrase for /dev/mapper/homesrv-rootfs:


root@grml ~ # mkfs.ext4 /dev/mapper/cryptorootfs
mke2fs 1.42.8 (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
262144 inodes, 1048192 blocks
52409 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

root@grml ~ # mkfs.ext4 /dev/mapper/homesrv-bootfs
mke2fs 1.42.8 (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Install Debian/wheezy:

root@grml ~ # mount /dev/mapper/cryptorootfs /media
root@grml ~ # mkdir /media/boot
root@grml ~ # mount /dev/mapper/homesrv-bootfs /media/boot
root@grml ~ # grml-debootstrap --target /media --password YOUR_PASSWORD --hostname YOUR_HOSTNAME
 * grml-debootstrap [0.57] - Please recheck configuration before execution:

   Target:          /media
   Install grub:    no
   Using release:   wheezy
   Using hostname:  YOUR_HOSTNAME
   Using mirror:
   Using arch:      amd64

   Important! Continuing will delete all data from /media!

 * Is this ok for you? [y/N] y

Enable grml-rescueboot (to have easy access to rescue ISO via GRUB):

root@grml ~ # mkdir /media/boot/grml
root@grml ~ # wget -O /media/boot/grml/grml64-full_$(date +%Y.%m.%d).iso
root@grml ~ # grml-chroot /media apt-get -y install grml-rescueboot

[NOTE: We’re installing a daily ISO for grml-rescueboot here because the 2013.09 Grml release doesn’t work for this LVM/SW-RAID setup while newer ISOs are working fine already. The upcoming Grml stable release is supposed to work just fine, so you will be able to choose by then. :)]

Install GRUB on all disks and adjust crypttab, fstab + initramfs:

root@grml ~ # grml-chroot /media /bin/bash
(grml)root@grml:/# for f in {a,b,c,d} ; do grub-install /dev/sd$f ; done
(grml)root@grml:/# update-grub
(grml)root@grml:/# echo "cryptorootfs /dev/mapper/homesrv-rootfs none luks" > /etc/crypttab
(grml)root@grml:/# echo "/dev/mapper/cryptorootfs / auto defaults,errors=remount-ro 0   1" > /etc/fstab
(grml)root@grml:/# echo "/dev/mapper/homesrv-bootfs /boot auto defaults 0 0" >> /etc/fstab
(grml)root@grml:/# update-initramfs -k all -u
(grml)root@grml:/# exit

Clean unmount/removal before reboot:

root@grml ~ # umount /media/boot
root@grml ~ # umount /media/
root@grml ~ # cryptsetup luksClose cryptorootfs
root@grml ~ # dmsetup remove homesrv-bootfs
root@grml ~ # dmsetup remove homesrv-rootfs

NOTE: On a previous hardware installation I had to install GRUB 2.00-22 from Debian/unstable to get GRUB working.
Some metadata from different mdadm and LVM experiments seems to have been left and confused GRUB 1.99-27+deb7u2 from Debian/wheezy (I wasn’t able to reproduce this issue in my VM demo/test setup).
Just in case you experience the following error message, try GRUB >=2.00-22:

  # grub-install --recheck /dev/sda
  error: unknown LVM metadata header.
  error: unknown LVM metadata header.
  /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/cryptorootfs.  Check your device.map.
  Auto-detection of a filesystem of /dev/mapper/cryptorootfs failed.
  Try with --recheck.
  If the problem persists please report this together with the output of "/usr/sbin/grub-probe --device-map="/boot/grub/"
  --target=fs -v /boot/grub" to <>

Revisiting 2013

January 8th, 2014

2013 was a fantastic year for me. Following the real-life fork it was the first year of my daughter’s life. Whenever I heard people say “oh, children grow up sooooo fast” in the past it felt like a lie to me, but watching your own child grow I can only confirm it, and it’s fantastic.

It was also the first year in our new home, and I’m not just happy with the building itself – our neighborhood also turned out to be great. New friends to drink beer with. :)

My business year was kind of special as well. jenkins-debian-glue has really taken off and I had several interesting consulting gigs thanks to it. Business-wise I learned a lot, especially related to distributor handling thanks to business around Grml-Forensic.

2014 has already started interestingly, with several things in the pipeline – more details about that at a later stage.

Conclusion: I tend to call 2013 the best year of my life so far.

Event: “Streitgespräche” (debates) on 05.12.2013 + 06.12.2013 at the ESC

November 27th, 2013

Copy/paste from “Streitgespräche” on 2013-12-05 and 2013-12-06 – 7 p.m.:

Beep, beep, beep, we all love each other? Far from it! The aim is to revive the power of the good old debate: the cultivated exchange of opinions, the bitter controversy, thinking out loud, the evening exchange of ideas, the friendly battle of opinions, and so on!

That’s why, for the first time, we are hosting a potential dispute between two adversaries on each of two consecutive evenings, with the audience invited to join in. The venue is the new ESC (Bürgergasse 5, Palais Trauttmansdorff), Graz.

Topic 1: Between data gluttony and storage diet: are you already living in the cloud, or how many external hard drives do you own?

Thursday, 2013-12-05, 7 p.m.

In times of our own excessive appetite for data storage and the simultaneously omnipresent data surveillance by state and industry, the invited debaters discuss different approaches to and measures for handling one’s own data. Karl Voit and Heinz Wittenbrink face off.

Topic 2: A/symmetric Internet expansion – how fast should the Internet of the future get?

Friday, 2013-12-06, 7 p.m.

Many households are ruled by the inexpensive ADSL connection (read: lots of download, little upload), while only a few professional users pay for the far more expensive roll-out of fast fibre lines. Michael “Mika” Prokop and Josef “Seppo” Gründler quarrel over the direct and indirect political and practical consequences of this expansion policy for the data highway.

Event: Infracoders Graz meetup on 09.10.2013

October 3rd, 2013

Edmund Haselwanter invited me to talk about “Continuous Integration im Rechenzentrum” (Continuous Integration in the data center) at the first meetup of the Infracoders Graz. I’m very happy to accept, and based on my OSDC talk I will cover topics such as Continuous Integration/Delivery, Jenkins, jenkins-debian-glue, Puppet and mcollective.

  • When: Wednesday, October 9th, 2013, 7:00 p.m.
  • Where: BYTEPOETS GmbH / Münzgrabenstraße 92/2/3, Graz

Admission is free; beer and pizza are sponsored by BYTEPOETS. Further details are available on Meetup.

How to get grub-reboot working™

May 10th, 2013

So while testing Proxmox VE 3.0 RC1 I needed to reboot the system into a kernel version different from the default one in the GRUB bootloader. “lilo -R …” worked fine in the past, but with GRUB its equivalent isn’t as trivial to find at first sight. I remembered having had problems with grub-reboot in the past already – or, to quote a friend of mine: “has grub-reboot ever worked?”

Well, yes, grub-reboot works – but only once you’re aware that you need to manually edit /etc/default/grub first. :( It’s actually documented at, but not in the man page/info document of grub-reboot itself (providing a separate wiki page for this issue instead of updating the official documentation is not exactly a great idea).

So here you go:

# grep GRUB_DEFAULT /etc/default/grub 
# sed -i 's/^GRUB_DEFAULT.*/GRUB_DEFAULT=saved/' /etc/default/grub
# grep GRUB_DEFAULT /etc/default/grub 

# update-grub
# grep '^menuentry' /boot/grub/grub.cfg
menuentry 'Debian GNU/Linux, with Linux 3.2.0-4-amd64' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 3.2.0-4-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 2.6.32-20-pve' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 2.6.32-20-pve (recovery mode)' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os {

# grub-reboot 2  # to boot the third entry, the command writes to /boot/grub/grubenv
# reboot
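The effect of the GRUB_DEFAULT edit can be tried safely on a scratch copy of the file (the real file being /etc/default/grub):

```shell
# Demonstrate the GRUB_DEFAULT=saved edit on a demo copy of the config file.
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > grub.demo
sed -i 's/^GRUB_DEFAULT.*/GRUB_DEFAULT=saved/' grub.demo
grep GRUB_DEFAULT grub.demo   # now shows GRUB_DEFAULT=saved
```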

FTR: Filed as #707695.

The #newinwheezy game: Grml packages in Debian/wheezy

May 2nd, 2013

Following up on the #newinwheezy game: Debian/wheezy is the first Debian release which ships packages from the Grml system. Grml became an official Debian Derivative and I’m very happy that three major projects of Grml found their official way into Debian:

As the description states grml2usb is interesting for getting Grml onto a USB device when the dd(1) approach (“dd if=grml.iso of=/dev/sdX“) just isn’t flexible enough.

grml-debootstrap provides a decent way to install Debian systems from the command line. As its author I might be biased, but I have to mention that it works so nicely that it is in use at several of my customers for automated roll-outs without any worries at all, and I got reports from other companies that they are very happy users of it as well.

Finally, the grml-rescueboot package provides a very simple and nice way to boot a rescue system from within GRUB (short version: throw a Grml ISO into /boot/grml/, run update-grub and be done).

PS: Thanks everyone for joining the #newinwheezy game over at :)

The #newinwheezy game: new forensic packages in Debian/wheezy

April 29th, 2013

Debian/wheezy includes a bunch of packages for people interested in digital forensics. The following packages, maintained within the Debian Forensics team, ship with the upcoming Debian/wheezy stable release for the first time:

  • dc3dd: patched version of GNU dd with forensic features
  • extundelete: utility to recover deleted files from ext3/ext4 partition
  • rephrase: Specialized passphrase recovery tool for GnuPG
  • rkhunter: rootkit, backdoor, sniffer and exploit scanner (see comments)
  • rsakeyfind: locates BER-encoded RSA private keys in memory images
  • undbx: Tool to extract, recover and undelete e-mail messages from .dbx files

Join the #newinwheezy game and present packages which are new in Debian/wheezy.

OSDC 2013: slides for “Continuous Integration im Rechenzentrum”

April 19th, 2013

I’m back from the Open Source Data Center Conference 2013, which was a great event. I’ve just uploaded the slides for my (German) talk “Continuous Integration im Rechenzentrum”: PDF (4.4MB)

ldmtool: accessing Microsoft Windows dynamic disks from Linux

February 18th, 2013

Linux is a great platform for dealing with all kinds of different file systems, partition tables etc. But one of the few annoying situations when working in IT forensics used to be Microsoft Windows dynamic disks, AKA LDM (Logical Disk Manager).

Thanks to libldm’s ldmtool this is no longer true. A short demonstration from a real-life IT forensics investigation (actual IDs/data randomized for obvious reasons):

# ldmtool
ldm> scan /dev/sdc*
ldm> show diskgroup 1bad5bbc-a4b5-42e1-8823-001014b00003
{
  "name" : "FOOBAR-Dg0",
  "guid" : "1bad5bbc-a4b5-42e1-8823-001014b00003",
  "volumes" : [ ... ],
  "disks" : [ ... ]
}
ldm> show volume 1bad5bbc-a4b5-42e1-8823-001014b00003 Volume1
{
  "name" : "Volume1",
  "type" : "striped",
  "size" : 3907039232,
  "chunk-size" : 128,
  "hint" : "D:",
  "partitions" : [ ... ]
}
ldm> show partition 1bad5bbc-a4b5-42e1-8823-001014b00003 Disk1-01
{
  "name" : "Disk1-01",
  "start" : 1985,
  "size" : 1953519616,
  "disk" : "Disk1"
}
ldm> create all
Unable to create volume Volume1 in disk group 1bad5bbc-a4b5-42e1-8823-001014b00003: Disk Disk2 required by striped volume Volume1 is missing
ldm> scan /dev/sdd*
ldm> create all

The newly created device-mapper device can then be handled as usual:

# dmsetup ls | grep ldm
ldm_vol_FOOBAR-Dg0_Volume1        (254:4)
# mount /dev/mapper/ldm_vol_FOOBAR-Dg0_Volume1 /mnt/whatever

ldmtool just hit Debian unstable (and I intend to ship the tool with the upcoming version of Grml-Forensic).

Event: OSDC 2013

January 8th, 2013

I’ll be speaker at the Open Source Data Center Conference 2013 in Nuremberg/Germany on 17th and 18th April, talking about Continuous Integration/Delivery in the data-center. I was speaking at OSDC back in 2009 and very much enjoyed the conference – so I’m totally looking forward to OSDC 2013, hope to see you there!

Revisiting 2012

January 1st, 2013

2012 was a very special year for me, so it’s worth some words.

In April the Grazer Linuxtage event took place, nowadays one of the major IT events in Austria. I’m one of its original founders, and after being part of the main organisation team ten times I decided to leave it. I’m looking forward to enjoying the event in 2013 as a visitor; I’m sure the organisation team will provide an absolutely rocking event.

In August I officially launched the jenkins-debian-glue project. It turned out to have been worth all the effort to go public with it. I received plenty of feedback and people are hacking on such great things nowadays that I have to find some time soon to bring those nifty new features people came up with back into my mainline.

In December my wife and I moved to our new place, a house we bought in Graz. I’m very happy that everything worked out as planned. Thanks to wonderful friends we managed to reach a very pleasing state of our new place within just a few weeks. Once again many many thanks (you know who I mean)!

Throughout the whole year I’ve had several challenging IT consulting gigs in and outside of Austria, as well as interesting forensic investigations. Sadly most of them are covered by NDAs. That’s still something I can’t really get used to, being someone who likes to share and talk about my work with like-minded people.

The year ended with a real-life fork, being such a wonderful experience that I still can’t believe that it’s true.

Thanks 2012, totally looking forward to 2013!


December 31st, 2012

On 31st of December 2012 my lovely wife gave me the gift of my life: welcome to the world, Verena.

New address

December 11th, 2012

Roughly 4.5 years later it has happened again: I’ve moved, again within Graz, this time into something (at least intended to be) more long-lived. As usual, you can find the new address in the whois record of my domains.

October 2012: recordings from several IT conferences available

October 13th, 2012

In the last few days several IT events published recordings/slides from their events. I haven’t had the time to take a closer look at the material yet, but there might be some gems among them, so I thought it’s worth spreading the word:

Jenkins: marking an upstream job as failed if its downstream job fails

October 4th, 2012

This issue once again came up at a company I’m doing Jenkins/CI consulting for. Since I’m not aware of a ready-to-go solution, and since the solution involves some trickiness you need to be aware of, I’m hereby documenting it.

If you have a build pipeline inside your Jenkins setup you might have so-called upstream jobs which trigger downstream jobs. If such a downstream job fails you might want to set the build state of the upstream job to “failure” as well.

First of all install the Groovy Postbuild Plugin as well as the Copy Artifact Plugin. (The Copy Artifact Plugin is not strictly needed, you can also choose a different approach for artifact handling, but the plugin works very well for me.)

In the upstream job you have to archive artifacts. Otherwise the script that we will use in the downstream job doesn’t have a connection to the upstream job through the getUpstreamBuilds() method. If you don’t have any artifacts, just create a simple text file and use that as the artifact. After the artifact step you can place the trigger for the downstream job. This is the relevant part of a working sample configuration for such an upstream job:

Screenshot Jenkins upstream job configuration
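If your upstream job has no natural artifacts, the simple text file mentioned above can come from a shell build step, for example (the filename build-info.txt is just an example, any file name works):

```shell
# Shell build step in the upstream job: generate a trivial file that
# can be archived, so the downstream job's Groovy script can reach the
# upstream build via getUpstreamBuilds().
date > build-info.txt
test -s build-info.txt && echo "artifact created"
```

Then point the “Archive the artifacts” post-build step at that file.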

In the downstream job make sure to grab the artifacts from the upstream job. Then use build steps or whatever else you need, as usual. This is the relevant part of a working sample configuration for such a downstream job:

Screenshot Jenkins downstream job configuration

Finally use the following Groovy script as the Groovy Postbuild action:

// identify the upstream build via the archived artifacts
upstreamBuilds = manager.build.getUpstreamBuilds();

if(!upstreamBuilds) {
  manager.listener.logger.println("Error: could not identify upstream build");
} else {
  upstreamJob = upstreamBuilds.keySet().iterator().next();
  lastUpstreamBuild = upstreamJob.getLastBuild();
  buildResult = manager.build.getResult();

  if(lastUpstreamBuild.getResult().isBetterThan(buildResult)) {
    manager.listener.logger.println("Adjusting build state of upstream job to build result of this job, being " + buildResult);
    lastUpstreamBuild.setResult(buildResult);
    lastUpstreamBuild.save();
  }
}

That’s it. Now every time your downstream job fails it should set the corresponding upstream job to “failure” as well.