Some useful bits about Linux hardware support and patched Kernel packages

July 31st, 2019

Disclaimer: I started writing this blog post in May 2018, when Debian/stretch was the current stable release of Debian, but published this article in August 2019, so please keep the version information (Debian releases + kernels not being up2date) in mind.

The kernel version of Debian/stretch (4.9.0) didn’t support the RAID controller present in Lenovo ThinkSystem SN550 blade servers yet. The RAID controller was known to be supported with Ubuntu 18.10 using kernel v4.15, as well as with Grml ISOs using kernel v4.15 and newer. Using a more recent Debian kernel version wasn’t really an option for my customer, as there was no LTS kernel version that could be relied on. Using the kernel version from stretch-backports could have been an option, though only as a last resort: the customer this applied to controls the Debian repositories in use, and we’d have to track security issues more closely, test new versions of the kernel on different kinds of hardware more often,… whereas the kernel version from Debian/stable is known to work fine and is less in flux than the ones from backports. Alright, so it doesn’t support this new hardware model yet, but how do we identify the relevant changes in the kernel to have a chance of getting it supported in the stable Debian kernel?

Some bits about PCI IDs and related kernel drivers

We start by identifying the relevant hardware:

root@grml ~ # lspci | grep 'LSI.*RAID'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
root@grml ~ # lspci -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)

Which driver gets used for this device?

root@grml ~ # lspci -k -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
        Subsystem: Lenovo ThinkSystem RAID 530-4i Flex Adapter
        Kernel driver in use: megaraid_sas
        Kernel modules: megaraid_sas

So it’s the megaraid_sas driver, let’s check some version information:

root@grml ~ # modinfo megaraid_sas | grep version
version:        07.703.05.00-rc1
srcversion:     442923A12415C892220D5F0
vermagic:       4.15.0-1-grml-amd64 SMP mod_unload modversions

But how does the kernel know which driver should be used for this device? We start by listing further details about the hardware device:

root@grml ~ # lspci -n -s 0000:08:00.0
08:00.0 0104: 1000:001c (rev 01)

The 08:00.0 describes the hardware slot information ([domain:]bus:device.function), the 0104 describes the class (with 0104 being of type RAID bus controller, also see /usr/share/misc/pci.ids by searching for ‘C 01’ and then subclass ‘04’), and the (rev 01) obviously describes the revision number. We’re interested in the 1000:001c though. The 1000 identifies the vendor:

% grep '^1000' /usr/share/misc/pci.ids
1000  LSI Logic / Symbios Logic
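
The fields of the `lspci -n` output can be pulled apart with plain shell parameter expansion. A small sketch (the input line is hardcoded from the output above, and the variable names are mine):

```shell
# Split a "slot class: vendor:device (rev)" line as printed by `lspci -n`.
line='08:00.0 0104: 1000:001c (rev 01)'

slot=${line%% *}                        # 08:00.0 -> [domain:]bus:device.function
rest=${line#* }
class=${rest%%:*}                       # 0104    -> class 01 (storage), subclass 04 (RAID)
ids=$(echo "$line" | awk '{print $3}')  # 1000:001c
vendor=${ids%%:*}                       # 1000    -> LSI Logic / Symbios Logic
device=${ids##*:}                       # 001c    -> the model we're after

echo "$slot class=$class vendor=$vendor device=$device"
```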

The `001c` finally identifies the actual model. Having this information available, we can check the mapping of the megaraid_sas driver, using the `modules.alias` file of the kernel:

root@grml ~ # grep -i '1000.*001c' /lib/modules/$(uname -r)/modules.alias
alias pci:v00001000d0000001Csv*sd*bc*sc*i* megaraid_sas
root@grml ~ # modinfo megaraid_sas | grep -i 001c
alias:          pci:v00001000d0000001Csv*sd*bc*sc*i*
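
Under the hood, the kernel and the module tooling match a device’s modalias string (exported in sysfs under /sys/bus/pci/devices/*/modalias) against exactly these glob patterns. A minimal sketch of that match in plain shell — note that the sv/sd/bc/sc/i values in the example string below are made up for illustration:

```shell
# Example modalias string for vendor 1000, device 001C; the subvendor (sv),
# subdevice (sd) and class (bc/sc/i) fields here are invented values.
modalias='pci:v00001000d0000001Csv000017AAsd00001052bc01sc04i00'

# Pattern as found in modules.alias for megaraid_sas:
pattern='pci:v00001000d0000001Csv*sd*bc*sc*i*'

# case performs the same shell-style glob match:
case $modalias in
  $pattern) echo "megaraid_sas handles this device" ;;
  *)        echo "no driver match" ;;
esac
```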

Bingo! Now we can check this against the Debian/stretch kernel, which doesn’t support this device yet:

root@stretch:~# modinfo megaraid_sas | grep version
version:        06.811.02.00-rc1
srcversion:     64B34706678212A7A9CC1B1
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
root@stretch:~# modinfo megaraid_sas | grep -i 001c

No match here – bingo²! Now we know for sure that the ID 001c is relevant for us. How do we identify the corresponding change in the Linux kernel though?

The file drivers/scsi/megaraid/megaraid_sas.h of the kernel source lists the PCI device IDs supported by the megaraid_sas driver. Since we know that kernel v4.9 doesn’t support it yet, while it’s supported with v4.15, we can run "git log v4.9..v4.15 drivers/scsi/megaraid/megaraid_sas.h" in the git repository of the kernel to go through the relevant changes. It’s easier to run "git blame drivers/scsi/megaraid/megaraid_sas.h" though – then we’ll stumble upon our ID from before – `0x001C` – right at the top:

45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   59) #define PCI_DEVICE_ID_LSI_VENTURA                 0x0014
754f1bae0f1e3 (Shivasharan S              2017-10-19 02:48:49 -0700   60) #define PCI_DEVICE_ID_LSI_CRUSADER                0x0015
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   61) #define PCI_DEVICE_ID_LSI_HARPOON                 0x0016
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   62) #define PCI_DEVICE_ID_LSI_TOMCAT                  0x0017
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   63) #define PCI_DEVICE_ID_LSI_VENTURA_4PORT               0x001B
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   64) #define PCI_DEVICE_ID_LSI_CRUSADER_4PORT      0x001C

Alright, the relevant change was commit 45f4f2eb3da3c:

commit 45f4f2eb3da3cbff02c3d77c784c81320c733056
Author: Sasikumar Chandrasekaran […]
Date:   Tue Jan 10 18:20:43 2017 -0500

    scsi: megaraid_sas: Add new pci device Ids for SAS3.5 Generic Megaraid Controllers
    This patch contains new pci device ids for SAS3.5 Generic Megaraid Controllers
    Signed-off-by: Sasikumar Chandrasekaran […]
    Reviewed-by: Tomas Henzl […]
    Signed-off-by: Martin K. Petersen […]

diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index fdd519c1dd57..cb82195a8be1 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -56,6 +56,11 @@
 #define PCI_DEVICE_ID_LSI_INTRUDER_24          0x00cf
 #define PCI_DEVICE_ID_LSI_CUTLASS_52           0x0052
 #define PCI_DEVICE_ID_LSI_CUTLASS_53           0x0053
+#define PCI_DEVICE_ID_LSI_VENTURA                  0x0014
+#define PCI_DEVICE_ID_LSI_HARPOON                  0x0016
+#define PCI_DEVICE_ID_LSI_TOMCAT                   0x0017
+#define PCI_DEVICE_ID_LSI_VENTURA_4PORT                0x001B
+#define PCI_DEVICE_ID_LSI_CRUSADER_4PORT       0x001C

Custom Debian kernel packages for testing

Now that we identified the relevant change, what’s the easiest way to test it? Thanks to Ben Hutchings, there’s an easy way to build a custom Debian package, based on the official Debian kernel but including further patch(es). Make sure to have a Debian system available (I was running this inside an amd64 system, building for amd64), with matching deb-src entries in your apt sources.list and enough free disk space, then run:

% sudo apt install dpkg-dev build-essential devscripts fakeroot
% apt-get source -t stretch linux
% cd linux-*
% sudo apt-get build-dep linux
% bash debian/bin/test-patches -f amd64 -s none 0001-scsi-megaraid_sas-Add-new-pci-device-Ids-for-SAS3.5-.patch

This generates something like a linux-image-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb for you (next to further Debian packages like linux-headers-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb + linux-image-4.9.0-6-amd64-dbg_4.9.88-1+deb9u1a~test_amd64.deb), ready for installing and testing on the affected system. The Kernel Handbook documents this procedure as well; I just hadn’t been aware of the handy `debian/bin/test-patches` script before.

JFTR: sadly the patch with the additional PCI_DEVICE_ID* definitions was not enough (also see #900349), we seem to need further patches from the changes between v4.9 and v4.15. This turned out to be no longer relevant for my customer though, and it’s also working fine with Debian/buster nowadays.

Talk: “Best Practices in der IT-Administration, Version 2019” @ GLT19

July 29th, 2019

It’s been a while, but since people still keep asking me about it: I gave a talk at the Grazer Linuxtage 2019 (GLT19) on “Best Practices in der IT-Administration, Version 2019”. The 25-minute talk covers modern IT administration and the best practices of 2019.

The talk is available as a video recording on YouTube, as well as in various formats directly from the CCC. The slides (11MB, PDF) are available online, too. Enjoy!

BTW: a longer workshop version of this talk is available from me via SynPro Solutions; just get in touch with me if you’re interested.

(Not really) Revisiting 2018

July 26th, 2019

Mainly to recall what happened last year and to give thoughts and planning for upcoming year(s) I usually revisit the last year (previous years: 2017, 2016, 2015, 2014, 2013 + 2012).

But the end of 2018 and beginning of 2019 were quite stressful (mainly business-wise) and I was lacking the time and motivation to actually sit down and blog something™. Then I wanted to finally migrate from WordPress to Hugo and started to look into the migration, but got stuck for all kinds of reasons (no really nice™ theme, the more useful themes all pulling in a bunch of foreign-hosted stuff (meh), and finally some fiddling with broken URLs and further minor technical annoyances) that made me postpone any blogging. Since I had some blog articles in the queue that I wanted to actually publish, I decided to stay with WordPress for the time being and look into migrating to something like Hugo at a later point in time.

So I’m not really revisiting 2018 but just publishing what was stuck in my drafts since the middle of 2018. :)



  • Read ~one book per month on average, which is once again below my targets (the books I recall are “Was man von hier aus sehen kann” (Mariana Leky), “Monitoring with Prometheus” (James Turnbull), “Why We Sleep: The New Science of Sleep and Dreams” (Matthew Walker) and “Die Stunde zwischen Frau und Gitarre” (Clemens Setz); Clemens’ book really brought me back into the habit of reading books!)

Conclusion: nothing to complain about :)

Debian buster: changes in coreutils #newinbuster

July 26th, 2019

Debian buster is there, and similar to what we had with #newinwheezy, #newinjessie and #newinstretch it’s time for #newinbuster!

One package that isn’t new but its tools are used by many of us is coreutils, providing many essential system utilities. We have coreutils v8.26-3 in Debian/stretch and coreutils v8.30-3 in Debian/buster. Compared to the changes between jessie and stretch there are no new tools, but there are some new options available that I’d like to point out.

New features/options

b2sum + md5sum + sha1sum + sha224sum + sha256sum + sha384sum + sha512sum (compute and check message digest):

  -z, --zero           end each output line with NUL, not newline, and disable file name escaping

cp (copy files and directories):

  Use --reflink=never to ensure a standard copy is performed.
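
A quick illustration in a throwaway temp directory (paths are mine): on a copy-on-write filesystem like btrfs, cp may otherwise share data blocks between source and destination; --reflink=never forces a real byte-for-byte copy:

```shell
cd "$(mktemp -d)"
echo hello > src
cp --reflink=never src dst   # always perform a standard (full) copy
cmp -s src dst && echo "files are identical"
```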

env (run a program in a modified environment):

  -C, --chdir=DIR      change working directory to DIR
  -S, --split-string=S  process and split S into separate arguments;
                        used to pass multiple arguments on shebang lines
  -v, --debug          print verbose information for each processing step
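
Two of these can be tried directly on the command line; a small sketch (the commands and values are illustrative):

```shell
# -C: run a command in another working directory, no subshell/cd needed
env -C /tmp pwd

# -S: split a single string into separate arguments; this is what makes
# shebang lines like "#!/usr/bin/env -S python3 -u" work, since the
# kernel passes everything after the interpreter as one single argument
env -S 'printf %s-%s one two'
```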

ls (list directory contents), dir + vdir (list directory contents):

  --hyperlink[=WHEN]     hyperlink file names; WHEN can be 'always' (default if omitted), 'auto', or 'never'

This --hyperlink option is especially worth mentioning if you’re using a recent terminal emulator (especially based on VTE), see Hyperlinks (a.k.a. HTML-like anchors) in terminal emulators for further information.

rm (remove files or directories):

  --preserve-root=all   do not remove '/' (default); with 'all', reject any command line argument on a separate device from its parent

split (split a file into pieces):

  -x                      use hex suffixes starting at 0, not alphabetic
  --hex-suffixes[=FROM]  same as -x, but allow setting the start value
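
For example (a hedged sketch using throwaway files): splitting 40 lines into 2-line chunks produces 20 output files, enough to see the hexadecimal rollover after part_09:

```shell
cd "$(mktemp -d)"
seq 1 40 > data
split -l 2 -x data part_   # 20 chunks -> hex suffixes part_00 .. part_13
ls part_*
```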

timeout (run a command with a time limit):

  -v, --verbose  diagnose to stderr any signal sent upon timeout


date (print or set the system date and time):

--rfc-2822 (AKA -R) was renamed to --rfc-email, while --rfc-2822 is still supported
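
For example, both spellings produce the same RFC-2822-style timestamp:

```shell
# new preferred spelling; -R / --rfc-2822 keep working as before
date -u -d @0 --rfc-email   # prints: Thu, 01 Jan 1970 00:00:00 +0000
date -u -d @0 -R
```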

nl (write each FILE to standard output, with line numbers added):

Old default options: -bt        -fn -hn -i1 -l1 -nrn   -sTAB   -v1 -w6 
New default options: -bt -d'\:' -fn -hn -i1 -l1 -n'rn' -s<tab> -v1 -w6

Debian buster: changes in util-linux #newinbuster

July 26th, 2019

Debian buster is there, and similar to what we had with #newinwheezy, #newinjessie and #newinstretch it’s time for #newinbuster!

Update on 2019-07-26 22:55 UTC: Cyril Brulebois pointed out that findmnt (find a filesystem) was already available in Debian/stretch as part of the mount package; updated the blog post accordingly.

One package that isn’t new but its tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.29.2-1+deb9u1 in Debian/stretch and util-linux v2.33.1-0.1 in Debian/buster. There are many new options available and we also have a few new tools available.

Tools that have been taken over from / moved to other packages

  • cfdisk + fdisk + sfdisk (tools to display or manipulate a disk partition table) were moved from util-linux to fdisk
  • findmnt (find a filesystem) is no longer shipped via the mount binary package (of util-linux source package) but part of the util-linux binary package itself nowadays
  • setpriv (run a program with different Linux privilege settings) is no longer shipped as separate binary package of util-linux but part of the util-linux binary package itself nowadays
  • su (change user ID or become superuser) was moved over from the login package (kudos to Andreas Henriksson for this!)

Deprecated / removed tools

Tools that are no longer shipped with util-linux as of Debian/buster:

  • line binary (copies one line (up to a newline) from standard input to standard output), the head binary is its suggested replacement
  • pg binary (browse pagewise through text files), it’s marked deprecated in POSIX since 1997
  • tailf binary (follow the growth of a log file), it was deprecated in 2017 and `tail -f` from coreutils works fine
  • tunelp binary (set various parameters for the lp device), parallel port printers are suspected to be extinct by now

New tools

blkzone (run zone command on a device):

 blkzone <command> [options] <device>

Run zone command on the given block device.

 report       Report zone information about the given device
 reset        Reset a range of zones.

 -o, --offset <sector>  start sector of zone to act (in 512-byte sectors)
 -l, --length <sectors> maximum sectors to act (in 512-byte sectors)
 -c, --count <number>   maximum number of zones
 -v, --verbose          display more details

 -h, --help             display this help
 -V, --version          display version

For more details see blkzone(8).

chmem (configure memory, set a particular size or range of memory online or offline):

 chmem [options] [SIZE|RANGE|BLOCKRANGE]

Set a particular size or range of memory online or offline.

 -e, --enable       enable memory
 -d, --disable      disable memory
 -b, --blocks       use memory blocks
 -z, --zone <name>  select memory zone (see below)
 -v, --verbose      verbose output
 -h, --help         display this help
 -V, --version      display version

Supported zones:

For more details see chmem(8).

choom (display and adjust OOM-killer score):

 choom [options] -p pid
 choom [options] -n number -p pid
 choom [options] -n number command [args...]

Display and adjust OOM-killer score.

 -n, --adjust <num>     specify the adjust score value
 -p, --pid <num>        process ID

 -h, --help             display this help
 -V, --version          display version

For more details see choom(1).

fincore (count pages of file contents in core):

 fincore [options] file...

 -J, --json            use JSON output format
 -b, --bytes           print sizes in bytes rather than in human readable format
 -n, --noheadings      don't print headings
 -o, --output <list>   output columns
 -r, --raw             use raw output format

 -h, --help            display this help
 -V, --version         display version

Available output columns:
       PAGES  file data resident in memory in pages
        SIZE  size of the file
        FILE  file name
         RES  file data resident in memory in bytes

For more details see fincore(1).

lsmem (list the ranges of available memory with their online status):

 lsmem [options]

List the ranges of available memory with their online status.

 -J, --json           use JSON output format
 -P, --pairs          use key="value" output format
 -a, --all            list each individual memory block
 -b, --bytes          print SIZE in bytes rather than in human readable format
 -n, --noheadings     don't print headings
 -o, --output <list>  output columns
     --output-all     output all columns
 -r, --raw            use raw output format
 -S, --split <list>   split ranges by specified columns
 -s, --sysroot <dir>  use the specified directory as system root
     --summary[=when] print summary information (never,always or only)

 -h, --help           display this help
 -V, --version        display version

Available output columns:
      RANGE  start and end address of the memory range
       SIZE  size of the memory range
      STATE  online status of the memory range
  REMOVABLE  memory is removable
      BLOCK  memory block number or blocks range
       NODE  numa node of memory
      ZONES  valid zones for the memory range

For more details see lsmem(1).

New features/options

agetty + getty (alternative Linux getty):

  --list-speeds          display supported baud rates

blkid (locate/print block device attributes) gained a bunch of long options:


  --cache-file          same as -c 
  --no-encoding         same as -d
  --garbage-collect     same as -g
  --output              same as -o
  --list-filesystems    same as -k
  --match-tag           same as -s
  --match-token         same as -t
  --list-one            same as -l
  --label               same as -L
  --uuid                same as -U

Low-level probing options:

  --probe               same as -p
  --info                same as -i
  --size                same as -S
  --offset              same as -O
  --usages              same as -u
  --match-types         same as -n

dmesg (print or control the kernel ring buffer):

  -p, --force-prefix          force timestamp output on each line of multi-line messages

fallocate (preallocate or deallocate space to a file):

  -i, --insert-range   insert a hole at range, shifting existing data
  -x, --posix          use posix_fallocate(3) instead of fallocate(2)

findmnt (find a filesystem):

  --output-all       output all available columns
  --pseudo           print only pseudo-filesystems
  --real             print only real filesystems
  --tree             enable tree format output if possible

fstrim (discard unused blocks on a mounted filesystem):

  -A, --fstab         trim all supported mounted filesystems from /etc/fstab
  -n, --dry-run       does everything, but trim

hwclock (read or set the hardware clock (RTC)):

  -l                 same as --localtime
  --delay <sec>      delay used when set new RTC time
  -v, --verbose      display more details

lsblk (list block devices):


  -z, --zoned          print zone model
  -T, --tree           use tree format output
  --sysroot <dir>      use specified directory as system root

Available output columns:

  PATH     path to the device node
  FSAVAIL  filesystem size available
  FSSIZE   filesystem size
  FSUSED   filesystem size used
  FSUSE%   filesystem use percentage
  PTUUID   partition table identifier (usually UUID)
  PTTYPE   partition table type
  ZONED    zone model

lscpu (display information about the CPU architecture):

  -J, --json              use JSON for default or extended format

lslocks (list local system locks):


  -b, --bytes            print SIZE in bytes rather than in human readable format
      --output-all       output all columns

Available output columns:

  TYPE  kind of lock

lslogins (display information about known users in the system):


      --output-all         output all columns

Available output columns:

  PWD-METHOD  password encryption method

lsns (list namespaces):


      --output-all       output all columns
  -W, --nowrap           don't use multi-line representation

Available output columns:

  NETNSID  namespace ID as used by network subsystem
     NSFS  nsfs mountpoint (usually used network subsystem)

nsenter (run program with namespaces of other processes):

  -a, --all              enter all namespaces

rename.ul (rename files):

  -n, --no-act        do not make any changes
  -o, --no-overwrite  don't overwrite existing files
  -i, --interactive   prompt before overwrite

runuser (run a command with substitute user and group ID):

  -w, --whitelist-environment <list>  don't reset specified variables
  -P, --pty                       create a new pseudo-terminal

setsid (run a program in a new session):

  -f, --fork     always fork

setterm (set terminal attributes):

  --resize                          reset terminal rows and columns

unshare (run program with some namespaces unshared from parent):

  --kill-child[=<signame>]  when dying, kill the forked child (implies --fork), defaults to SIGKILL

wipefs (wipe a signature from a device):


  -i, --noheadings    don't print headings
  -J, --json          use JSON output format
  -O, --output <list> COLUMNS to display (see below)

Available output columns:
     UUID  partition/filesystem UUID
    LABEL  filesystem LABEL
   LENGTH  magic string length
    TYPE  superblock type
   OFFSET  magic string offset
    USAGE  type description
   DEVICE  block device name

zramctl (set up and control zram devices):

  -a, --algorithm lzo|lz4|lz4hc|deflate|842   compression algorithm to use (new compression algorithms lz4hc, deflate + 842)
       --output-all          output all columns

Deprecated and removed options

hwclock (read or set the hardware clock (RTC)):

  --badyear        ignore RTC's year because the BIOS is broken
  -c, --compare    periodically compare the system clock with the CMOS clock
  --getepoch       print out the kernel's hardware clock epoch value
  --setepoch       set the kernel's hardware clock epoch value to the value given with --epoch

unshare (run program with some namespaces unshared from parent):

  -s     (use --setgroups instead)

Inception: VM inside Docker inside KVM – Testing Debian VM installation builds on Travis CI

July 25th, 2018

Back in 2006 I started to write a tool called grml-debootstrap. grml-debootstrap is a wrapper around debootstrap for installing Debian systems. Using grml-debootstrap, it’s possible to install Debian systems from the command line, without having to boot a Debian installer ISO. This is very handy when you’re running a live system (like Grml or Tails) and want to install Debian. It’s as easy as running:

% sudo grml-debootstrap --target /dev/sda1 --grub /dev/sda

I’m aware that grml-debootstrap is used in Continuous Integration/Delivery environments, installing Debian systems several hundred or even thousands of times each month. Over time grml-debootstrap gained many new features. For example, since 2011 grml-debootstrap supports installation into VM images:

% sudo grml-debootstrap --vmfile --vmsize 3G --target debian.img

In 2016 we also added (U)EFI support (the target device in this example is a logical device on LVM):

% sudo grml-debootstrap --grub /dev/sdb --target /dev/mapper/debian--server-rootfs --efi /dev/sdb1

As you might imagine, every new feature we add also increases the risk of breaking something™ existing. Back in 2014, I contributed a setup using Packer to build automated machine images, using grml-debootstrap. That allowed me to generate Vagrant boxes with VirtualBox automation via Packer, serving as a base for reproducing customer environments, but also ensuring that some base features of grml-debootstrap work as intended (including backwards compatibility until Debian 5.0 AKA lenny).

The problem with this Packer setup though is that contributors don’t necessarily have Packer and VirtualBox (readily) available. They also might not have the network speed/bandwidth needed to run extensive tests. To get rid of those (local) dependencies and make contributing to grml-debootstrap more accessible (we’re currently working on e.g. systemd-networkd integration), I invested some time at DebCamp at DebConf18.

I decided to give Travis CI a spin. Travis CI is a well-known Continuous Integration service in the open source community. Among others, it provides Ubuntu Linux environments, either container-based or as full Virtual Machines, which is exactly what we need. Working on the Travis CI integration, I started with enabling ShellCheck (which is also available as a Debian package, BTW!), serving as a lint tool for shell scripts. All of that takes place in an isolated docker container.

To be able to execute grml-debootstrap, we need to install the latest version of grml-debootstrap from Git. That’s where travis.debian.net helps us – it is a hosted service for projects that host their Debian packaging on GitHub, using the Travis CI continuous integration platform to test builds on every update. The result is a Debian package (grml-debootstrap_*.deb) which we can use for installation, ensuring that we run exactly what we will ship to users (including scripts, configuration + dependencies). This also takes place in an isolated docker instance.

Then it’s time to start a Debian/stretch docker container, installing the grml-debootstrap*.deb file that resulted from the previous build step. Inside it, we execute grml-debootstrap with its VM installation feature, to install Debian into a qemu.img file. Via qemu-system-x86_64 we can then boot this VM image. Finally, goss takes care of testing and validating the resulting system.

The overall architecture looks like:

Diagram of TravisCI setup

So Travis CI is booting a KVM instance on GCE (Google Compute Engine) for us, inside of which we start three docker instances:

  1. shellcheck (koalaman/shellcheck:stable)
  2. travis.debian.net (debian:stretch + debian:unstable, controlled via TRAVIS_DEBIAN_DISTRIBUTION)
  3. VM image installation + validation (debian:stretch)

Inside the debian/stretch docker environment, we install and execute grml-debootstrap. Finally we’re booting it via Qemu/KVM and running tests against it.
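
To sketch what this looks like in .travis.yml (a hypothetical, simplified fragment; the script names are placeholders, not the project’s actual files):

```yaml
language: bash
sudo: required            # request a full VM, so KVM and loop devices are available
services:
  - docker
script:
  - docker run -v "$PWD:/mnt" koalaman/shellcheck:stable /mnt/grml-debootstrap  # lint
  - ./tests/build-deb.sh    # placeholder: build grml-debootstrap_*.deb in a docker container
  - ./tests/vm-install.sh   # placeholder: install into qemu.img, boot via qemu, validate with goss
```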

An example of such a Travis CI run is available online.

Travis CI builds heavily depend on a bunch of external resources, which might result in false negatives in builds; this is something we might improve by further integrating with our own infrastructure (Jenkins, GitLab, etc.). Anyway, it serves as a great base to make contributions to and refactoring of grml-debootstrap easier.

Thanks to Christian Hofstaedtler + Darshaka Pathirana for proof-reading this.

Talk: “Best Practices in der IT-Administration, Version 2018” @ GLT18

May 8th, 2018

At the Grazer Linuxtage 2018 (GLT18) I gave a talk on “Best Practices in der IT-Administration, Version 2018”. The 25-minute talk covers modern IT administration and the best practices of 2018.

The talk is available as a video recording on YouTube, as well as in various formats directly from the CCC. The slides (4.4MB, PDF) are available online, too. Enjoy!

Event: Infracoders-Graz – Best Practices in der IT-Administration

March 17th, 2018

The next meetup of the Infracoders-Graz takes place on Tuesday (20.03.2018). I was invited to give a talk and will speak about “Best Practices in der IT-Administration”.

  • What: talk on “Best Practices in der IT-Administration”
  • When: Tuesday, 20.03.2018, 19:00
  • Where: Aula X Space, Georgigasse 85, Graz
  • Free admission

Revisiting 2017

January 1st, 2018

Mainly to recall what happened last year and to give thoughts and planning for upcoming year(s) I’m once again revisiting the last year (previous years: 2016, 2015, 2014, 2013 + 2012). Here we go:


Technology / Open Source:


  • Played the drums less often than I wish I did




  • Read ~one book per month on average, which is below my targets but better than in previous years (the non-IT books I recall are “Open: An Autobiography” by Andre Agassi, “Tiere für Fortgeschrittene” by Eva Menasse, “Du musst dich nicht entscheiden wenn du tausend Träume hast” by Barbara Sher, “Fettnäpfchenführer Taiwan” by Deike Lautenschläger, “Der kleine Prinz” by Antoine de Saint-Exupéry, “Gebrauchsanweisung für Spanien” by Paul Ingendaay, “Irmgard Griss: im Gespräch mit Carina Kerschbaumer” by Carina Kerschbaumer, “Die letzte Ausfahrt” by Markus Huber)
  • Continued with taking care of my kids every Monday and half of Tuesday (which is still challenging every now and then with running your own business, but it’s so incredibly important and worth the effort)
  • Started to learn Spanish (maintaining a 354 day streak on Duolingo until the end of 2017)

Conclusion: after a very challenging 2016 I made several arrangements to ensure a better 2017. Recording further metrics about my daily work helped me with capacity and workload planning. I tested lots of different paper notebooks and workflows to improve my daily routines and work, including The Five Minute Journal and the Pomodoro Technique. Trying to get enough sleep and avoiding work in the evenings/nights as much as possible (overall it became an exception and not the norm) improved my work-life balance. In 2018 I’ll get back to attending a few selected events (incl. Fosdem, Grazer Linuxdays and DebConf) and work on some new projects. Exciting times ahead, looking forward to 2018!

Usage of Ansible for Continuous Configuration Management

December 16th, 2017

It all started with a tweet of mine:

Screenshot of the tweet

I received quite some feedback since then and I’d like to iterate on this.

I’ve been a puppet user since ~2008, and since ~2015 ansible has also been part of my sysadmin toolbox. Recently certain ansible setups I’m involved in have been growing faster than I’d like, both in terms of managed hosts/services and the size of the ansible playbooks. I like ansible for ad hoc tasks, like `ansible -i ansible_hosts all -m shell -a 'lsb_release -rs'` to get an overview of which distribution releases the systems are running, requiring only a working SSH connection and python on the client systems. ansible-cmdb provides a nice and simple-to-use ad hoc host overview without much effort and overhead. I even have puppetdb_to_ansible scripts to query a puppetdb via its API and generate host lists for usage with ansible on the fly. Ansible certainly has its use cases, e.g. for bootstrapping systems, orchestration and handling deployments.

Ansible has an easier learning curve than e.g. puppet, and this might be the underlying reason for its usage for tasks it’s not really good at. To be more precise: IMO ansible is a bad choice for continuous configuration management. Some observations, though YMMV:

  • ansible’s vaults are no real replacement for something like puppet’s hiera (though Jerakia might mitigate at least the pain regarding data lookups)
  • ansible runs are slow, and get slower with every single task you add
  • having a push model with ansible instead of pull (like puppet’s agent mode) implies you don’t get/force regular runs all the time, and your ansible playbooks might just not work anymore once you (have to) touch them again
  • the lack of a DSL means e.g. that each package manager gets its own module (apt, dnf, yum, …); having too many ways to do something more often than not results in something I’d tend to call spaghetti code
  • the lack of community modules comparable to Puppet’s Forge
  • the lack of a central DB (like puppetdb) means you can’t do something like with puppet’s exported resources, which is useful e.g. for central ssh hostkey handling, monitoring checks,…
  • the lack of a resources DAG in ansible might look like a welcome simplification in the beginning, but its absence is becoming a problem when complexity and requirements grow (example: delete all unmanaged files from a directory)
  • it’s not easy at all to have ansible run automated and remotely on a couple of hundred hosts without stumbling over anything — Rudolph Bott
  • as complexity grows, the limitations of Ansible’s (lack of a) language become more maddening — Felix Frank

Let me be clear: I’m in no way saying that puppet doesn’t have its problems (side-rant: it took way too long until Debian/stretch was properly supported by puppet’s AIO packages). I had and still have my ups and downs with it, though in 2017, and especially since puppet v5, it works well enough for all my use cases at a diverse set of customers. Whenever I can choose between puppet and ansible for continuous configuration management (without any host-specific restrictions like unsupported architectures, memory limitations, … that puppet wouldn’t properly support) I prefer puppet. Ansible can and does exist as a nice addition next to puppet for me, even where MCollective/Choria is available. Ansible has its use cases, just not continuous configuration management for me.

The hardest part is to leave a tool behind once you’ve reached the end of its scale. Once you feel like a tool takes more effort than it is worth, you should take a step back and re-evaluate your choices. And quoting Felix Frank:

OTOH, if you bend either tool towards a common goal, you’re not playing to its respective strengths.

Thanks: Michael Renner and Christian Hofstaedtler for initial proof reading and feedback

Grml 2017.05 – Codename Freedatensuppe

June 14th, 2017

The Debian stretch release is going to happen soon (on 2017-06-17) and since our latest Grml release is based on a very recent version of Debian stretch I’m taking this as an opportunity to announce it here as well. By the end of May we released a new stable release of Grml (the Debian-based live system focusing on system administrators’ needs), version 2017.05 with codename Freedatensuppe.

Details about the changes of the new release are available in the official release notes and as usual the ISOs are available via

With this new Grml release we finally made the switch from file-rc to systemd. From a user’s point of view this doesn’t change that much, though to prevent having to answer even more mails regarding the switch I wrote down some thoughts in Grml’s FAQ. There are some things that we still need to improve and sort out, but overall the switch to systemd so far went better than anticipated (thanks a lot to the pkg-systemd folks, especially Felipe Sateler and Michael Biebl!).

And last but not least, Darshaka Pathirana helped me a lot with the systemd integration and polishing the release, many thanks!

Happy Grml-ing!

The #newinstretch game: dbgsym packages in Debian/stretch

May 26th, 2017

Debug packages include debug symbols and so far were usually named <package>-dbg in Debian. Those packages are essential if you have to debug failing (especially: crashing) programs. Since December 2015 Debian has automatic dbgsym packages, built by default. Those packages are available as <package>-dbgsym, so starting with Debian/stretch you should no longer look for -dbg packages but for -dbgsym instead. Currently there are 13,369 dbgsym packages available for the amd64 architecture of Debian/stretch; compared to the 2,250 packages which I counted being available for Debian/jessie this is a huge improvement. (If you’re interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.)

The dbgsym packages are NOT provided by the usual Debian archive though (which is a good thing, since those packages consume quite some disk space, e.g. just the amd64 stretch mirror of debian-debug consumes 47GB). Instead there’s a new archive called debian-debug. To get access to the dbgsym packages via the debian-debug suite on your Debian/stretch system, include the following entry in your apt sources.list configuration (replace the mirror with whatever mirror you prefer):

deb stretch-debug main
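A complete entry, using Debian’s default mirror as an example (any debian-debug mirror works):

```
deb http://deb.debian.org/debian-debug/ stretch-debug main
```

Run `apt-get update` afterwards to fetch the new package lists.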

If you’re not yet familiar with usage of such debug packages let me give you a short demo.

Let’s start with sending SIGILL (Illegal Instruction) to a running sha256sum process, causing it to generate a so-called core dump file:

% sha256sum /dev/urandom &
[1] 1126
% kill -4 1126
[1]+  Illegal instruction     (core dumped) sha256sum /dev/urandom
% file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sha256sum /dev/urandom', real uid: 1000, effective uid: 1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/sha256sum', platform: 'x86_64'
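A side note: if no core file shows up at all, core dumps may be disabled in your shell. A minimal check, as a sketch (assuming bash on Linux):

```shell
# Allow core files of unlimited size for this shell and its children,
# then check the limit and where/how the kernel writes core dumps.
ulimit -c unlimited
ulimit -c                            # should print "unlimited" now
cat /proc/sys/kernel/core_pattern    # a leading "|" means cores go to a handler
```

If core_pattern starts with a pipe (e.g. to systemd-coredump or apport), look for the core file via that handler instead of the current directory.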

Now we can run the GNU Debugger (gdb) on this core file, executing:

% gdb sha256sum core
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...(no debugging symbols found)...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in ?? ()
(gdb) bt
#0  0x000055fe9aab63db in ?? ()
#1  0x000055fe9aab8606 in ?? ()
#2  0x000055fe9aab4e5b in ?? ()
#3  0x000055fe9aab42ea in ?? ()
#4  0x00007faec30872b1 in __libc_start_main (main=0x55fe9aab3ae0, argc=2, argv=0x7ffc512951f8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc512951e8) at ../csu/libc-start.c:291
#5  0x000055fe9aab4b5a in ?? ()

As you can see by the several “??” question marks, the “bt” command (short for backtrace) doesn’t provide useful information.
So let’s install the corresponding debug package, which is coreutils-dbgsym in this case (since the sha256sum binary which generated the core file is part of the coreutils package). Then let’s rerun the same gdb steps:

% gdb sha256sum core
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526     lib/sha256.c: No such file or directory.
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036

As you can see, it’s now reading the debug symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug, which is what we were looking for.
gdb also tells us that lib/sha256.c isn’t available. For even better debugging it’s useful to have the corresponding source code around. This is just an `apt-get source coreutils ; cd coreutils-8.26/` away:

~/coreutils-8.26 % gdb sha256sum ~/core
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526           R( h, a, b, c, d, e, f, g, K(25), M(25) );
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036

Now we’re ready for all the debugging magic. :)

Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian!

The #newinstretch game: new forensic packages in Debian/stretch

May 25th, 2017

Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games it’s time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. These are the packages maintained within the Debian Forensics team which are new in the Debian/stretch release compared to Debian/jessie (ignoring jessie-backports):

  • bruteforce-salted-openssl: try to find the passphrase for files encrypted with OpenSSL
  • cewl: custom word list generator
  • dfdatetime/python-dfdatetime: Digital Forensics date and time library
  • dfvfs/python-dfvfs: Digital Forensics Virtual File System
  • dfwinreg: Digital Forensics Windows Registry library
  • dislocker: read/write encrypted BitLocker volumes
  • forensics-all: Debian Forensics Environment – essential components (metapackage)
  • forensics-colorize: show differences between files using color graphics
  • forensics-extra: Forensics Environment – extra console components (metapackage)
  • hashdeep: recursively compute hashsums or piecewise hashings
  • hashrat: hashing tool supporting several hashes and recursivity
  • libesedb(-utils): Extensible Storage Engine DB access library
  • libevt(-utils): Windows Event Log (EVT) format access library
  • libevtx(-utils): Windows XML Event Log format access library
  • libfsntfs(-utils): NTFS access library
  • libfvde(-utils): FileVault Drive Encryption access library
  • libfwnt: Windows NT data type library
  • libfwsi: Windows Shell Item format access library
  • liblnk(-utils): Windows Shortcut File format access library
  • libmsiecf(-utils): Microsoft Internet Explorer Cache File access library
  • libolecf(-utils): OLE2 Compound File format access library
  • libqcow(-utils): QEMU Copy-On-Write image format access library
  • libregf(-utils): Windows NT Registry File (REGF) format access library
  • libscca(-utils): Windows Prefetch File access library
  • libsigscan(-utils): binary signature scanning library
  • libsmdev(-utils): storage media device access library
  • libsmraw(-utils): split RAW image format access library
  • libvhdi(-utils): Virtual Hard Disk image format access library
  • libvmdk(-utils): VMWare Virtual Disk format access library
  • libvshadow(-utils): Volume Shadow Snapshot format access library
  • libvslvm(-utils): Linux LVM volume system format access library
  • plaso: super timeline all the things
  • pompem: Exploit and Vulnerability Finder
  • pytsk/python-tsk: Python Bindings for The Sleuth Kit
  • rekall(-core): memory analysis and incident response framework
  • unhide.rb: Forensic tool to find processes hidden by rootkits (was already present in wheezy but missing in jessie, available via jessie-backports though)
  • winregfs: Windows registry FUSE filesystem

Join the #newinstretch game and present packages and features which are new in Debian/stretch.

Debian stretch: changes in util-linux #newinstretch

May 19th, 2017

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub’s Icon font, fonts-octicons and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new but whose tools are used by many of us is util-linux, which provides many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and Debian/stretch will ship util-linux >=v2.29.2. There are many new options available and we also get a few new tools.

Tools that have been taken over from other packages

  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mesg: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie

New tools

  • lsipc: show information on IPC facilities, e.g.:
  • root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                                              LIMIT USED  USE%
    MSGMNI   Number of message queues                                 32000    0 0.00%
    MSGMAX   Max size of message (bytes)                               8192    -     -
    MSGMNB   Default max size of queue (bytes)                        16384    -     -
    SHMMNI   Shared memory segments                                    4096    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
    SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
    SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
    SEMMNS   Total number of semaphores                          1024000000    0 0.00%
    SEMMSL   Max semaphores per semaphore set.                        32000    -     -
    SEMOPM   Max number of operations per semop(2)                      500    -     -
    SEMVMX   Semaphore max value                                      32767    -     -
  • lslogins: display information about known users in the system, e.g.:
  • root@ff2713f55b36:/# lslogins 
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1            
    65534 nobody      0        0        1            nobody
  • lsns: list system namespaces, e.g.:
  • root@ff2713f55b36:/# lsns
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices
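As a quick taste of one of the new tools: setpriv can dump the privilege settings of the current process before you change anything. A minimal sketch (assuming util-linux >= 2.29 is installed):

```shell
# Show the uid/gid, supplementary groups and capabilities the current
# process runs with — the baseline that setpriv would modify.
setpriv --dump
```

Typical usage then looks like `setpriv --reuid=nobody --regid=nogroup --clear-groups <command>` to drop privileges before running a command.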

New features/options

blkdiscard (discard the content of sectors on a device):

-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

New available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details

flock (manage file locks from shell scripts):

-F, --no-fork            execute command without forking
    --verbose            increase verbosity

getty (open a terminal and set its mode):

--reload               reload prompts on running agetty instances

hwclock (query or set the hardware clock):

--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets

losetup (set up and control loop devices):

-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT 
-J, --json                    use JSON --list output format

New available --list column:

DIO  access backing file with direct-io

lsblk (list information about block devices):

-J, --json           use JSON output format

New available columns (for --output):

HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

-y, --physical          print physical instead of logical IDs

New available column:

DRAWER  logical drawer number

lslocks (list local system locks):

-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes

sfdisk (display or manipulate a disk partition table):

New Commands:

-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes

New Options:

-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)

Available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

-o, --options <list>     comma-separated list of swap options

New available columns (for --show):

UUID   swap uuid
LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces

Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

Debugging a mystery: ssh causing strange exit codes?

May 18th, 2017

XKCD comic 1722

Recently we had a WTF moment at a customer of mine which is worth sharing.

In an automated deployment procedure we install Debian systems and set up MySQL HA/scalability. Installation of the first node works fine, but during installation of the second node something weird was going on: even though the deployment procedure reported that everything went fine, it wasn’t fine at all. After bisecting down to the relevant command lines we identified that the failure happened between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command produced a wrong exit code: instead of bailing out with an error (we’re running under ‘set -e‘) it returned with exit code 0 and the deployment procedure continued, even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receive exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper 
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper 
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 127

Ok, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected.
What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper 
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Oh, but that works as expected!?

When looking at this behavior I had the feeling that something was going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c "ssh root@localhost hostname" /dev/null`, and also not with `socat EXEC:"ssh root@localhost hostname" STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Bingo! Quoting ssh(1):

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).
             This must be used when ssh is run in the background.  A common trick is
             to use this to run X11 programs on a remote machine.  For example,
             ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi,
             and the X11 connection will be automatically forwarded over an encrypted
             channel.  The ssh program will be put in the background.  (This does not work
             if ssh needs to ask for a password or passphrase; see also the -f option.)

Let’s execute the scripts through `strace -ff -s500 ./ssh_wrapper` to see what’s going on in more detail.
In the strace run without ssh’s `-n` option we see that it duplicates stdin (file descriptor 0), which gets assigned to file descriptor 4:

dup(0)            = 4
read(4, "exit 1\n", 16384) = 7

while in the strace run with ssh’s `-n` option being present there’s no file descriptor duplication but only:

open("/dev/null", O_RDONLY) = 4

This matches ssh.c’s ssh_session2_open function (where stdin_null_flag corresponds to ssh’s `-n` option):

        if (stdin_null_flag) {
                in = open(_PATH_DEVNULL, O_RDONLY);
        } else {
                in = dup(STDIN_FILENO);
        }

This behavior can also be simulated if we explicitly read from /dev/null, and this indeed works as well:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null </dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

The underlying problem is that both bash and ssh are consuming from stdin. This can be verified via:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
echo "Inner: pre"
while read line; do echo "Eat: $line" ; done
echo "Inner: post"
exit 3
EOF
echo "Outer: exit code = $?"

# ./ssh_wrapper
Inner: pre
Eat: echo "Inner: post"
Eat: exit 3
Outer: exit code = 0
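The same effect can be reproduced without chroot and ssh at all — any command that drains stdin will do. A minimal sketch (assuming plain bash and cat):

```shell
# The inner cat consumes the rest of the script from stdin, so bash never
# sees the "exit 1" line and the overall exit code stays 0.
bash << "EOF"
cat >/dev/null   # eats the remaining script lines
exit 1
EOF
echo "return code = $?"
```

This prints `return code = 0`, just like the ssh case above.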

This behavior applies to bash, ksh, mksh, posh and zsh. Only dash doesn’t show this behavior.
To understand the difference between bash and dash executions we can use the following test scripts:

# cat stdin-test-cmp
#!/bin/sh

TEST_SH=bash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-bash.out
TEST_SH=dash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-dash.out

# cat stdin-test
#!/bin/sh

: ${TEST_SH:=dash}

$TEST_SH << "EOF"
echo "Inner: pre"
while read line; do echo "Eat: $line"; done
echo "Inner: post"
exit 3
EOF

echo "Outer: exit code = $?"

When executing `./stdin-test-cmp` and comparing the generated files stdin-test-bash.out and stdin-test-dash.out, you’ll notice that dash consumes all of stdin in one single go (a single `read(0, …)`), instead of character by character as required by POSIX for non-seekable input and implemented by bash, ksh, mksh, posh and zsh. See stdin-test-bash.out on the left and stdin-test-dash.out on the right in this screenshot:

[Screenshot of vimdiff on the *.out files]

So when ssh tries to read from stdin there’s nothing there anymore.
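The practical consequence of this difference can be seen even without strace; a sketch (assuming both bash and dash are installed):

```shell
# bash leaves the remaining lines on stdin for the inner cat to eat, so the
# "exit 1" never executes; dash has already slurped the whole script into
# its buffer, so the inner cat finds nothing and "exit 1" still runs.
for sh in bash dash; do
  "$sh" << "EOF"
cat >/dev/null
exit 1
EOF
  echo "$sh: return code = $?"
done
```

This prints `bash: return code = 0` but `dash: return code = 1`.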

Quoting POSIX’s sh section:

When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell.

If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes.

So while we learned that both bash and ssh consume from stdin and that this needs to be prevented by either using ssh’s `-n` option or explicitly redirecting stdin, we also noticed that dash’s behavior differs from all the other major shells and could be considered a bug (which we reported as #862907).

Lessons learned:

  • Be aware of ssh’s `-n` option when using ssh/scp inside scripts.
  • Feeding shell scripts via stdin is not only error-prone but also very inefficient, as a standards-compliant implementation requires a read(2) system call per byte of input. Instead, create a temporary script and execute that.
  • When debugging problems make sure to explore different approaches and tools to ensure you’re not relying on a buggy behavior in any involved tool.
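The "temporary script" advice from the list above can be sketched like this (a minimal sketch; names are made up):

```shell
# Write the commands to a private temp file and run that file: bash then
# reads the script from the file, so a stdin-hungry command (like ssh
# without -n) can no longer eat the following script lines.
tmpscript=$(mktemp) || exit 1
cat > "$tmpscript" << "EOF"
cat >/dev/null     # stand-in for a stdin-consuming command such as ssh
exit 1
EOF
bash "$tmpscript" < /dev/null
echo "return code = $?"
rm -f "$tmpscript"
```

Unlike the heredoc variant, this prints `return code = 1`: the `exit 1` is executed because nothing can consume it from stdin.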

Thanks to Guillem Jover for review and feedback regarding this blog post.

Revisiting 2016

January 4th, 2017

Mainly to recall what happened last year and to give thoughts and planning for upcoming year(s) I’m once again revisiting the last year (previous years: 2015, 2014, 2013 + 2012). Here we go:


Technology / Open Source:


  • Bought a Cajón, my kids love it as much as I do :)
  • Played the drums more often in the beginning of 2016, but this went down to close to zero in the second half of 2016 (meh)


  • Bought a unicycle and started to learn to ride it in summer (I've been stuck without progress since then)
  • Played less Badminton (as expected, since I skipped the whole summer holiday season, but otherwise went fairly regularly during the winter and summer terms) and almost never played Table Tennis (mainly due to date collisions with my sparring partner, but also due to other time constraints), though I managed to more or less keep my handicap in both sports


  • Third year of business with SynPro Solutions, very happy with what we achieved and very glad to have such fantastic partners
  • Started to gather more metrics around my work related to Grml Solutions, SynPro Solutions + Grml-Forensic for better (capacity) planning (that’s something I’d like to talk about in public at some point)


  • My reading activities were mainly newspapers and articles, sadly not so many books
  • First year of kindergarten for my older daughter; this brought us into a school-like schedule I wasn't used to anymore (I'm finally starting to adapt to it though, also related to the capacity planning efforts)
  • Took two months of child care time (great time!), later on also taking care of my kids every Monday and half of Tuesday. This turned out to be way more stressful than expected (with just ~30 hours left per week for normal working hours and all my business duties; this is also the main reason why I started with the metrics thingy)

Conclusion: 2016 was quite different from previous years, mainly because of being very time-constrained on all sides. One of the most challenging years overall. I'm planning to change quite a few things for 2017 (I started to do so already in Q4/2016), hoping for the best.

Event: Digitaldialog Privacy

November 25th, 2016

Digitaldialog is an event series of the Styrian business promotion agency SFG (Steirische Wirtschaftsförderung). On Tuesday, 2016-11-29, the Digitaldialog on the topic of privacy will take place. I was invited to take part in the panel discussion and am looking forward to interesting questions from the audience. :)

Further information about the event is available on the SFG website and in the event flyer (PDF).

  • Date: Tuesday, 2016-11-29, starting at 16:00
  • Organizers: SFG, Infonova, Evolaris, IBC, Campus02, Kleine Zeitung, APA
  • Location: Seering 10, 8141 Unterpremstätten, IBC Graz, Hotel Ramada
  • Free admission

Childcare allowance (Kinderbetreuungsgeld) in Austria, with a focus on the self-employed

September 1st, 2016

“The most valuable time of my life was my paternity leave. My twins now have a completely different relationship with me. And I with them. Anyone who voluntarily passes on this has only themselves to blame.” — Florian Klenk (journalist and editor-in-chief of the city magazine FALTER)

I can absolutely sign off on that. In my case it wasn't twins but two daughters about 2.5 years apart, and for the self-employed the word Karenz doesn't really apply in its usual meaning (Elternkarenz, i.e. leave from work on the occasion of parenthood, applies to employees). But my wife and I received childcare allowance (Kinderbetreuungsgeld) for both daughters and got to learn quite a few things along the way…

TL;DR: the childcare allowance is a good thing (even though I glance a bit enviously towards Sweden), even if there are a few things to keep in mind. I can nevertheless warmly recommend to everyone (especially men!) to make use of the childcare time and the childcare allowance and to spend this time with your own child.

A few facts up front for those who haven't dealt with this at all yet:

Currently (2016) there have been, since 2008, five models of childcare allowance (Kinderbetreuungsgeld, KBG): four flat-rate variants (12+2, 15+3, 20+4 and 30+6 months) and the income-dependent variant with 12+2 months. 12+2, for example, means that one parent takes 12 months, and the other parent then has the option of a further 2 months.

A prerequisite for receiving the KBG is receiving the family allowance (Familienbeihilfe; nowadays granted automatically at birth without an application) as well as a shared main residence with the child. As a rule, maternity protection (Mutterschutz) starts 8 weeks before the calculated due date and lasts a further 8 weeks after the birth (4 months in total). These 8 weeks after the birth normally already count as the first two months of childcare, with the higher amount being paid out (so if the maternity pay (Wochengeld) is higher than the KBG, you receive the Wochengeld, but 2 months of the childcare period are used up nonetheless). In the classic case this means 8 weeks of Wochengeld before the birth and then 12, 15, 20 or 30 months staying with the child. Beware: switching between partners is possible at most twice (e.g. mother->father->mother), and each stretch must be at least 2 months in one go.

So much for the basics in short form; further details are available, among other places, on the official Kinderbetreuungsgeld website as well as in the (German) Wikipedia articles Kinderbetreuungsgeld and Wochengeld.

Both times we opted for the income-dependent model, on the one hand because both of us wanted to get back to work as soon as possible, and on the other hand because it was financially the most attractive option for us. Depending on the model you receive between ~14.5 and 66 Euro per day; the income-dependent model is based on 80% of your previous income, capped at 66 Euro per day (i.e. ~2k Euro per month). Depending on the model there are different additional-earnings limits; with the income-dependent variant the limit is very low (roughly in the range of the marginal earnings threshold of ~400 Euro per month).


  • for the self-employed the so-called cash-inflow principle (Geld-Zuflussprinzip) applies: during the period in which you receive the KBG, no money may arrive on your bank account (or it has to stay below the respective additional-earnings limit)
  • changes to the chosen model are only possible within 14 days of submission, but within that time the application often hasn't even been processed yet (I was told 6-8 weeks of processing time when submitting), so think carefully about which model you want and then double- and triple-check before sending it off
  • the KBG only arrives on the 10th of the following month (childcare starts around May 10th? Then the first payment, for the remaining ~20 days of May, hits the account on June 10th)
  • the child must be registered at the same main residence as the parents (with a secondary residence you quickly walk into a trap)
  • the income calculation is based on the year before the child's birth. So if the child is born at the end of December 2013, the first parent takes the full 12 months in 2014 and the second parent then takes the remaining two months at the beginning of 2015, then for the second parent the basis of calculation is 2012(!)
  • if another child arrives while you are already in a childcare model, you don't get the KBG twice because of that (but you do get a 50% supplement)
  • the protection against dismissal for employees only lasts 2 years, so the longest model (30+6) isn't a serious option in that respect
  • if both parents don't work full-time, the (full) income-dependent model may no longer work out for both of them for a further child, but: you can still apply for it and will be downgraded to the flat-rate 12+2 model if 80% of your income falls below the threshold; the second partner can nevertheless (even though the other one was downgraded to the flat-rate model!) receive the income-dependent KBG!
  • you only receive the full amount once the exact basis of calculation has been determined; until then the respective minimum daily rate is used

As a self-employed person, however, I unfortunately got the impression that the SVA in particular isn't all that interested in people taking childcare time:

  • the first time around, the money suddenly stopped being transferred on the scheduled date after the first month. When I asked, it turned out that the basis of calculation had become available in the meantime and the amount had been adjusted accordingly. But the payments were simply suspended, without any feedback to me whatsoever… :(
  • the KBG application can also be submitted via FinanzOnline, but beware: the SVA cannot even be selected as social insurance carrier, and the Karenz option has to be filled in (just enter yourself as your employer). As social insurance carrier I therefore chose the GKK (which effectively pays out the KBG anyway, even if you're with the SVA!). 3 weeks passed without any feedback, so I asked about the current status. On the phone the GKK told me that the application had been forwarded to the SVA and that I should ask there. Said and done; the SVA however couldn't find anything about it, so I was asked to submit it again in paper form and to expect 6-8 weeks of processing time (luckily it wasn't quite that bad in the end)
  • while the paper form offers the option "until the maximum benefit duration", the FinanzOnline variant requires an exact date, and the calculation of the exact start and end day of the childcare period seems to be a well-guarded secret: both the SVA hotline and the staff at the local counter gave me wrong dates :-/

My recommendations therefore:

  • inform yourself, ask and verify; not all wording is unambiguous, and when in doubt it's better to ask specifically
  • buffer enough money on your account in advance so that you don't run into stress even without the KBG, in case something gets delayed; this also goes hand in hand with:
  • explain the situation to your customers well in advance and collect outstanding invoices in time, or let them stand until after the childcare period
  • (especially as a self-employed person) submit the KBG application in paper form, keep a copy for yourself and follow up reasonably soon after submission to check whether everything is in order

Beware: as of 2017-03-01 there are changes to the KBG, so details mentioned here may have changed since.

Disclaimer: IANAL; corrections, suggestions for improvement and additions are welcome though.

systemd backport of v230 available for Debian/jessie

July 28th, 2016

At DebConf 16 I was working on a systemd backport for Debian/jessie. Results are officially available via the Debian archive now.

In Debian jessie we have systemd v215 (which upstream-wise originally dates back to 2014-07-03, plus changes + fixes from the pkg-systemd folks of course). Now via Debian backports you have the option to update systemd to a very recent version: v230. If you have jessie-backports enabled, it's just an `apt install systemd -t jessie-backports` away. For the changes between v215 and v230, see upstream's NEWS file.
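
If jessie-backports isn't enabled yet, the setup boils down to one apt source line plus a targeted install; a minimal sketch (the mirror URL is just an example, any Debian mirror carrying jessie-backports works):

```shell
# /etc/apt/sources.list.d/jessie-backports.list
deb http://deb.debian.org/debian jessie-backports main

# then refresh the package lists and pull systemd from backports:
apt update
apt install -t jessie-backports systemd

# check which version is installed and which is available:
apt-cache policy systemd
```

Note that backports are deliberately not installed by default; the `-t jessie-backports` target is needed exactly because of their lower pin priority.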

(Actually the systemd backport has been available since 2016-07-19 for amd64, arm64 + armhf; for mips, mipsel, powerpc, ppc64el + s390x we had to fight against GCC ICEs when compiling on/for Debian/jessie, and for the i386 architecture the systemd test-suite identified broken O_TMPFILE permission handling.)

Thanks to Alexander Wirt from the backports team for accepting my backport, thanks to intrigeri for the related apparmor backport, to Guus Sliepen for the related ifupdown backport and to Didier Raboud for the related usb-modeswitch/usb-modeswitch-data backports. Thanks to everyone testing my systemd backport and reporting feedback. Thanks a lot to Felipe Sateler and Martin Pitt for reviews, feedback and cooperation. And special thanks to Michael Biebl for all his feedback, reviews and help with the systemd backport from its very beginnings until the latest upload.

PS: I cannot stress enough how fantastic Debian's pkg-systemd team is. Responsive, friendly, helpful, dedicated and skilled folks. Thanks, folks!

DebConf16 in Capetown/South Africa: Lessons learnt

July 19th, 2016

DebConf 16 in Capetown/South Africa was fantastic for many reasons.

My Capetown/South Africa/Culture/Flight related lessons:

  • Avoid flying on Sundays (especially in/from Austria where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually recline your seat on the flight when trying to sleep, and don't forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. "turn off the lights" signs in many places around the campus), several toilets flush all their water even for just small™ business, and two big lights in front of a main building seem to shine all day long for no apparent reason
  • There doesn’t seem to be a standard for the side of hot vs. cold water-taps
  • Soap pieces and towels on several toilets
  • For pedestrians there's just a very short green phase at the traffic lights (~2-3 seconds), then blinking red lights show that you may finish crossing the street (but *should* not start walking) until it's fully red again (not that many people seem to care about the rules anyway :))
  • Warning lights of cars are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese quacking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT, as well as in any other place I went to at UCT, worked excellently (I measured ~22-26 Mbit/s downstream in my room and around 26 Mbit/s in the hacklab) (kudos!)
  • WLAN is available even on top of the Table Mountain (WLAN working and being free without any registration)
  • Number26 credit card is great to withdraw money from ATMs without any extra fees from common credit card companies (except for the fee the ATM itself charges but displays ahead on-site anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and the money beaming using the Number26 mobile app

My technical lessons from DebConf16:

  • ran into way too many yak-shaving situations, some of them might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch queue branches in git): it's very nice to be able to work with plain git and then get patches for your changes. Having upstream patches (like cherry-picks) inside debian/patches/ and the Debian-specific changes inside debian/patches/debian/ is a lovely idea; this can be easily achieved via "Gbp-Pq: Topic debian" with gbp's pq and is used e.g. in pkg-systemd. Thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc --download $URL_TO_DSC` more often
  • features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery --running $interface` to get the live configuration of a network interface, JSON support (`ifquery --format=json …`) and mako templates support to generate configuration for plenty of interfaces
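
The gbp-pq workflow and the ifquery invocations mentioned above can be sketched as the following command sequence (to be run inside a git packaging repository resp. on a host with ifupdown2; the interface name eth0 is illustrative only):

```shell
# gbp-pq round trip: turn debian/patches/ into git commits,
# work on them with plain git, then write them back out.
gbp pq import        # creates and switches to a patch-queue/<branch>
# ...hack and commit with plain git as usual...
gbp pq export        # regenerates debian/patches/* from the commits

# ifupdown2's ifquery, e.g. to inspect the live state of an interface:
ifquery --running eth0
ifquery --format=json eth0
```

The patch-queue branch is a regular git branch, so the usual rebase/cherry-pick tooling applies before exporting.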

BTW, thanks to the video team the recordings from the sessions are available online.