
Revisiting 2017

January 1st, 2018

Mainly to recall what happened last year and to collect thoughts and plans for the upcoming year(s), I'm once again revisiting the past year (previous years: 2016, 2015, 2014, 2013 + 2012). Here we go:

Events:

Technology / Open Source:

Music:

  • Played the drums less often than I would have liked

Sports:

Business:

Personal:

  • Read ~one book per month on average, which is below my target but better than in previous years (the non-IT books I recall are “Open: An Autobiography” by Andre Agassi, “Tiere für Fortgeschrittene” by Eva Menasse, “Du musst dich nicht entscheiden wenn du tausend Träume hast” by Barbara Sher, “Fettnäpfchenführer Taiwan” by Deike Lautenschläger, “Der kleine Prinz” by Antoine de Saint-Exupéry, “Gebrauchsanweisung für Spanien” by Paul Ingendaay, “Irmgard Griss: im Gespräch mit Carina Kerschbaumer” by Carina Kerschbaumer, “Die letzte Ausfahrt” by Markus Huber)
  • Continued taking care of my kids every Monday and half of every Tuesday (still challenging now and then while running your own business, but it's so incredibly important and worth the effort)
  • Started to learn Spanish (maintaining a 354 day streak on Duolingo until the end of 2017)

Conclusion: after a very challenging 2016 I made several arrangements to ensure a better 2017. Recording further metrics about my daily work helped me with capacity and workload planning. I tested lots of different paper notebooks and workflows to improve my daily routines and work, including The Five Minute Journal and the Pomodoro Technique. Trying to get enough sleep and avoiding work in the evenings/nights as much as possible (overall it became the exception rather than the norm) improved my work-life balance. In 2018 I'll get back to attending a few selected events (incl. Fosdem, Grazer Linuxdays and DebConf) and work on some new projects. Exciting times ahead, looking forward to 2018!

Usage of Ansible for Continuous Configuration Management

December 16th, 2017

It all started with a tweet of mine:

Screenshot of https://twitter.com/mikagrml/status/941304704004448257

I've received quite a bit of feedback since then and would like to elaborate on it.

I've been a puppet user since ~2008, and since ~2015 ansible has also been part of my sysadmin toolbox. Recently certain ansible setups I'm involved in have grown faster than I'd like, both in terms of managed hosts/services and in the size of the ansible playbooks. I like ansible for ad hoc tasks, like `ansible -i ansible_hosts all -m shell -a 'lsb_release -rs'` to get an overview of which distribution releases the systems are running, requiring only a working SSH connection and python on the client systems. ansible-cmdb provides a nice, simple-to-use ad hoc host overview without much effort and overhead. I even have puppetdb_to_ansible scripts to query a puppetdb via its API and generate host lists for use with ansible on the fly. Ansible certainly has its use cases, e.g. bootstrapping systems, orchestration and handling deployments.
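Such a puppetdb_to_ansible step can be sketched roughly like this (the PuppetDB host and node names are made up, and the JSON response is canned so the snippet runs standalone; /pdb/query/v4/nodes is PuppetDB's node-listing endpoint):

```shell
#!/bin/bash
# In a real run the node list would come from PuppetDB, e.g.:
#   response=$(curl -s http://puppetdb.example.com:8080/pdb/query/v4/nodes)
# For illustration we use a canned response of the same shape:
response='[{"certname":"web1.example.com"},{"certname":"db1.example.com"}]'

# Pull the certnames out of the JSON array (sed-based to avoid a jq
# dependency) and write them as a flat ansible inventory file:
printf '%s\n' "$response" \
  | tr ',' '\n' \
  | sed -n 's/.*"certname":"\([^"]*\)".*/\1/p' > ansible_hosts

cat ansible_hosts
```

The generated ansible_hosts file can then be used for ad hoc runs via `-i ansible_hosts`, as in the lsb_release example above.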

Ansible has an easier learning curve than e.g. puppet, and this seems to be the underlying reason why it ends up being used for tasks it's not really good at. To be more precise: IMO ansible is a bad choice for continuous configuration management. Some observations, though YMMV:

  • ansible’s vaults are no real replacement for something like puppet’s hiera (though Jerakia might mitigate at least the pain regarding data lookups)
  • ansible runs are slow, and get slower with every single task you add
  • having a push model with ansible instead of pull (like puppet’s agent mode) implies you don’t get/force regular runs all the time, and your ansible playbooks might just not work anymore once you (have to) touch them again
  • the lack of a DSL results in e.g. each package manager having its own module (apt, dnf, yum, …) and in too many ways to do the same thing, more often than not producing something I'd tend to call spaghetti code
  • the lack of community modules comparable to Puppet’s Forge
  • the lack of a central DB (like puppetdb) means you can’t do anything like puppet’s exported resources, which are useful e.g. for central ssh hostkey handling, monitoring checks, …
  • the lack of a resource DAG in ansible might look like a welcome simplification in the beginning, but its absence becomes a problem as complexity and requirements grow (example: delete all unmanaged files from a directory)
  • it’s not easy at all to have ansible run automated and remotely on a couple of hundred hosts without stumbling over anything — Rudolph Bott
  • as complexity grows, the limitations of Ansible’s (lack of a) language become more maddening — Felix Frank

Let me be clear: I’m in no way saying that puppet doesn’t have its problems (side-rant: it took way too long until Debian/stretch was properly supported by puppet’s AIO packages). I had and still have my ups and downs with it, though in 2017, and especially since puppet v5, it has worked well enough for all my use cases at a diverse set of customers. Whenever I can choose between puppet and ansible for continuous configuration management (without any host-specific restrictions like unsupported architectures, memory limitations, … that puppet wouldn’t properly support) I prefer puppet. Ansible can and does exist as a nice addition next to puppet for me, even where MCollective/Choria is available. Ansible has its use cases, just not continuous configuration management for me.

The hardest part is leaving a tool behind once you’ve reached the end of its scale. Once you feel like a tool takes more effort than it is worth, you should take a step back and re-evaluate your choices. And quoting Felix Frank:

OTOH, if you bend either tool towards a common goal, you’re not playing to its respective strengths.

Thanks: Michael Renner and Christian Hofstaedtler for initial proofreading and feedback

Grml 2017.05 – Codename Freedatensuppe

June 14th, 2017

The Debian stretch release is going to happen soon (on 2017-06-17), and since our latest Grml release is based on a very recent version of Debian stretch, I’m taking this as an opportunity to announce it here as well. At the end of May we released a new stable release of Grml (the Debian-based live system focusing on system administrators’ needs), version 2017.05 with codename Freedatensuppe.

Details about the changes of the new release are available in the official release notes and as usual the ISOs are available via grml.org/download.

With this new Grml release we finally made the switch from file-rc to systemd. From a user’s point of view this doesn’t change that much, though to prevent having to answer even more mails regarding the switch I wrote down some thoughts in Grml’s FAQ. There are some things that we still need to improve and sort out, but overall the switch to systemd so far went better than anticipated (thanks a lot to the pkg-systemd folks, especially Felipe Sateler and Michael Biebl!).

And last but not least, Darshaka Pathirana helped me a lot with the systemd integration and polishing the release, many thanks!

Happy Grml-ing!

The #newinstretch game: dbgsym packages in Debian/stretch

May 26th, 2017

Debug packages include debug symbols and so far were usually named <package>-dbg in Debian. Those packages are essential if you have to debug failing (especially: crashing) programs. Since December 2015 Debian has automatic dbgsym packages, built by default. Those packages are available as <package>-dbgsym, so starting with Debian/stretch you should no longer look for -dbg packages but for -dbgsym instead. Currently there are 13,369 dbgsym packages available for the amd64 architecture of Debian/stretch; compared to the 2,250 packages I counted as available for Debian/jessie, this is a huge improvement. (If you’re interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.)

The dbgsym packages are NOT provided by the usual Debian archive though (which is a good thing, since those packages consume quite a lot of disk space; e.g. just the amd64 stretch mirror of debian-debug consumes 47GB). Instead there’s a new archive called debian-debug. To get access to the dbgsym packages via the debian-debug suite on your Debian/stretch system, include the following entry in your apt sources.list configuration (replace deb.debian.org with whatever mirror you prefer):

deb http://deb.debian.org/debian-debug/ stretch-debug main

If you’re not yet familiar with usage of such debug packages let me give you a short demo.

Let’s start with sending SIGILL (Illegal Instruction) to a running sha256sum process, causing it to generate a so-called core dump file:

% sha256sum /dev/urandom &
[1] 1126
% kill -4 1126
% 
[1]+  Illegal instruction     (core dumped) sha256sum /dev/urandom
% ls
core
% file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sha256sum /dev/urandom', real uid: 1000, effective uid: 1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/sha256sum', platform: 'x86_64'
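A side note: if no core file shows up when you try this, the core file size limit is the usual culprit (and whether the file is literally named `core` also depends on the kernel.core_pattern setting, e.g. when systemd-coredump is in use). A quick sketch to verify that the process really dies with SIGILL:

```shell
#!/bin/bash
# Allow core dumps in the current shell (no-op if already allowed):
ulimit -c unlimited 2>/dev/null || true

sha256sum /dev/urandom & pid=$!
kill -4 "$pid"        # SIGILL, as in the demo above

status=0
wait "$pid" || status=$?
echo "exit status: $status"   # 128 + 4 (SIGILL) = 132
```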

Now we can run the GNU Debugger (gdb) on this core file, executing:

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...(no debugging symbols found)...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in ?? ()
(gdb) bt
#0  0x000055fe9aab63db in ?? ()
#1  0x000055fe9aab8606 in ?? ()
#2  0x000055fe9aab4e5b in ?? ()
#3  0x000055fe9aab42ea in ?? ()
#4  0x00007faec30872b1 in __libc_start_main (main=0x55fe9aab3ae0, argc=2, argv=0x7ffc512951f8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc512951e8) at ../csu/libc-start.c:291
#5  0x000055fe9aab4b5a in ?? ()
(gdb) 

As you can see by the several “??” question marks, the “bt” command (short for backtrace) doesn’t provide useful information.
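If you’re unsure which package a crashing binary belongs to (and hence which -dbgsym package to install), dpkg can tell you; a small sketch, assuming a Debian-based system:

```shell
#!/bin/bash
# Look up the package owning the binary; the matching debug package
# is then named <package>-dbgsym:
pkg=$(dpkg -S "$(command -v sha256sum)" | cut -d: -f1)
echo "binary package: $pkg"
echo "debug package:  ${pkg}-dbgsym"
```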
So let’s install the corresponding debug package, which is coreutils-dbgsym in this case (since the sha256sum binary that generated the core file is part of the coreutils package). Then let’s rerun the same gdb steps:

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526     lib/sha256.c: No such file or directory.
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036

As you can see it’s reading the debug symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug and this is what we were looking for.
gdb now also tells us that we don’t have lib/sha256.c available. For even better debugging it’s useful to have the corresponding source code available, and that’s just an `apt-get source coreutils ; cd coreutils-8.26/` away:

~/coreutils-8.26 % gdb sha256sum ~/core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526           R( h, a, b, c, d, e, f, g, K(25), M(25) );
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036
(gdb) 

Now we’re ready for all the debugging magic. :)

Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian!

The #newinstretch game: new forensic packages in Debian/stretch

May 25th, 2017

Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games it’s time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. These are the packages maintained within the Debian Forensics team that are new in the Debian/stretch release compared to Debian/jessie (ignoring jessie-backports):

  • bruteforce-salted-openssl: try to find the passphrase for files encrypted with OpenSSL
  • cewl: custom word list generator
  • dfdatetime/python-dfdatetime: Digital Forensics date and time library
  • dfvfs/python-dfvfs: Digital Forensics Virtual File System
  • dfwinreg: Digital Forensics Windows Registry library
  • dislocker: read/write encrypted BitLocker volumes
  • forensics-all: Debian Forensics Environment – essential components (metapackage)
  • forensics-colorize: show differences between files using color graphics
  • forensics-extra: Forensics Environment – extra console components (metapackage)
  • hashdeep: recursively compute hashsums or piecewise hashings
  • hashrat: hashing tool supporting several hashes and recursivity
  • libesedb(-utils): Extensible Storage Engine DB access library
  • libevt(-utils): Windows Event Log (EVT) format access library
  • libevtx(-utils): Windows XML Event Log format access library
  • libfsntfs(-utils): NTFS access library
  • libfvde(-utils): FileVault Drive Encryption access library
  • libfwnt: Windows NT data type library
  • libfwsi: Windows Shell Item format access library
  • liblnk(-utils): Windows Shortcut File format access library
  • libmsiecf(-utils): Microsoft Internet Explorer Cache File access library
  • libolecf(-utils): OLE2 Compound File format access library
  • libqcow(-utils): QEMU Copy-On-Write image format access library
  • libregf(-utils): Windows NT Registry File (REGF) format access library
  • libscca(-utils): Windows Prefetch File access library
  • libsigscan(-utils): binary signature scanning library
  • libsmdev(-utils): storage media device access library
  • libsmraw(-utils): split RAW image format access library
  • libvhdi(-utils): Virtual Hard Disk image format access library
  • libvmdk(-utils): VMWare Virtual Disk format access library
  • libvshadow(-utils): Volume Shadow Snapshot format access library
  • libvslvm(-utils): Linux LVM volume system format access library
  • plaso: super timeline all the things
  • pompem: Exploit and Vulnerability Finder
  • pytsk/python-tsk: Python Bindings for The Sleuth Kit
  • rekall(-core): memory analysis and incident response framework
  • unhide.rb: Forensic tool to find processes hidden by rootkits (was already present in wheezy but missing in jessie, available via jessie-backports though)
  • winregfs: Windows registry FUSE filesystem

Join the #newinstretch game and present packages and features which are new in Debian/stretch.

Debian stretch: changes in util-linux #newinstretch

May 19th, 2017

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub’s Icon font, fonts-octicons and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new, but whose tools are used by many of us, is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie, while Debian/stretch will ship util-linux >=v2.29.2. There are many new options available and also a few new tools.

Tools that have been taken over from other packages

  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mesg: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie

New tools

  • lsipc: show information on IPC facilities, e.g.:
    root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                                              LIMIT USED  USE%
    MSGMNI   Number of message queues                                 32000    0 0.00%
    MSGMAX   Max size of message (bytes)                               8192    -     -
    MSGMNB   Default max size of queue (bytes)                        16384    -     -
    SHMMNI   Shared memory segments                                    4096    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
    SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
    SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
    SEMMNS   Total number of semaphores                          1024000000    0 0.00%
    SEMMSL   Max semaphores per semaphore set.                        32000    -     -
    SEMOPM   Max number of operations per semop(2)                      500    -     -
    SEMVMX   Semaphore max value                                      32767    -     -
    
  • lslogins: display information about known users in the system, e.g.:
    root@ff2713f55b36:/# lslogins
      UID USER     PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1            
    65534 nobody      0        0        1            nobody
    
  • lsns: list system namespaces, e.g.:
    root@ff2713f55b36:/# lsns
            NS TYPE   NPROCS PID USER COMMAND
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
    
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices

New features/options

blkdiscard (discard the content of sectors on a device):

-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

New available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details

flock (manage file locks from shell scripts):

-F, --no-fork            execute command without forking
    --verbose            increase verbosity

getty (open a terminal and set its mode):

--reload               reload prompts on running agetty instances

hwclock (query or set the hardware clock):

--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except writing the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets

losetup (set up and control loop devices):

-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT 
-J, --json                    use JSON --list output format

New available --list column:

DIO  access backing file with direct-io

lsblk (list information about block devices):

-J, --json           use JSON output format

New available columns (for --output):

HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

-y, --physical          print physical instead of logical IDs

New available column:

DRAWER  logical drawer number

lslocks (list local system locks):

-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes

sfdisk (display or manipulate a disk partition table):

New Commands:

-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes

New Options:

-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)

Available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

-o, --options <list>     comma-separated list of swap options

New available columns (for --show):

UUID   swap uuid
LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces

Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

Debugging a mystery: ssh causing strange exit codes?

May 18th, 2017

XKCD comic 1722

Recently we had a WTF moment at a customer of mine which is worth sharing.

In an automated deployment procedure we’re installing Debian systems and setting up MySQL HA/Scalability. Installation of the first node works fine, but during installation of the second node something weird was going on: even though the deployment procedure reported that everything went fine, it wasn’t fine at all. After bisecting down to the relevant command lines we identified that the failure happens between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command caused a wrong exit code to show up: instead of bailing out with an error (we’re running under `set -e`) it returned exit code 0, and the deployment procedure continued even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receive exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper 
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper 
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 127

Ok, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected.
What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper 
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Oh, but that works as expected!?

When looking at this behavior I had the feeling that something was going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c "ssh root@localhost hostname" /dev/null`, and also not with `socat EXEC:"ssh root@localhost hostname" STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Bingo! Quoting ssh(1):

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).
             This must be used when ssh is run in the background.  A common trick is
             to use this to run X11 programs on a remote machine.  For example,
             ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi,
             and the X11 connection will be automatically forwarded over an encrypted
             channel.  The ssh program will be put in the background.  (This does not work
             if ssh needs to ask for a password or passphrase; see also the -f option.)

Let’s execute the scripts through `strace -ff -s500 ./ssh_wrapper` to see what’s going on in more detail.
In the strace run without ssh’s `-n` option we see that it duplicates stdin (file descriptor 0), which gets assigned to file descriptor 4:

dup(0)            = 4
[...]
read(4, "exit 1\n", 16384) = 7

while in the strace run with ssh’s `-n` option being present there’s no file descriptor duplication but only:

open("/dev/null", O_RDONLY) = 4

This matches ssh.c’s ssh_session2_open function (where stdin_null_flag corresponds to ssh’s `-n` option):

        if (stdin_null_flag) {                                            
                in = open(_PATH_DEVNULL, O_RDONLY);
        } else {
                in = dup(STDIN_FILENO);
        }

This behavior can also be simulated by explicitly redirecting stdin from /dev/null, and indeed this works as well:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null </dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

The underlying problem is that both bash and ssh are consuming from stdin. This can be verified via:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
echo "Inner: pre"
while read line; do echo "Eat: $line" ; done
echo "Inner: post"
exit 3
EOF
echo "Outer: exit code = $?"

# ./ssh_wrapper
Inner: pre
Eat: echo "Inner: post"
Eat: exit 3
Outer: exit code = 0
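The inner reader doesn’t have to be a shell builtin like `read`: any external process (such as ssh) inherits the same stdin file descriptor and steals script text just as well. A minimal sketch of this effect, feeding bash to bash without any ssh/chroot involved:

```shell
# The inner bash process inherits the heredoc as stdin and consumes
# exactly one line of the outer script's text.
bash <<"EOF"
stolen=$(bash -c 'IFS= read -r line; printf "%s" "$line"')
this line is data for the inner process, not a command to execute
echo "inner process read: $stolen"
EOF
```

The second line of the heredoc never gets executed as a command; it ends up in `$stolen` instead, just like the `exit 1` ended up being read by ssh in the wrapper above.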

This behavior applies to bash, ksh, mksh, posh and zsh. Only dash doesn’t show this behavior.
To understand the difference between bash and dash executions we can use the following test scripts:

# cat stdin-test-cmp
#!/bin/sh

TEST_SH=bash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-bash.out
TEST_SH=dash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-dash.out

# cat stdin-test
#!/bin/sh

: ${TEST_SH:=dash}

$TEST_SH <<"EOF"
echo "Inner: pre"
while read line; do echo "Eat: $line"; done
echo "Inner: post"
exit 3
EOF

echo "Outer: exit code = $?"

When executing `./stdin-test-cmp` and comparing the generated files stdin-test-bash.out and stdin-test-dash.out, you’ll notice that dash consumes all of stdin in one single go (a single `read(0, …)`), instead of reading character by character as specified by POSIX and as implemented by bash, ksh, mksh, posh and zsh. See stdin-test-bash.out on the left side and stdin-test-dash.out on the right side in this screenshot:

[screenshot: vimdiff of stdin-test-bash.out (left) vs. stdin-test-dash.out (right)]

So when ssh tries to read from stdin there’s nothing there anymore.

Quoting POSIX’s sh section:

When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell.

If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes.

So while we learned that both bash and ssh consume from stdin, and that this needs to be prevented by either using ssh’s `-n` option or explicitly redirecting stdin, we also noticed that dash’s behavior differs from all the other main shells and could be considered a bug (which we reported as #862907).

Lessons learned:

  • Be aware of ssh’s `-n` option when using ssh/scp inside scripts.
  • Feeding shell scripts via stdin is not only error-prone but also very inefficient, since a standards-compliant implementation requires a read(2) system call per byte of input. Instead, write the script to a temporary file and execute that.
  • When debugging problems, explore different approaches and tools to make sure you’re not relying on buggy behavior in any of the involved tools.
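The temporary-script advice can be sketched like this, reusing the same inner test script as in the demonstration above; since the script now comes from a file, the shell’s stdin no longer carries script text that an inner reader could eat:

```shell
# Write the script to a temp file and execute it from there; stdin is
# explicitly redirected from /dev/null so the inner `read` finds nothing.
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
cat > "$tmp" <<"EOF"
echo "Inner: pre"
while read line; do echo "Eat: $line"; done
echo "Inner: post"
exit 3
EOF
bash "$tmp" </dev/null
echo "Outer: exit code = $?"
```

Unlike the stdin-fed variant, this prints both "Inner: pre" and "Inner: post" and correctly propagates the exit code 3.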

Thanks to Guillem Jover for review and feedback regarding this blog post.

Revisiting 2016

January 4th, 2017

Mainly to recall what happened last year and to give thoughts and planning for upcoming year(s) I’m once again revisiting the last year (previous years: 2015, 2014, 2013 + 2012). Here we go:

Events:

Technology / Open Source:

Music:

  • Bought a cajón, and my kids love it as much as I do :)
  • Played the drums more often in the beginning of 2016, but that went down to close to zero in the second half of 2016 (meh)

Sports:

  • Bought a unicycle and started to learn to ride it in summer (but have been stuck without progress since then)
  • Played less badminton (as expected, since I skipped the whole summer holiday season, but otherwise went fairly regularly during the winter and summer terms) and almost never played table tennis (mainly due to date collisions with my sparring partner, but also due to other time constraints), though I managed to more or less keep my handicap in both sports

Business:

  • Third year of business with SynPro Solutions, very happy with what we achieved and very glad to have such fantastic partners
  • Started to gather more metrics around my work related to Grml Solutions, SynPro Solutions + Grml-Forensic for better (capacity) planning (that’s something I’d like to talk about in public at some point)

Personal:

  • Reading activity was mainly limited to newspapers and articles, sadly not that many books
  • First year of kindergarten for my older daughter, which brought us into a school-like schedule I wasn’t used to anymore (I’m finally starting to adapt to it though, also thanks to the capacity planning efforts)
  • Took two months of child care time (great time!), later on also taking care of my kids every Monday and half of Tuesday – this turned out to be way more stressful than expected (having just ~30 hours left per week for normal working hours and all my business duties; this is also the main reason why I started with the metrics thingy)

Conclusion: 2016 was quite different from previous years, mainly because of being very time-constrained on all fronts. One of the most challenging years overall. I’m planning to change quite a few things for 2017 (and already started to do so in Q4/2016), hoping for the best.

Event: Digitaldialog Privacy

November 25th, 2016

Digitaldialog is an event series run by the Styrian business promotion agency SFG (Steirische Wirtschaftsförderung). On Tuesday, November 29th, the Digitaldialog on the topic of privacy takes place. I was invited to join the panel discussion and am looking forward to interesting questions from the audience. :)

Further information about the event is available on the SFG website and in the event flyer (PDF).

  • Date: Tuesday, 2016-11-29, starting at 16:00
  • Organizers: SFG, Infonova, Evolaris, IBC, Campus02, Kleine Zeitung, APA
  • Location: Seering 10, 8141 Unterpremstätten, IBC Graz, Hotel Ramada
  • Free admission

Kinderbetreuungsgeld (Childcare Benefit) in Austria, with a Focus on the Self-Employed

September 1st, 2016

“The most valuable time of my life was my paternity leave. My twins now have a completely different relationship with me, and I with them. Anyone who voluntarily passes on this has only themselves to blame.” — Florian Klenk (journalist and editor-in-chief of the city magazine FALTER)

I can absolutely sign off on that. In my case it wasn’t twins but two daughters roughly 2.5 years apart, and for the self-employed the term Karenz doesn’t really apply in its usual meaning (Elternkarenz, the leave from work on the occasion of parenthood, applies to employees). But my wife and I claimed the Kinderbetreuungsgeld (childcare benefit) for both daughters, and we got to learn quite a few things along the way…

TL;DR: the Kinderbetreuungsgeld is a good thing (even though I glance a little enviously towards Sweden), even if there is a lot to keep in mind. Nevertheless I can warmly recommend to everyone (especially men!) to take childcare time, claim the Kinderbetreuungsgeld and spend this time with your own child.

A few facts up front for those who haven’t looked into this yet:

Currently (2016) there have been five models of Kinderbetreuungsgeld (KBG) since 2008: four flat-rate variants (12+2, 15+3, 20+4 and 30+6 months) and the income-dependent variant with 12+2 months. 12+2, for example, means that one parent claims 12 months and the other parent then has the option of a further 2 months.

Prerequisites for receiving the KBG are receipt of the family allowance (Familienbeihilfe, which nowadays is granted automatically at birth without an application) and sharing the main residence with the child. As a rule, maternity protection (Mutterschutz) starts 8 weeks before the calculated due date and continues for another 8 weeks after the birth (4 months in total). Those 8 weeks after the birth normally already count as the first two months of the childcare period, with the higher of the two amounts being paid out (so if the maternity allowance (Wochengeld) is higher than the KBG you receive the Wochengeld, but 2 months of the childcare period are used up nevertheless). In the classic case this means 8 weeks of Wochengeld before the birth and then 12, 15, 20 or 30 months of staying with the child. Beware: switching between the partners is possible at most twice (e.g. mother->father->mother), and each block must be at least 2 months long.

So much for the basics in a nutshell; further details are available on the official Kinderbetreuungsgeld website as well as in the (German) Wikipedia articles Kinderbetreuungsgeld and Wochengeld.

We chose the income-dependent model both times, partly because both of us wanted to get back to work as soon as possible, and partly because it was financially the most attractive option for us. Depending on the model you get between ~14.50 and 66 Euro per day; the income-dependent model pays 80% of your previous income, capped at 66 Euro per day (i.e. ~2k Euro per month). Each model has a different additional-earnings limit (Zuverdienstgrenze); with the income-dependent variant this limit is very low (roughly in the range of the marginal-earnings threshold of ~400 Euro per month).

Worth knowing:

  • for the self-employed the so-called cash-inflow principle (Geld-Zuflussprinzip) applies: during the period in which you receive the KBG, no money may arrive on your bank account (or it has to stay below the respective additional-earnings limit)
  • changes to the chosen model are only possible within 14 days of submission, but within that time the application often hasn’t even been processed yet (I was told to expect 6-8 weeks of processing time when submitting), so think hard about which model you want and double- and triple-check before sending it off
  • the KBG is only paid on the 10th of the following month (childcare starts around May 10th? Then the first money, covering the remaining ~20 days of May, arrives on your account on June 10th)
  • the child must be registered at the same main residence as the parents (with a secondary residence you quickly walk into a trap)
  • the income calculation is based on the year before the child’s birth. So if the child is born at the end of December 2013, the first parent takes the full 12 months in 2014 and the second parent takes the remaining two months at the beginning of 2015, then the calculation basis for the second parent is the year 2012(!)
  • if another child arrives while you are still within a childcare model, you don’t receive the KBG twice (but you get a 50% supplement)
  • the protection against dismissal for employees only lasts 2 years, which makes the longest model (30+6) no serious option in that respect
  • if both parents don’t work full-time, the (full) income-dependent model may no longer work out for both of them with a further child, but: you can still apply for it and will be downgraded to the flat-rate 12+2 model if 80% of your income falls below the threshold; the second partner can nevertheless (even though the other one was downgraded to the flat-rate model!) receive the income-dependent KBG
  • you only receive the full amount once the exact calculation basis has been determined; until then the respective minimum daily rate is paid

As a self-employed person, however, I unfortunately got the impression that the SVA in particular isn’t terribly interested in you taking childcare time:

  • the first time around, the money suddenly stopped arriving on the planned date after the first month. When I asked, it turned out that the calculation basis had become available in the meantime and the amount had been adjusted accordingly. The payments, however, had simply been suspended, without any notice to me… :(
  • the KBG application can also be submitted via FinanzOnline, but beware: the SVA cannot even be selected as social insurance carrier there, and the Karenz option has to be filled in (just enter yourself as your employer). I therefore chose the GKK as social insurance carrier (it is in fact the GKK that effectively pays out the KBG, even if you are insured with the SVA!). 3 weeks passed without any feedback, so I asked about the current status. On the phone the GKK told me that the application had been forwarded to the SVA and that I should ask there. Said and done, but the SVA couldn’t find anything about it, so I was asked to submit it again on paper, with 6-8 weeks of expected processing time (luckily it wasn’t quite that bad in the end)
  • while the paper form has the option “until the maximum receipt period”, the FinanzOnline variant requires the exact date, and calculating the exact start and end day of the childcare period seems to be a well-kept secret: both the SVA hotline and the counter staff on site gave me wrong dates :-/

I therefore recommend:

  • inform yourself, ask and verify: not all wording is unambiguous, and when in doubt you should rather ask specifically
  • buffer enough money on your account in advance so that you don’t get stressed without the KBG, should anything be delayed; this also goes hand in hand with:
  • explaining the situation to your customers well in advance, and collecting outstanding invoices in time or letting them wait until after the childcare period
  • (especially as a self-employed person) submit the KBG application on paper, keep a copy for yourself, and follow up reasonably soon after submission to make sure everything is in order

Beware: as of 2017-03-01 there are changes to the KBG; details mentioned here may have changed accordingly.

Disclaimer: IANAL; corrections, suggestions for improvement and additions are welcome.

systemd backport of v230 available for Debian/jessie

July 28th, 2016

At DebConf 16 I was working on a systemd backport for Debian/jessie. Results are officially available via the Debian archive now.

In Debian jessie we have systemd v215 (which upstream-wise originally dates back to 2014-07-03, plus changes + fixes from the pkg-systemd folks of course). Now via Debian backports you have the option to update systemd to a very recent version: v230. If you have jessie-backports enabled it’s just an `apt install systemd -t jessie-backports` away. For the changes between v215 and v230 see upstream’s NEWS file.
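For reference, if jessie-backports isn’t enabled yet, it’s a one-line apt source away (pick whatever Debian mirror you prefer; ftp.debian.org is just an example here), followed by an `apt-get update`:

```
# /etc/apt/sources.list.d/jessie-backports.list
deb http://ftp.debian.org/debian jessie-backports main
```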

(Actually the systemd backport has been available since 2016-07-19 for amd64, arm64 + armhf, though for mips, mipsel, powerpc, ppc64el + s390x we had to fight against GCC ICEs (internal compiler errors) when compiling on/for Debian/jessie, and for the i386 architecture the systemd test-suite identified broken O_TMPFILE permission handling.)

Thanks to Alexander Wirt from the backports team for accepting my backport, to intrigeri for the related apparmor backport, to Guus Sliepen for the related ifupdown backport and to Didier Raboud for the related usb-modeswitch/usb-modeswitch-data backports. Thanks to everyone testing my systemd backport and reporting feedback. Thanks a lot to Felipe Sateler and Martin Pitt for reviews, feedback and cooperation. And special thanks to Michael Biebl for all his feedback, reviews and help with the systemd backport from its very beginnings until the latest upload.

PS: I cannot stress enough how fantastic Debian’s pkg-systemd team is. Responsive, friendly, helpful, dedicated and skilled folks. Thanks!

DebConf16 in Cape Town/South Africa: Lessons learnt

July 19th, 2016

DebConf 16 in Cape Town/South Africa was fantastic for many reasons.

My Cape Town/South Africa/culture/flight related lessons:

  • Avoid flying on Sundays (especially in/from Austria, where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually recline your seat on the flight when trying to sleep, and don’t forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. “turn off the lights” signs at many places around the campus), several toilets flush all their water even for just small™ business, and two big lights in front of a main building seem to be shining all day long for no apparent reason
  • There doesn’t seem to be a standard for which side the hot vs. cold water tap is on
  • Soap bars and towels are provided in several restrooms
  • For pedestrians the green phase at traffic lights is very short (~2-3 seconds); after that, blinking red lights indicate that you may finish crossing the street (but *should* not start walking) until it’s fully red again (not that many people seem to care about the rules anyway :))
  • Warning lights of cars are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese honking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT, as well as in any other place I went to at UCT, worked excellently (I measured ~22-26 Mbit/s downstream in my room, around 26 Mbit/s in the hacklab) (kudos!)
  • WLAN is even available on top of Table Mountain (working fine and free of charge, without any registration)
  • The Number26 credit card is great for withdrawing money from ATMs without the extra fees common credit card companies charge (except for the fee the ATM itself charges, which it displays on-site ahead of time anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and money beaming via the Number26 mobile app

My technical lessons from DebConf16:

  • I ran into way too many yak-shaving situations, some of which might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch queue branches in git): it’s very nice to be able to work with plain git and then get patches for your changes; also, keeping upstream patches (like cherry-picks) inside debian/patches/ and the Debian-specific changes inside debian/patches/debian/ is a lovely idea. This can easily be achieved via “Gbp-Pq: Topic debian” with gbp’s pq and is used e.g. in pkg-systemd; thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc --download $URL_TO_DSC` more often
  • sources.debian.net features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery --running $interface` to get the live configuration of a network interface, JSON support (`ifquery --format=json …`) and Mako template support to generate configuration for plenty of interfaces

BTW, thanks to the video team the recordings from the sessions are available online.

My talk at OSDC 2016: Continuous Integration in Data Centers – Further 3 Years Later

May 26th, 2016

Open Source Data Center Conference (OSDC) was a pleasure and a great event; Netways clearly knows how to run a conference.

This year at OSDC 2016 I gave a talk titled “Continuous Integration in Data Centers – Further 3 Years Later”. The slides from this talk are available online (PDF, 6.2MB). Thanks to the Netways folks a recording is available as well:

This embedded video doesn’t work for you? Try heading over to YouTube.

Note: my talk was kind of an update and extension for the (german) talk I gave at OSDC 2013. If you’re interested, the slides (PDF, 4.3MB) and the recording (YouTube) from my talk in 2013 are available online as well.

Event: DebConf 16

April 18th, 2016


Yes, I’m going to DebConf 16! This year DebConf – the Debian Developer Conference – will take place in Cape Town, South Africa.

Outbound:

2016-06-26 15:40 VIE -> 17:10 LHR BA0703
2016-06-26 21:30 LHR -> 09:55 CPT BA0059

Inbound:

2016-07-09 19:30 CPT -> 06:15 LHR BA0058
2016-07-10 07:55 LHR -> 11:05 VIE BA0696

Event: OSDC 2016

March 31st, 2016


Open Source Data Center Conference (OSDC) is a conference on open source software in data centers and huge IT environments and will take place in Berlin/Germany in April 2016. I will give a talk titled “Continuous Integration in Data Centers – Further 3 Years Later” there.

I gave a talk titled “Continuous Integration in data centers” at OSDC in 2013, presenting ways to realize continuous integration/delivery with Jenkins and related tools. Three years later we have gained new tools in our continuous delivery pipeline, including Docker, Gerrit and Goss. Over the years we also had to deal with various problems caused by faster release cycles, a growing team and new projects. We therefore established code review in our pipeline, improved our test infrastructure and invested in our infrastructure automation. In this talk I will discuss the lessons we learned over the last years, demonstrate how a proper continuous delivery pipeline can improve your life, and show how open source tools like Jenkins, Docker and Gerrit can be leveraged to set up such an environment.

Hope to see you there!

Revisiting 2015

January 7th, 2016

Business:

  • Attended four conferences (FOSDEM, cfgmgmtcamp, Linuxdays Graz and Debconf)
  • Second year of business with SynPro Solutions, very happy with what we achieved
  • Very intense working year (Grml Solutions, SynPro Solutions, Grml-Forensic); I worked way more than anticipated and planned, but am very glad that I have such wonderful business partners and colleagues

Technology / Open Source:

Personal:

  • Another fork, welcome Johanna – she’s such a sunshine and I’m so incredibly proud of my two daughters
  • Played less badminton (skipped full winter term, meh) and table tennis than I would have liked to
  • Managed to get back to some kind of reasonable level on playing the drums (playing on my Roland TD-30KV drum-kit)

Conclusion: 2015 was a great though intense year for me, both business- and personal-wise. Looking forward to 2016.

mur.strom: Podcast über Debian

January 7th, 2016

mur.strom is a podcast from Graz about technology and society. On January 6th a new episode about the Debian project was published. Sebastian Ramacher and I were the interview guests; enjoy the ~1h 43min of listening: http://murstrom.at/msp016-debian/

DebConf15: “Continuous Delivery of Debian packages” talk

August 24th, 2015

At the Debian Conference 2015 I gave a talk about Continuous Delivery of Debian packages. My slides are available online (PDF, 753KB). Thanks to the fantastic video team there’s also a recording of the talk available: WebM (471MB) and on YouTube.

HAProxy with Debian/squeeze clients causing random “Hash Sum mismatch”

July 2nd, 2015

Update on 2015-07-02 22:15 UTC: as Petter Reinholdtsen noted in the comments:

“Try adding /etc/apt/apt.conf.d/90squid with content like this:

Acquire::http::Pipeline-Depth 0;

It turns off the feature in apt that confuses proxies.”

This indeed avoids those “Hash Sum mismatch” failures with HAProxy as well. Thanks, Petter!

Many of you might know apt’s “Hash Sum mismatch” issue and there are plenty of bug reports about it (like #517874, #624122, #743298 + #762079).

Recently I have seen the “Hash Sum mismatch” usually only when using “random” mirrors via e.g. httpredir.debian.org in apt’s sources.list; with a static mirror such issues usually don’t show up anymore. A customer of mine runs a Debian mirror and this issue wasn’t a problem there either, until recently:

Since the mirror also includes packages provided to customers and needs to be available 24/7, we decided to provide another instance of the mirror and put those systems behind HAProxy (version 1.5.8-3 as present in Debian/jessie). The HAProxy setup worked fine and we didn’t notice any issues in our tests, until the daily Q/A builds randomly started to report failures:

Failed to fetch http://example.org/foobar_amd64.deb Hash Sum mismatch

When repeating the download there was no problem, though. The problem only appeared about once every 15-20 minutes, with random package files, and it affected only Debian/squeeze clients (wheezy and jessie aren’t affected at all). It also didn’t appear when directly accessing the mirrors behind HAProxy. We tried plenty of different options for apt (Acquire::http::No-Cache=true, Acquire::http::No-Partial=true, …) and also played with some HAProxy configurations; nothing really helped. With apt’s “Debug::Acquire::http=True” we saw that there really was a checksum failure, and HTTP status code 102 (‘Processing’, or in apt’s terms: ‘Waiting for headers’) seems to be involved. The actual problem between apt on Debian/squeeze and HAProxy is still unknown to us, though.

While digging deeper into this issue is still on my todo list, I found a way to avoid those “Hash Sum mismatch” failures: switch from http to https in sources.list. As soon as https is used, the problem no longer appears. I’m documenting it here just in case anyone else runs into it.

fork(), once again

May 15th, 2015

On the 10th of May 2015 my lovely wife once again gave me a lovely present: welcome to our family, Johanna.