
systemd delayed service restart

For a few months I wrestled with the fact that the Telegraf/Prometheus/Grafana conglomerate is plagued by fairly strong dependencies on the startup order of the systems involved. All my servers automatically reboot once a week within a certain time window, and since Grafana entered the mix, I had to intervene manually every 2-3 weeks and restart services, because Telegraf in particular assumes that the interfaces it accesses must be reachable at its own startup. (See also: Fallacies of distributed computing.)

So the question was: what does the simplest solution look like for automatically restarting a given service once more, 60 minutes after its start (I can live with sporadic gaps in the data early on a Sunday morning)?

Distributing timers and dedicated service units seemed like too much effort, and I brooded for an uncomfortably long time over what the simplest solution might look like that could be added to the unit in question simply via systemctl edit foo.service (or via a systemd drop-in distributed by other means, ymmv).

I finally arrived at this tapeworm of a line:

# /etc/systemd/system/telegraf.service.d/override.conf
[Service]
ExecStartPost=-+sh -c 'systemctl --quiet is-active %N-restart || systemd-run --unit=%N-restart --on-active=1h systemctl restart %N'

From right to left:

  • systemd-run instantiates a transient timer and service that will run systemctl restart in one hour. %N is replaced with the name of the parent unit (here: telegraf).
  • The name of the transient timer and service is set via the --unit option so that it is predictable in the previous step (one step further to the left).
  • systemd-run only runs if there is not already an active unit with the name that the transient unit would be given if it were yet to be run.
  • The whole thing is wrapped in a shell invocation in order to have the || operator.
  • ExecStartPost=+ runs the command as root, even though the parent unit runs its commands unprivileged.
  • ExecStartPost=- ignores the error from the shell one-liner when the unit was already active. Alternatively, ; true could be appended to the one-liner.
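For comparison, the dedicated units this drop-in avoids having to distribute would look roughly like the following sketch (unit names are illustrative). Note that something would still have to start the timer whenever telegraf starts, which is exactly the bookkeeping systemd-run does on the fly:

```ini
# /etc/systemd/system/telegraf-restart.timer (sketch)
[Unit]
Description=Delayed one-time restart of telegraf

[Timer]
OnActiveSec=1h

# /etc/systemd/system/telegraf-restart.service (sketch)
[Unit]
Description=Delayed one-time restart of telegraf

[Service]
Type=oneshot
ExecStart=systemctl restart telegraf.service
```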

EDC Toolkit Review 2024

Last year I wrote about my small EDC toolkit that I can grab on a hunch when leaving the house, so I can take care of many little things around the house and yard right on the spot. A second, almost identical kit with minor changes now accompanies me on all my travels.

A little retrospective, two seasons later.


View into the mesh bag with the zipper open, in which pliers, an Edding marker and other items can be seen.

The second kit, which travels in the car, is kept there in a DIN A6 mesh bag.

The original kit still lives in its tactical nylon belt pouch.

Packed olive-colored belt pouch.
Olive-colored belt pouch with the kit's contents spread out next to it.

It’s bigger on the inside.


Dept. Measuring

The 100 cm folding rule still does the job, until a very compact but serious tape measure for under 10 euros falls into my hands.

Folding rule with advertising from a DIY store chain.
Edding marker with the cap removed.

Pencil and spudger were just waiting in the pouch, far too pointy, to bore themselves under the nails of rummaging fingers. The pencil was replaced by a thin Edding 400, the spudger by a self-designed spudger bit.

The screw gauge is no longer in the kit and is better kept stationary around the workbench and craft table.

3D-printed screw gauge with a screw for illustration.

Dept. Screwdriving

The ratchets from KWB and Wera in comparison.

The mini ratchet with 1/4-inch hex drive and an adapter to 1/4-inch square drive. In kit no. 2 it is a KWB variant, a price-performance winner that truly towers over the Wera ratchet and that I didn't yet know about in 2023.

The Wiha stubby bit holder is a touch nicer in the variant without double-ended bits (right), since the bits are held magnetically. The ratchet then no longer needs to be designed for pushing the double-ended bits through.

The two stubby variants in comparison.
Bit magazine and bit extension.

The bit extension makes it possible to use standard bits in the double-bit stubby. Here is the bit magazine for printing yourself.

The hex key with a printed cross handle is out for now, after I found inexpensive 1/4-inch bits in 2 mm and 2.5 mm hex.

Wiha stubby (double-ended) with the bit extension attached, so that one of the thin hex bits can be mounted at the front.

The most inconspicuous bonus content in the kit is a PH0 bit with 1/4-inch drive, which looks a bit ungainly in the stubby but, when needed, is simply much better than no PH0 bit.

I don't intend to retire the double-bit stubby, but have meanwhile come to the opinion that the double-ended bits save too little space to answer the questions they raise. The bit extension needed to use normal bits without excessive fiddling cancels out the weight advantage of the double-ended bits. It just isn't worth it.


Dept. Hex

Adjustable wrench.

The 100 mm adjustable wrench can grip screws up to 15 mm across flats. My sweetheart's wheelchair is not covered by it; its thick bolts in particular call for a torque wrench with a fairly high tightening torque anyway.

The adjustable wrench can be upgraded to a 150 mm one without much drama; however, I continue to advise against the self-adjusting "Wera Joker 6004", which lacks versatility.

The 1/4-inch square adapter for the ratchet was too good not to put to use. Both kits contain a small self-designed box with 8, 10 and 13 mm sockets. For the sockets from the KWB ratchet box with their multi-point profile, the model was scaled up by 15%.

Small magazine with 3 sockets.

TL;DR: The KWB "Bits and Sockets" kit including the ratchet, together with the Wiha stubby with Torx assortment, comes to 25 euros and roughly matches the functionality of a Wera Toolcheck, with a less slippery ratchet and a grippier screwdriver, while the Toolcheck costs at least twice as much. But beware, only the Toolcheck contains 7 mm and 5.5 mm sockets for M4 and M3 screws.


McGyver Memorial Dept.

Combination pliers with protective cap.

120 mm combination pliers – here, the cheap DIY-store pliers really do the job.

If you prefer something fancier, go for the 110 mm combination pliers from Knipex. Far more versatile, though, are the extremely good 145 mm Knipex needle-nose combination pliers, which I keep on the craft desk rather than in the EDC.

Both kits contain a roll of Tesa "Extra Power Perfect" cloth tape, which together with the Edding can also be used for labeling. (The canister is available for download here.)

A roll of adhesive tape with canister.

The concept of "gaffer tape rewound onto cardboard" is well known, but I reject it for various reasons.

A piece of paracord.

Ever since I had to twist half a roll of tape into a makeshift rope in the field to tie something down, I carry a piece of Paracord 550.

The self-printed utility knife was retired recently, after I wandered into a one-euro store.

Typical slim snap-off knife with one-handed operation.
LED light with self-printed stand.

I only have one of the influencer work lights from Brennenstuhl, and it has served extremely well both in the dark halls of the Chaos Communication Congress and in a vacation rental without a bedside lamp(?!?). To match, the stand for printing yourself.

A few milliliters of pure isopropanol are more exciting than rancid sanitizer from pandemic stockpiles, and are just as good for removing adhesive and marker residue.

Tiny glass bottle labeled "Isopropanol".

Dept. Things to come

What's missing, but doesn't fit the concept at all, is a hammer. If the urge to hit things unprepared grows too strong, a heavy pair of pliers in the style of American lineman's pliers might some day step into the breach.

Sadly, and it really is a big sadly, a simple caliper is still missing. I would have liked to have one here and there, but then also managed with rough estimates. Maybe I'll get lucky and this cliffhanger can be resolved by the next review.

Deltachat – First dos, first don’ts

Deltachat is a decentralized instant messenger (vulgo, a WhatsApp alternative) that uses SMTP and IMAP as its transport medium. I had already tried it several years ago but dismissed it as too experimental back then.

At the 37th Chaos Communication Congress I personally got in touch with the Deltachat people and their wholesome anarchist attitude, marveled at their insanely flawless live onboarding process during their talk in front of about 100 listeners, and have been closely following the project since.

Here’s a few initial learnings from half a year of using the messenger.


First things first, E-Mail and SMTP are NOT cursed

E-mail is one of the oldest distributed systems, with redundancy and queueing built in since forever, and will always have to deal with reachability issues. If you can't reach someone, if someone can't reach you, or if either of you receives error messages, get in touch with your postmistresses and -masters. Everyone who is more than 50% serious about running a mail server will be interested in resolving any issues for their users. I certainly am.

If you operate the mail server yourself, get educated on best practices starting from DNS, PTR, EHLO, STARTTLS and SPF up to DKIM. Should your opinions on how to run things inexplicably differ from best practice, acquaint yourself with the idea of being wrong just this one time.


Attachments

Since the early 2000s, the commonly accepted limits for message size have hardly gone up at all. We used to tell our users "e-mail is not a file transfer protocol", but at this point, the small message size limits have honestly become a bit awful. Google Mail accepts a whopping 150 MB, T-Online 50 MB, and even my own mail exchangers are stuck in the past at only 32 MB.

Some messengers just accept any mindless 4K video upload, this one doesn’t. Attachments on Deltachat are encrypted end-to-end, no service in the middle is going to store them for you, and the messages themselves will be even bigger after encryption and transport-encoding.
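The transport-encoding overhead alone is easy to quantify: base64, the usual MIME encoding for binary attachments, turns every 3 input bytes into 4 output characters, plus line breaks. A quick check with a megabyte of dummy data:

```shell
# Encode 1 MB of zeroes and count the encoded size; the result is
# roughly 1.35 MB with GNU base64's default line wrapping.
head -c 1000000 /dev/zero | base64 | wc -c
```

So a 32 MB message size limit leaves room for attachments of roughly 23 MB, before any encryption overhead.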

When in doubt, upload somewhere and share the link.

If you operate the server yourself, feel free to raise the limit for your own users as far as you like but be aware that they may always encounter issues when interacting with users on other servers and domains.


Groups

Groups in Deltachat are a very unusual kind of beast, unlike any other group implementation I've seen before.

They are fully decentralized and really just resemble a group of users who automatically send e-mails to all the others, with control messages that automatically notify everyone when someone joins or leaves the group. There is no administration at all; anyone can add or remove anyone, for everyone.

Currently, Deltachat groups are perfect for fully cooperative groups (friends and family), where this administration style even has its benefits because there’s no need to ask person X to do task Y. In hostile environments where the group may be subverted by trolls, they are not the ideal solution.


Don’t change mail accounts (My worst mistake.)

If the random Chatmail address you started out with doesn’t satisfy your vanity anymore, don’t change the mail account in your profile, but start over with a new profile. Direct contacts will need to be informed about your change in address anyway, and you can invite your new profile to groups using your old profile.

If you change mail addresses nevertheless, your PGP key will always carry the mail address you initially created it on, breaking Autocrypt key exchange. Also, your old mail address will forever linger as a “secondary” address in the profile and all sorts of hard-to-understand confusion will set in if it starts showing up from elsewhere in the future.

Don’t change mail accounts. Really.


Add a secondary device

Add a secondary device, e.g. your desktop system. It's the most casual way of having a backup of your Deltachat profile.


Get your friends on board

It’s just 2½ steps:

  • Install the app.
  • Create a chatmail account (happens semi-automatically in current versions of the app).
  • Have them scan your QR code.

Try it right here.


Post header image: “Sundarbans web” by European Space Agency, CC-BY-NC-SA

Deltachat Push on any IMAP server

Preface

Deltachat is a decentralized instant messenger (vulgo, a WhatsApp alternative) that uses SMTP and IMAP as its transport medium. I had already tried it several years ago but dismissed it as too experimental back then.

At the 37th Chaos Communication Congress I personally got in touch with the Deltachat people and their wholesome anarchist attitude, marveled at their insanely flawless live onboarding process during their talk in front of about 100 listeners, and have been closely following the project since.

In late 2023/early 2024, the project introduced the Chatmail concept that enables anyone to host Postfix-Dovecot based Deltachat IMAP toasters for easy onboarding with real-time push notifications.

(Nota bene, I had already registered a domain but then decided against running a Chatmail instance because I doubt I have enough spare spoons to run an anonymous operation and/or deal with potential requests from authorities.)

Architecture

The moment I heard of push for Chatmail, I knew I wanted to figure out how to make it work on my own domain. Since no specification for Chatmail push seemed to be published, I figured out things from the following files in the Chatmail repository:

  • default.sieve – Not as helpful as expected, but providing an important-ish detail for the later implementation.
  • push_notification.lua – Hints at the existence of a metadata service, which I had never heard of before.
  • dovecot.conf.j2 – First appearance of the metadata service in Dovecot config. TIL. Enabled METADATA and XDELTAPUSH on my own Dovecot and figured out that the client stores a notify-token in IMAP metadata.
  • notifier.py – Simply posts the notify-token to the Delta notification service.

So essentially, the push system works like this:

  1. Client connects to IMAP and saves its notify-token to the metadata service.
  2. Mail arrives.
  3. push_notification.lua, loaded as the push driver into Dovecot, talks back to the metadata server, which in turn uses notifier.py for sending.
  4. Client wakes up and connects to IMAP.
  5. Push message appears on screen only if there are unseen messages on the IMAP server. Meaning, if another client is connected as well (and, I believe, running visibly in the foreground), the mail on the server will already be marked seen and no message is displayed.

My alternative implementation needs to replace steps 1-3. Let’s do it.

Independent Push implementation

Executing things from Sieve (figuring out the push token from IMAP metadata, calling a URL) requires a lot of configuration, so I quickly came back to running an additional IMAP IDLE session from somewhere else, as IdlePush did back in 2009. Instead of adapting the old Perl code or rewriting the IMAP IDLE dance from scratch, I looked for pre-existing building blocks to put together:

  • getmail6 (prepackaged on Debian) is a powerful alternative to fetchmail that can IDLE on an IMAP server and retrieve new messages while leaving them unseen (the way IdlePush did with Mail::IMAPClient‘s $imap->Peek(1);). The retrieved messages can be fed into what I call a custom mail delivery agent.
  • A small Python script serves as the custom MDA.
  • systemd, the old horse, can reliably run getmail as a service.

Files

getmail configuration

getmail demands that its configuration files be saved in ~/.config/getmail, so this is where this configuration goes.

The notify token can be found at the top of the client’s logfile.

# ~/.config/getmail/deltachat-example.com
[retriever]
type=SimpleIMAPSSLRetriever
server=imap.example.com
username=example@example.com
password=xxx
mailboxes=("INBOX",)

[destination]
type=MDA_external
path=~/bin/deltachat-push.py
arguments=("notify-token",)
ignore_stderr=true

[options]
verbose=1
read_all=false
delete=false

Custom mail delivery agent

The custom MDA is already referenced in the config above. Ignoring messages with the Auto-Submitted header was adapted from default.sieve in Chatmail.

The script talks to the notification server at most every 10 seconds per notify-token.

#!/usr/bin/env python3
import sys, requests, re, time
from select import select
from email.feedparser import BytesFeedParser
from pathlib import Path

notify_url='https://notifications.delta.chat/notify'

# See if a message is waiting on stdin - https://stackoverflow.com/a/3763257
if select([sys.stdin, ], [], [], 0.0)[0]:
    try:
        mailparser = BytesFeedParser()
        mailparser.feed(sys.stdin.buffer.read())
        message = mailparser.close()
    except Exception as e:
        print(f"While reading message: {e}", file=sys.stderr)
        sys.exit(255)

    # Skip if message contains header:
    if message.get('Auto-Submitted') and re.match('auto-(replied|generated)', message.get('Auto-Submitted')):
        print('Skipping: Auto-Submitted', file=sys.stderr)
        sys.exit(0)

# Issue notifications to the specified notify-token(s)
for token in sys.argv[1:]:
    print(f"Token: {token}", file=sys.stderr)

    ratelimit_file = f"{Path.home()}/.cache/deltachat-ratelimit-{token}"
    now = int(time.time())
    try:
        ratelimit_time = int(Path(ratelimit_file).stat().st_mtime)
    except OSError:
        # Stat failed: no rate-limit file yet, treat as never notified
        ratelimit_time = 0

    if now - ratelimit_time < 10:
        print('ratelimited', file=sys.stderr)
        continue

    try:
        r = requests.post(notify_url, token)
    except Exception as e:
        print(f"While talking to {notify_url}: {e}", file=sys.stderr)
        continue

    print(f"Notification request submitted, HTTP {r.status_code} {r.reason}", file=sys.stderr)
    Path(ratelimit_file).touch()

sys.exit(0)

systemd user unit

I use an instantiated unit that specifies the getmail configuration file and the mailbox (INBOX) on which to IDLE.

# ~/.config/systemd/user/deltachat-push@.service
[Unit]
Description=Push emitter for delta chat

[Service]
ExecStart=getmail -r %i --idle INBOX
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

Instantiation

systemctl --user enable --now deltachat-push@deltachat-example.com.service

Compatibility tested

  • Vanilla Dovecot
  • multipop.t-online.de (yes)
  • imap.gmail.com (indeed)

Troubleshooting

journalctl --user-unit deltachat-push@\*.service -f
~/bin/deltachat-push.py <notify-token>

Limitations

  • First execution of getmail downloads all mail, so I had to run each newly configured instance once without a push token, by commenting out the arguments option.
  • Might be considered abuse of the Chatmail infrastructure? I definitely use it sensibly, on an IMAP mailbox that is dedicated to Deltachat only. Someone is paying actual time and effort for running that notification relay.
  • Impact of plans for encrypting the notify-token unclear.

This is not a nice post. This is a post about Gnome/GDM customization.

I recently got the opportunity to work on adapting the default Debian Gnome experience to a client’s corporate design, and it felt a LOT like reverse-engineering deeply into the undocumented.

I found the work to fall into a number of categories, which I will classify as “dconf policy”, “css”, “xml-manifest” and “packaging”.

GDM logo: dconf policy, packaging
GDM banner message: dconf policy
GDM background color: css, xml-manifest
GDM wallpaper: css, xml-manifest, packaging
Gnome default wallpaper: dconf policy, packaging
Gnome default theme: dconf policy
Gnome shell plugins: dconf policy, packaging
Gnome UI and plugin defaults: dconf policy
Gnome wallpapers: xml-manifest

Note that I'm not familiar with any underlying Gnome/GTK philosophy aspects but come from a Linux engineering role and just need to get the job done.

Packaging

The "packaging" class really just means that required assets need to be packaged onto the system and that any shell plugins that should be enabled by default must be installed.

GDM

The GDM-Settings workflow

For GDM customization, GDM-Settings proved immensely helpful for identifying where to make changes.

# Install via flatpak
sudo apt-get install flatpak gnome-software-plugin-flatpak
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
flatpak -y install io.github.realmazharhussain.GdmSettings

# Keep track of where we started off
touch /tmp/now

# Run gdm-settings
flatpak run io.github.realmazharhussain.GdmSettings

# See what changed
find / -type f -newer /tmp/now 2>/dev/null | egrep -v '^/(dev|run|proc|sys|home|var|tmp)'

For this post, I will stick with the default dconf policy filename used by GDM-Settings.

Logo and banner

# /etc/dconf/db/gdm.d/95-gdm-settings
[org/gnome/login-screen]
logo='/usr/share/icons/hicolor/48x48/apps/gvim.png'
banner-message-enable=true
banner-message-text='Welcome to VIMnux'

dconf needs an accompanying profile definition, /etc/dconf/profile/gdm:

user-db:user
system-db:gdm

dconf update needs to be run after modifying these files.

Background color and wallpaper

GDM background settings are hidden deep in the global gnome-shell theme CSS, which itself is hidden in /usr/share/gnome-shell/gnome-shell-theme.gresource.

GDM-Settings completely hides the tedious process of drilling down to the CSS from the user, which is great from a user perspective, but not what I needed for my customizations. I went with the following workflow for unpacking the files: gresource list lists the file names contained in the gresource file, and gresource extract extracts them one by one.

# Unpack /usr/share/gnome-shell/gnome-shell-theme.gresource
# to a temporary directory:
T=$(mktemp -d /tmp/gres.XXX); printf "Resources tempdir: %s\n" $T
cd $T
while read R
do
  gresource extract /usr/share/gnome-shell/gnome-shell-theme.gresource $R > $(basename $R)
done < <(gresource list /usr/share/gnome-shell/gnome-shell-theme.gresource)

At this point, the only file I’m interested in is gnome-shell.css, where I set a black background for my application.

.login-dialog { background: transparent; }
#lockDialogGroup { background-color: rgb(0,0,0); }

Similar CSS for a wallpaper:

.login-dialog { background: transparent; }
#lockDialogGroup {
  background-image: url('file:///usr/share/backgrounds/gnome/wood-d.webp');
  background-position: center;
  background-size: cover;
}

Reassembly of the gresource file requires an XML manifest which I generate using the following script, manifest.py:

#!/usr/bin/env python3

import os, sys, glob
import xml.etree.ElementTree as ET
from io import BytesIO

os.chdir(sys.argv[1])
gresources = ET.Element('gresources')
gresource = ET.SubElement(gresources, 'gresource', attrib = {'prefix': '/org/gnome/shell/theme'})
for resourcefile in glob.glob('*'):
    file = ET.SubElement(gresource, 'file')
    file.text = resourcefile

out = BytesIO()
xmldoc = ET.ElementTree(gresources)
ET.indent(xmldoc)
xmldoc.write(out, encoding='utf-8', xml_declaration=True)
print(out.getvalue().decode())

First generate the XML manifest, then compile the gresources file.

# Generate XML
./manifest.py $T > gnome-shell-theme.gresource.xml

# Compile gresources (glib-compile-resources from libglib2.0-dev-bin)
glib-compile-resources gnome-shell-theme.gresource.xml --sourcedir=$T --target=gnome-shell-theme.gresource

Someone decided to indirect /usr/share/gnome-shell/gnome-shell-theme.gresource through /etc/alternatives; do with that whatever you like.

Note that on the systems I tested this on, gdm.css and gdm3.css could be left out of the gresource file and all changes were made in gnome-shell.css.

Gnome Wallpapers

Speaking of XML, wallpapers can be installed to someplace intuitive such as /usr/share/backgrounds/corporate, but must be accompanied by another XML manifest in /usr/share/gnome-background-properties, which I generate using another XML generator, properties-xml.py:

#!/usr/bin/env python3

import os, sys, glob
import xml.etree.ElementTree as ET
from io import BytesIO

os.chdir(sys.argv[1])
dirname='/usr/share/backgrounds/corporate'
wallpapers = ET.Element('wallpapers')
for wallpaper in glob.glob('*'):
    wallpaper_element = ET.SubElement(wallpapers, 'wallpaper', attrib = {'deleted': 'false'})
    filename = ET.SubElement(wallpaper_element, 'filename')
    filename.text = f"{dirname}/{wallpaper}"
    name = ET.SubElement(wallpaper_element, 'name')
    name.text = wallpaper
    options = ET.SubElement(wallpaper_element, 'options')
    options.text = 'zoom'
    pcolor = ET.SubElement(wallpaper_element, 'pcolor')
    pcolor.text = '#000000'
    scolor = ET.SubElement(wallpaper_element, 'scolor')
    scolor.text = '#ffffff'

out = BytesIO()
xmldoc = ET.ElementTree(wallpapers)
ET.indent(xmldoc)
xmldoc.write(out)
print('<?xml version="1.0" encoding="UTF-8"?>')
print('<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">')
print(out.getvalue().decode())

Which I run as follows:

./properties-xml.py backgrounds > corporate.xml

/usr/share/backgrounds/corporate/* and /usr/share/gnome-background-properties/corporate.xml then get packaged onto the system.

Gnome Extensions and Defaults

At this point, a dconf user profile needs to be introduced:

$ cat /etc/dconf/profile/user 
user-db:user
system-db:local

(Things get easier from here.)

Default-enabled extensions

I chose to enable the extensions and set related defaults in /etc/dconf/db/local.d/99-extensions:

[org/gnome/shell]
enabled-extensions=[ 'ding@rastersoft.com', 'no-overview@fthx', 'dash-to-dock@micxgx.gmail.com', 'ubuntu-appindicators@ubuntu.com', 'TopIcons@phocean.net' ]

[org/gnome/shell/extensions/dash-to-dock]
dock-fixed=true
background-opacity=0.2
transparency-mode='FIXED'

dconf update needs to be run after modifying this file.

Other Gnome defaults

dconf watch /

dconf watch / in a terminal makes it possible to take note of what configuration options change as changes are being made. They can now be made the defaults in a policy file such as /etc/dconf/db/local.d/99-misc-defaults:

[org/gnome/desktop/wm/preferences]
button-layout='appmenu:minimize,maximize,close'

[org/gnome/terminal/legacy]
theme-variant='dark'

[org/gnome/desktop/interface]
color-scheme='prefer-dark'
gtk-theme='Adwaita-dark'

dconf update needs to be run after modifying this file.

tl;dr: Example customization package

A Debian package that provides live examples can be found here: https://github.com/mschmitt/gnome-local-custom

Too good to #0012

Today: Mounting tar archives, a novel take on uptime, ipxe escapes.


ratarmount: Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives

$ virtualenv ~/.local/ratarmount
$ ~/.local/ratarmount/bin/pip3 install -U ratarmount
$ ~/.local/ratarmount/bin/ratarmount
$ install -D ~/.local/ratarmount/bin/ratarmount ~/bin/ratarmount

$ mkdir -p ~/mnt
$ curl -O https://data.iana.org/time-zones/tzdata-latest.tar.gz
$ ratarmount tzdata-latest.tar.gz ~/mnt
$ find ~/mnt
$ fusermount -u ~/mnt

tuptime: Report historical and statistical real time of the system, keeping it between restarts. Total uptime.

tuptime calculates the overall uptime of the system it's running on. It also flags a shutdown as "BAD" if the system comes back up without having been gracefully stopped before.

As I grew up in an age where uptime braggery was common even among professionals, my entirely unreasonable use case here is to determine uptime since the previous unclean shutdown:

function tuptime-graceful () {
        # Walk the tuptime table from the most recent entry backwards,
        # find the first startup flagged BAD, and report uptime since
        # the row after it.
        local tuptime_since=1
        local temp_array
        while read -r line
        do
                if [[ "${line}" =~ ' BAD ' ]]
                then
                        read -r -a temp_array <<< "${line}"
                        tuptime_since=$(( temp_array[0] + 1 ))
                        break
                fi
        done < <(tuptime --table --order e --reverse)
        tuptime --since "${tuptime_since}"
}

Ampersand in ipxe script

The example is from a Debian preseed environment where preseed/late_command downloads and executes a shell script:

set amp &
set latecmd in-target wget ${script_url} ${amp}${amp} in-target bash script.sh
kernel [...] preseed/late_command="${latecmd}"
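With amp holding a literal ampersand, ${amp}${amp} reconstructs the shell's && operator at iPXE expansion time. Purely for illustration, the kernel line from above ends up carrying (with ${script_url} still a placeholder):

```
preseed/late_command="in-target wget ${script_url} && in-target bash script.sh"
```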

The complete list of all Amazon alternatives

Thank you for your attention. What follows is an incomplete list of a few Amazon alternatives.

I am not saying that I don't use Amazon or that you shouldn't use Amazon. I am merely saying that I have already used these shops as alternatives. (Updates to follow.)

Hardware

Electrical and electronics supplies

Tools

Consumables and craft supplies

Office supplies

Drugstore / personal care

Treats


From the feedback

(Untested by me, but from trustworthy sources.)

Failsafe curl

Nothing serious, just a few notes I like to share with friends and colleagues who, like me, script around curl.

curl -f / --fail

I try to use --fail whenever I can, because why would I want to exit zero on server errors?

$ curl -L https://download.grml.org/grml64-small_2024.02.iso.NO
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL was not found on this server.</p>
<hr>
<address>Apache/2.4.41 (Ubuntu) Server at ftp.fau.de Port 443</address>
</body></html>
$ echo $?
0
$ curl -f -L https://download.grml.org/grml64-small_2024.02.iso.NO
curl: (22) The requested URL returned error: 404
$ echo $?
22

curl --fail-with-body

I have a CI/CD situation where curl calls a webhook and it’s incredibly useful to see its error message in case of failure.

$ curl --fail https://binblog.de/xmlrpc.php
curl: (22) The requested URL returned error: 405
$ curl --fail-with-body https://binblog.de/xmlrpc.php
curl: (22) The requested URL returned error: 405
XML-RPC server accepts POST requests only.

set -o pipefail

When curl‘s output gets piped to any other command, I try to remember to set -o pipefail along with curl --fail so if curl fails, the pipe exits non-zero.

#!/usr/bin/env bash

url='https://download.grml.org/grml64-small_2024.02.iso.NONO'

if curl -s -f -L "${url}" | sha256sum
then
        echo "Success."
else
        echo "Failure."
fi

set -o pipefail

if curl -s -f -L "${url}" | sha256sum
then
        echo "Success."
else
        echo "Failure."
fi

curl --connect-timeout

Useful for getting a quicker response in scripts instead of waiting for the system's default timeouts.
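For illustration, 192.0.2.1 is a reserved TEST-NET address that should never answer, so the call below returns after the 3 seconds specified here instead of waiting for the system's TCP timeout:

```shell
# Abort the connection attempt after 3 seconds; curl exits non-zero
# (28 when the connect timeout fires, 7 when the network rejects the
# connection outright).
curl --connect-timeout 3 -s -o /dev/null "http://192.0.2.1/"
echo "curl exit status: $?"
```

In scripts, --max-time additionally caps the duration of the whole transfer, not just the connect phase.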

curl -w / --write-out

This may be over the top most of the time, but I have one situation that requires extremely detailed error handling. (The reason being a bit of a foul split DNS situation in the environment, long story.) This is where I use --write-out to analyze the server response.

curl_http_status="$(curl -o "${tmpfile}" --write-out '%{http_code}\n' "${url}")"
curl_exit_status=$?

Update: curl versions from 8.3.0 onward can write --write-out output to a file via %output{}.

curl -o "${tmpfile}" --write-out '%output{http_status.txt}%{http_code}' "${url}"
curl_exit_status=$?
curl_http_status="$(< http_status.txt)"

curl -n / --netrc / [ --netrc-file ]

Username:password authentication is a thing, no matter how much it’s discouraged. Here’s how to at least hide username and password from the process list.

$ chmod 600 ~/.netrc
$ cat ~/.netrc
machine binblog.de
login foo
password bar
$ curl -v -o /dev/null -n https://binblog.de
...
* Server auth using Basic with user 'foo'
...

To use any other file instead of ~/.netrc, use --netrc-file instead.

Too good to #0011

Mozilla Firefox APT special

The Mozilla Firefox APT repository is incompatible with the legacy apt-mirror. Here's how I install apt-mirror2 in a dedicated Python virtualenv:

# apt-get install virtualenv
# virtualenv --no-setuptools /usr/local/apt-mirror2/
# /usr/local/apt-mirror2/bin/pip3 install -U apt-mirror
# /usr/local/apt-mirror2/bin/apt-mirror --help
  • Repeat as-is to update.
  • Here's the bug that necessitates the --no-setuptools option: "ModuleNotFoundError: No module named 'debian'"

mirror.list entry for the Mozilla Firefox APT repository:

deb-all [signed-by=/path/to/packages-mozilla-org.gpg] https://packages.mozilla.org/apt mozilla main
deb-amd64 [signed-by=/path/to/packages-mozilla-org.gpg] https://packages.mozilla.org/apt mozilla main

How to convert Mozilla’s sloppy ASCII-armored PGP key:

$ curl -s -O https://packages.mozilla.org/apt/repo-signing-key.gpg
$ file repo-signing-key.gpg
repo-signing-key.gpg: PGP public key block Secret-Key
$ mv repo-signing-key.gpg repo-signing-key
$ gpg --dearmor repo-signing-key
$ file repo-signing-key.gpg
repo-signing-key.gpg: OpenPGP Public Key Version 4, Created Tue May 4 21:08:03 2021, RSA (Encrypt or Sign, 2048 bits); User ID; Signature; OpenPGP Certificate

Too good to #0010

In today’s installment:

  • “Headless” Nextcloud
  • Monitoring of fork activity

Mount Nextcloud files via rclone+Webdav, as a systemd user unit

# ~/.config/systemd/user/nextcloud-mount.service
[Unit]
Description=Mount Nextcloud via rclone-webdav

[Service]
ExecStartPre=mkdir -p %h/Nextcloud
ExecStart=rclone mount --vfs-cache-mode full --verbose nextcloud_webdav: %h/Nextcloud/
ExecStop=fusermount -u %h/Nextcloud/
ExecStopPost=rmdir %h/Nextcloud

[Install]
WantedBy=default.target

Sync instead of mount

Nextcloud via Webdav is absurdly slow, so maybe use nextcloudcmd instead, which unfortunately does not have its own daemonization:

# ~/.netrc (chmod 600)
machine my.nextcloud.example.com
login myuser
password *** (app password)
# ~/.config/systemd/user/nextcloudcmd.service
[Unit]
Description=nextcloudcmd (service)

[Service]
ExecStart=nextcloudcmd -n --silent %h/Nextcloud https://my.nextcloud.example.com
# ~/.config/systemd/user/nextcloudcmd.timer
[Unit]
Description=nextcloudcmd (timer)

[Timer]
OnStartupSec=60
OnUnitInactiveSec=300

[Install]
WantedBy=default.target

forkstat (8) – a tool to show process fork/exec/exit activity

High load without a single obvious CPU consuming process (not related to the Nextcloud shenanigans above) led me to forkstat(8):

Forkstat is a program that logs process fork(), exec(), exit(), coredump and process name change activity. It is useful for monitoring system behaviour and to track down rogue processes that are spawning off processes and potentially abusing the system.

$ sudo forkstat # (that's all)