Recent changes to this wiki. Not to be confused with my history.

Complete source to the wiki is available on gitweb or by cloning this site.

minor edits
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 244d1993..2201eeb6 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -42,11 +42,13 @@ This is going to partition `/dev/sdc` with:
         sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
         sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
 
+That will look something like this:
+
         root@curie:/home/anarcat# sgdisk -p /dev/sdc
         Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
         Model: ESD-S1C         
         Sector size (logical/physical): 512/512 bytes
-        Disk identifier (GUID): 932ED8E5-8B5C-4183-9967-56D7652C01DA
+        Disk identifier (GUID): [REDACTED]
         Partition table holds up to 128 entries
         Main partition table begins at sector 2 and ends at sector 33
         First usable sector is 34, last usable sector is 1953525134
@@ -98,7 +100,7 @@ workstation, we're betting that we will not suffer from this problem,
 after hearing a report from another Debian developer running this
 setup on their workstation successfully.
 
-# Creating "pools"
+# Creating pools
 
 ZFS pools are somewhat like "volume groups" if you are familiar with
 LVM, except they obviously also do things like RAID-10. (Even though
@@ -212,7 +214,7 @@ Also, the [FreeBSD handbook quick start](https://docs.freebsd.org/en/books/handb
 about their first example, which is with a single disk. So I am
 reassured at least.
 
-# Creating filesystems AKA "datasets"
+# Creating mount points
 
 Next we create the actual filesystems, known as "datasets" which are
 the things that get mounted on a mountpoint and hold the actual files.
@@ -878,7 +880,7 @@ this.
 
 # References
 
-### ZFS documentation
+## ZFS documentation
 
  * [Debian wiki page](https://wiki.debian.org/ZFS): good introduction, basic commands, some
    advanced stuff

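As a quick sanity check on the `sgdisk -p` output in the diff above, the advertised capacity can be recomputed from the sector count. This is just an illustrative one-liner, not part of the original procedure:

```shell
# 1953525168 logical sectors of 512 bytes should come out to 931.5 GiB
awk 'BEGIN { printf "%.1f GiB\n", 1953525168 * 512 / (1024 ^ 3) }'
```

The same arithmetic is handy for checking that partition sizes add up after a `sgdisk -n` run.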
add toc
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 90ab321e..244d1993 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -10,6 +10,8 @@ because I find it too confusing and unreliable.
 
 So off we go.
 
+[[!toc levels=3]]
+
 # Installation
 
 Since this is a conversion (and not a new install), our procedure is

fix heading
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 1fed42b3..90ab321e 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -614,7 +614,7 @@ then you mount the root filesystem and all the others:
     mount -t tmpfs tmpfs /mnt/run &&
     mkdir /mnt/run/lock
 
-# Remaining work
+# Remaining issues
 
 TODO: swap. how do we do it?
 

document lockups
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index c6d12eb0..1fed42b3 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -618,8 +618,6 @@ then you mount the root filesystem and all the others:
 
 TODO: swap. how do we do it?
 
-TODO: talk about the lockups during migration
-
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 
 TODO: ship my own .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
@@ -708,6 +706,174 @@ reporting the right timestamps in the end, although it does feel like
 *starting* all the processes (even if not doing any work yet) could
 skew the results.
 
+## Hangs during procedure
+
+During the procedure, it happened a few times that any ZFS command
+would completely hang. It seems that using an external USB drive to
+sync stuff didn't work so well: sometimes it would reconnect under a
+different device name (from `sdc` to `sdd`, for example), and this
+would greatly confuse ZFS.
+
+Here, for example, is `sdd` reappearing out of the blue:
+
+    May 19 11:22:53 curie kernel: [  699.820301] scsi host4: uas
+    May 19 11:22:53 curie kernel: [  699.820544] usb 2-1: authorized to connect
+    May 19 11:22:53 curie kernel: [  699.922433] scsi 4:0:0:0: Direct-Access     ROG      ESD-S1C          0    PQ: 0 ANSI: 6
+    May 19 11:22:53 curie kernel: [  699.923235] sd 4:0:0:0: Attached scsi generic sg2 type 0
+    May 19 11:22:53 curie kernel: [  699.923676] sd 4:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
+    May 19 11:22:53 curie kernel: [  699.923788] sd 4:0:0:0: [sdd] Write Protect is off
+    May 19 11:22:53 curie kernel: [  699.923949] sd 4:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+    May 19 11:22:53 curie kernel: [  699.924149] sd 4:0:0:0: [sdd] Optimal transfer size 33553920 bytes
+    May 19 11:22:53 curie kernel: [  699.961602]  sdd: sdd1 sdd2 sdd3 sdd4
+    May 19 11:22:53 curie kernel: [  699.996083] sd 4:0:0:0: [sdd] Attached SCSI disk
+
+The next time I ran a ZFS command (say `zpool list`), the command
+completely hung (`D` state) and this came up in the logs:
+
+    May 19 11:34:21 curie kernel: [ 1387.914843] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=71344128 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914859] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=205565952 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914874] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272789504 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914906] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=270336 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.914932] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073225728 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.914948] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073487872 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.915165] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272793600 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.915183] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=339853312 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.915648] WARNING: Pool 'bpool' has encountered an uncorrectable I/O failure and has been suspended.
+    May 19 11:34:21 curie kernel: [ 1387.915648] 
+    May 19 11:37:25 curie kernel: [ 1571.558614] task:txg_sync        state:D stack:    0 pid:  997 ppid:     2 flags:0x00004000
+    May 19 11:37:25 curie kernel: [ 1571.558623] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.558640]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.558650]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.558670]  schedule_timeout+0x8b/0x140
+    May 19 11:37:25 curie kernel: [ 1571.558675]  ? __next_timer_interrupt+0x110/0x110
+    May 19 11:37:25 curie kernel: [ 1571.558678]  io_schedule_timeout+0x4c/0x80
+    May 19 11:37:25 curie kernel: [ 1571.558689]  __cv_timedwait_common+0x12b/0x160 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.558694]  ? add_wait_queue_exclusive+0x70/0x70
+    May 19 11:37:25 curie kernel: [ 1571.558702]  __cv_timedwait_io+0x15/0x20 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.558816]  zio_wait+0x129/0x2b0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.558929]  dsl_pool_sync+0x461/0x4f0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559032]  spa_sync+0x575/0xfa0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559138]  ? spa_txg_history_init_io+0x101/0x110 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559245]  txg_sync_thread+0x2e0/0x4a0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559354]  ? txg_fini+0x240/0x240 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559366]  thread_generic_wrapper+0x6f/0x80 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.559376]  ? __thread_exit+0x20/0x20 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.559379]  kthread+0x11b/0x140
+    May 19 11:37:25 curie kernel: [ 1571.559382]  ? __kthread_bind_mask+0x60/0x60
+    May 19 11:37:25 curie kernel: [ 1571.559386]  ret_from_fork+0x22/0x30
+    May 19 11:37:25 curie kernel: [ 1571.559401] task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
+    May 19 11:37:25 curie kernel: [ 1571.559404] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.559409]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.559412]  ? __kmalloc_node+0x141/0x2b0
+    May 19 11:37:25 curie kernel: [ 1571.559417]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559420]  schedule_preempt_disabled+0xa/0x10
+    May 19 11:37:25 curie kernel: [ 1571.559424]  __mutex_lock.constprop.0+0x133/0x460
+    May 19 11:37:25 curie kernel: [ 1571.559435]  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
+    May 19 11:37:25 curie kernel: [ 1571.559537]  spa_all_configs+0x41/0x120 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559644]  zfs_ioc_pool_configs+0x17/0x70 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559752]  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559758]  ? _copy_from_user+0x28/0x60
+    May 19 11:37:25 curie kernel: [ 1571.559860]  zfsdev_ioctl+0x53/0xe0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559866]  __x64_sys_ioctl+0x83/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559869]  do_syscall_64+0x33/0x80
+    May 19 11:37:25 curie kernel: [ 1571.559873]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    May 19 11:37:25 curie kernel: [ 1571.559876] RIP: 0033:0x7fcf0ef32cc7
+    May 19 11:37:25 curie kernel: [ 1571.559878] RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    May 19 11:37:25 curie kernel: [ 1571.559881] RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
+    May 19 11:37:25 curie kernel: [ 1571.559883] RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
+    May 19 11:37:25 curie kernel: [ 1571.559885] RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
+    May 19 11:37:25 curie kernel: [ 1571.559886] R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
+    May 19 11:37:25 curie kernel: [ 1571.559888] R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
+    May 19 11:37:25 curie kernel: [ 1571.559980] task:zpool           state:D stack:    0 pid:11815 ppid:  3816 flags:0x00004000
+    May 19 11:37:25 curie kernel: [ 1571.559983] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.559988]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.559992]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559995]  io_schedule+0x42/0x70
+    May 19 11:37:25 curie kernel: [ 1571.560004]  cv_wait_common+0xac/0x130 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.560008]  ? add_wait_queue_exclusive+0x70/0x70
+    May 19 11:37:25 curie kernel: [ 1571.560118]  txg_wait_synced_impl+0xc9/0x110 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560223]  txg_wait_synced+0xc/0x40 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560325]  spa_export_common+0x4cd/0x590 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560430]  ? zfs_log_history+0x9c/0xf0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560537]  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560543]  ? _copy_from_user+0x28/0x60
+    May 19 11:37:25 curie kernel: [ 1571.560644]  zfsdev_ioctl+0x53/0xe0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560649]  __x64_sys_ioctl+0x83/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.560653]  do_syscall_64+0x33/0x80
+    May 19 11:37:25 curie kernel: [ 1571.560656]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    May 19 11:37:25 curie kernel: [ 1571.560659] RIP: 0033:0x7fdc23be2cc7
+    May 19 11:37:25 curie kernel: [ 1571.560661] RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    May 19 11:37:25 curie kernel: [ 1571.560664] RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
+    May 19 11:37:25 curie kernel: [ 1571.560666] RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
+    May 19 11:37:25 curie kernel: [ 1571.560667] RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
+    May 19 11:37:25 curie kernel: [ 1571.560669] R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
+    May 19 11:37:25 curie kernel: [ 1571.560671] R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
+
+Here's another example, where you see the USB controller bleeping out
+and back into existence:
+
+    mai 19 11:38:39 curie kernel: usb 2-1: USB disconnect, device number 2
+    mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronizing SCSI cache
+    mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
+    mai 19 11:39:25 curie kernel: INFO: task zed:1564 blocked for more than 241 seconds.
+    mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
+    mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+    mai 19 11:39:25 curie kernel: task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
+    mai 19 11:39:25 curie kernel: Call Trace:
+    mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
+    mai 19 11:39:25 curie kernel:  ? __kmalloc_node+0x141/0x2b0
+    mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
+    mai 19 11:39:25 curie kernel:  schedule_preempt_disabled+0xa/0x10
+    mai 19 11:39:25 curie kernel:  __mutex_lock.constprop.0+0x133/0x460
+    mai 19 11:39:25 curie kernel:  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
+    mai 19 11:39:25 curie kernel:  spa_all_configs+0x41/0x120 [zfs]
+    mai 19 11:39:25 curie kernel:  zfs_ioc_pool_configs+0x17/0x70 [zfs]
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
+    mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
+    mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
+    mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    mai 19 11:39:25 curie kernel: RIP: 0033:0x7fcf0ef32cc7
+    mai 19 11:39:25 curie kernel: RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
+    mai 19 11:39:25 curie kernel: RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
+    mai 19 11:39:25 curie kernel: RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
+    mai 19 11:39:25 curie kernel: R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
+    mai 19 11:39:25 curie kernel: R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
+    mai 19 11:39:25 curie kernel: INFO: task zpool:11815 blocked for more than 241 seconds.
+    mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
+    mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+    mai 19 11:39:25 curie kernel: task:zpool           state:D stack:    0 pid:11815 ppid:  2621 flags:0x00004004
+    mai 19 11:39:25 curie kernel: Call Trace:
+    mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
+    mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
+    mai 19 11:39:25 curie kernel:  io_schedule+0x42/0x70
+    mai 19 11:39:25 curie kernel:  cv_wait_common+0xac/0x130 [spl]
+    mai 19 11:39:25 curie kernel:  ? add_wait_queue_exclusive+0x70/0x70
+    mai 19 11:39:25 curie kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
+    mai 19 11:39:25 curie kernel:  txg_wait_synced+0xc/0x40 [zfs]
+    mai 19 11:39:25 curie kernel:  spa_export_common+0x4cd/0x590 [zfs]
+    mai 19 11:39:25 curie kernel:  ? zfs_log_history+0x9c/0xf0 [zfs]
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
+    mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
+    mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
+    mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    mai 19 11:39:25 curie kernel: RIP: 0033:0x7fdc23be2cc7
+    mai 19 11:39:25 curie kernel: RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
+    mai 19 11:39:25 curie kernel: RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
+    mai 19 11:39:25 curie kernel: RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
+    mai 19 11:39:25 curie kernel: R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
+    mai 19 11:39:25 curie kernel: R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
+
+I understand those are rather extreme conditions: I would fully expect
+the pool to stop working if the underlying drives disappear. What
+doesn't seem acceptable is that a command would completely hang like
+this.
+
 # References
 
 ### ZFS documentation

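A quick way to confirm that ZFS commands are stuck in uninterruptible sleep (the `D` state mentioned above) is to filter `ps` output by process state. Here the `ps` lines are simulated with `printf` so the filter can be shown on its own; the PIDs and names are taken from the traces above:

```shell
# filter tasks in uninterruptible sleep (state D); in real use, replace
# the printf with:  ps -eo state=,pid=,comm=
printf '%s\n' 'S 1 systemd' 'D 997 txg_sync' 'D 1564 zed' |
awk '$1 == "D" { print $2, $3 }'
```

If `txg_sync` or `zed` show up here for more than a moment, the pool is likely suspended, as in the logs above.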
move fio discussion to appendix
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 4861eac8..c6d12eb0 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -592,81 +592,6 @@ Another test was performed while in "rescue" mode but was ultimately
 lost. It's actually still in the old M.2 drive, but I cannot mount
 that device with the external USB controller I have right now.
 
-## Side note about fio job files
-
-I would love to have just a single `.fio` job file that lists multiple
-jobs to run *serially*. For example, this file describes the above
-workload pretty well:
-
-[[!format txt """
-[global]
-# cargo-culting Salter
-fallocate=none
-ioengine=posixaio
-runtime=60
-time_based=1
-end_fsync=1
-stonewall=1
-group_reporting=1
-# no need to drop caches, done by default
-# invalidate=1
-
-# Single 4KiB random read/write process
-[randread-4k-4g-1x]
-stonewall=1
-rw=randread
-bs=4k
-size=4g
-numjobs=1
-iodepth=1
-
-[randwrite-4k-4g-1x]
-stonewall=1
-rw=randwrite
-bs=4k
-size=4g
-numjobs=1
-iodepth=1
-
-# 16 parallel 64KiB random read/write processes:
-[randread-64k-256m-16x]
-stonewall=1
-rw=randread
-bs=64k
-size=256m
-numjobs=16
-iodepth=16
-
-[randwrite-64k-256m-16x]
-stonewall=1
-rw=randwrite
-bs=64k
-size=256m
-numjobs=16
-iodepth=16
-
-# Single 1MiB random read/write process
-[randread-1m-16g-1x]
-stonewall=1
-rw=randread
-bs=1m
-size=16g
-numjobs=1
-iodepth=1
-
-[randwrite-1m-16g-1x]
-stonewall=1
-rw=randwrite
-bs=1m
-size=16g
-numjobs=1
-iodepth=1
-"""]]
-
-... except the jobs are actually run in parallel, even though they are
-`stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
-to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
-
 # Recovery procedures
 
 For test purposes, I unmounted all systems during the procedure:
@@ -705,11 +630,84 @@ TODO: send/recv, automated snapshots
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+## fio improvements
+
 I really want to improve my experience with `fio`. Right now, I'm just
 cargo-culting stuff from other folks and I don't really like
 it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
 that it doesn't really work that well for disk tests.
 
+I would love to have just a single `.fio` job file that lists multiple
+jobs to run *serially*. For example, this file describes the above
+workload pretty well:
+
+    [global]
+    # cargo-culting Salter
+    fallocate=none
+    ioengine=posixaio
+    runtime=60
+    time_based=1
+    end_fsync=1
+    stonewall=1
+    group_reporting=1
+    # no need to drop caches, done by default
+    # invalidate=1
+
+    # Single 4KiB random read/write process
+    [randread-4k-4g-1x]
+    rw=randread
+    bs=4k
+    size=4g
+    numjobs=1
+    iodepth=1
+
+    [randwrite-4k-4g-1x]
+    rw=randwrite
+    bs=4k
+    size=4g
+    numjobs=1
+    iodepth=1
+
+    # 16 parallel 64KiB random read/write processes:
+    [randread-64k-256m-16x]
+    rw=randread
+    bs=64k
+    size=256m
+    numjobs=16
+    iodepth=16
+
+    [randwrite-64k-256m-16x]
+    rw=randwrite
+    bs=64k
+    size=256m
+    numjobs=16
+    iodepth=16
+
+    # Single 1MiB random read/write process
+    [randread-1m-16g-1x]
+    rw=randread
+    bs=1m
+    size=16g
+    numjobs=1
+    iodepth=1
+
+    [randwrite-1m-16g-1x]
+    rw=randwrite
+    bs=1m
+    size=16g
+    numjobs=1
+    iodepth=1
+
+... except the jobs are actually started in parallel, even though they
+are `stonewall`'d, as far as I can tell by the reports. I [sent a
+mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u) to the [fio mailing list](https://lore.kernel.org/fio/) for clarification. 
+
+It looks like the jobs are *started* in parallel, but actually
+(correctly) run serially. It seems like this might just be a matter of
+reporting the right timestamps in the end, although it does feel like
+*starting* all the processes (even if not doing any work yet) could
+skew the results.
+
 # References
 
 ### ZFS documentation

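One possible workaround for the serial-jobs problem, assuming fio's `--section` option behaves as documented, is to run each job section in its own fio invocation. The sketch below uses a trimmed-down job file and an `echo` to keep it a dry run:

```shell
# write a trimmed-down copy of the job file discussed above
cat > /tmp/jobs.fio <<'EOF'
[global]
runtime=60

[randread-4k-4g-1x]
rw=randread

[randwrite-4k-4g-1x]
rw=randwrite
EOF

# run every non-global section serially; drop the echo to actually run fio
awk -F'[][]' '/^\[/ && $2 != "global" { print $2 }' /tmp/jobs.fio |
while read -r section; do
    echo fio --section="$section" /tmp/jobs.fio
done
```

This trades one combined report for one report per job, but it guarantees the jobs never overlap.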
migration completed
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 56403158..4861eac8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -260,6 +260,17 @@ the things that get mounted on mountpoint and hold the actual files.
 
    ... and no, just creating `/mnt/var/lib` doesn't fix that problem.
 
+   Also note that you will *probably* need to change the storage
+   driver in Docker; see the [zfs-driver documentation](https://docs.docker.com/storage/storagedriver/zfs-driver/) for details
+   but, basically, I did:
+   
+       echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
+
+   Note, as an aside, that podman has the same problem (and similar
+   solution):
+   
+       printf '[storage]\ndriver = "zfs"\n' > /etc/containers/storage.conf
+
  * make a `tmpfs` for `/run`:
 
         mkdir /mnt/run &&
@@ -376,14 +387,15 @@ seems to be at around 5Gbps:
 So it shouldn't cap at that speed. It's possible the USB adapter is
 failing to give me the full speed though.
 
-TODO: we are here
-
-TODO: ddrescue LVM setup to *other* NVMe drive, to allow for similar
-benchmarks later
-
-TODO: benchmark before, in single-user mode?
+At this point, we're about ready to do the final configuration. We
+drop to single user mode and do the rest of the procedure. That used
+to be `shutdown now`, but it seems like the systemd switch broke that,
+so now you can reboot into grub and pick the "recovery"
+option. Alternatively, you might try `systemctl rescue` (untested).
 
-TODO: rsync in single user mode, then continue below
+I also wanted to copy the drive over to another new NVMe drive, but
+that failed: it looks like the USB controller I have doesn't work with
+older, non-NVMe drives.
 
 # Boot configuration
 
@@ -422,7 +434,8 @@ Enable the service:
 
     systemctl enable zfs-import-bpool.service
 
-TODO: fstab? swap?
+I had to trim down `/etc/fstab` and `/etc/crypttab` to only contain
+references to the legacy filesystems (`/srv` is still BTRFS!).
 
 Rebuild boot loader with support for ZFS, but also to workaround
 GRUB's missing zpool-features support:
@@ -474,22 +487,19 @@ Exit chroot:
 
 # Finalizing
 
-TODO: move Docker to the right place:
+One last sync was done in rescue mode:
 
-    rm /var/lib/docker/
-    mv /home/docker/* /var/lib/docker/
-    rmdir /home/docker
-
-TODO: last sync in single user mode
+    for fs in /boot/ /boot/efi/ / /home/; do
+        echo "syncing $fs to /mnt$fs..." && 
+        rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
+    done
 
-Unmount filesystems:
+Then we unmount all filesystems:
  
     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
     zpool export -a
 
-TODO: reboot
-TODO: swap drives
-TODO: new benchmark
+Reboot, swap the drives, and boot in ZFS. Hurray!
 
 # Benchmarks
 
@@ -578,6 +588,10 @@ what it's worth. Those results are curiously inconsistent with the
 non-idle test: many tests perform more *poorly* than when the
 workstation was busy, which is troublesome.
 
+Another test was performed while in "rescue" mode but was ultimately
+lost. It's actually still in the old M.2 drive, but I cannot mount
+that device with the external USB controller I have right now.
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple
@@ -653,8 +667,34 @@ iodepth=1
 `stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
 to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
 
+# Recovery procedures
+
+For test purposes, I unmounted all systems during the procedure:
+
+    umount /mnt/boot/efi /mnt/boot/run
+    umount -a -t zfs
+    zpool export -a
+
+And disconnected the drive, to see how I would recover this system
+from another Linux system in case of a total motherboard failure.
+
+To import an existing pool, plug the device, then import the pool with
+an alternate root, so it doesn't mount over your existing filesystems,
+then you mount the root filesystem and all the others:
+
+    zpool import -l -a -R /mnt &&
+    zfs mount rpool/ROOT/debian &&
+    zfs mount -a &&
+    mount /dev/sdc2 /mnt/boot/efi &&
+    mount -t tmpfs tmpfs /mnt/run &&
+    mkdir /mnt/run/lock
+
 # Remaining work
 
+TODO: swap. how do we do it?
+
+TODO: talk about the lockups during migration
+
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 
 TODO: ship my own .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
@@ -662,6 +702,9 @@ here.
 
 TODO: send/recv, automated snapshots
 
+TODO: merge this documentation with the [[hardware/tubman]]
+documentation. maybe create a separate zfs primer?
+
 I really want to improve my experience with `fio`. Right now, I'm just
 cargo-culting stuff from other folks and I don't really like
 it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
diff --git a/hardware/tubman.md b/hardware/tubman.md
index 4b0fc1d6..368cd904 100644
--- a/hardware/tubman.md
+++ b/hardware/tubman.md
@@ -444,6 +444,28 @@ IO statistics, every second:
 
     zpool iostat 1
 
+### Mounting
+
+After a `zfs list`, you should see the datasets you can mount. You can
+mount one by name, for example with:
+
+    zfs mount bpool/BOOT/debian
+
+Note that it will mount the device in its pre-defined `mountpoint`
+property. If you want to mount it elsewhere, this is the magic
+formula:
+
+    mount -o zfsutil -t zfs bpool/BOOT/debian /mnt
+
+If the dataset is encrypted, however, you first need to unlock it
+with:
+
+    zpool import -l -a -R /mnt
+
+Note that the above is preferred: it will set the entire imported pool
+to mount under `/mnt` instead of the toplevel. That way you don't need
+the earlier hack to mount it elsewhere.
+
 ### Snapshots
 
 Creating:

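The unmount one-liner in the diff above relies on `tac` to reverse `mount`'s output, so that the most recently mounted (deepest) filesystems under `/mnt` are unmounted first, while ZFS datasets are excluded and left for `zpool export`. A simulated run with fake `mount` output shows the ordering:

```shell
# stand-in for `mount` output; ZFS mounts are skipped, the rest reversed
printf '%s\n' \
  'rpool/ROOT/debian on /mnt type zfs (rw)' \
  '/dev/sdc2 on /mnt/boot/efi type vfat (rw)' \
  'tmpfs on /mnt/run type tmpfs (rw)' |
grep -v zfs | tac | awk '/\/mnt/ { print $3 }'
```

Piping that into `xargs -i{} umount -lf {}` gives the command from the procedure.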
another benchmark set done
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 8ccfb228..56403158 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -146,9 +146,8 @@ This is a more typical pool creation.
         zpool create \
             -o ashift=12 \
             -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
-            -O acltype=posixacl -O xattr=sa \
+            -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
             -O compression=zstd \
-            -O dnodesize=auto \
             -O relatime=on \
             -O canmount=off \
             -O mountpoint=/ -R /mnt \
@@ -558,6 +557,27 @@ assume my work affected the benchmarks greatly.
    * read: 165MiB/s (173MB/s), sync: 172MiB/s (180MB/s)
    * write: 74.7MiB/s (78.3MB/s), sync: 38.5MiB/s (40.4MB/s)
 
+### Somewhat idle test, mdadm/luks/lvm/ext4
+
+This test was done while I was away from my workstation. Everything
+was still running, so a bunch of stuff was probably waking up and
+disturbing the test, but it should be more reliable than the above.
+
+ * 4k blocks, 4GB, 1 process:
+   * read: 16.8MiB/s (17.7MB/s), sync: 18.9MiB/s (19.8MB/s)
+   * write: 73.8MiB/s (77.3MB/s), sync: 847KiB/s (867kB/s)
+ * 64k blocks, 256MB, 16 process:
+   * read: 526MiB/s (552MB/s), sync: 520MiB/s (546MB/s)
+   * write: 98.3MiB/s (103MB/s), sync: 29.6MiB/s (30.0MB/s)
+ * 1m blocks, 16G 1 process:
+   * read: 148MiB/s (155MB/s), sync: 162MiB/s (170MB/s)
+   * write: 109MiB/s (114MB/s), sync: 48.6MiB/s (50.0MB/s)
+
+It looks like the 64k test is the one that can max out the SSD, for
+what it's worth. Those results are curiously inconsistent with the
+non-idle test: many tests perform more *poorly* than when the
+workstation was busy, which is troublesome.
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple

and power consumption improvements!
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 5baade96..e024a55d 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -88,7 +88,11 @@ Cons:
    than my current (2021-09-27) laptop (Purism 13v4, currently says
    7h). power problems confirmed by [this report from Linux After
    Dark][linux-after-dark-framework] which also mentions that the USB adapters take power *even
-   when not in use* and quite a bit (400mW in some cases!)
+   when not in use* and quite a bit (400mW in some cases!). Update:
+   apparently the [second generation laptop][] has improvements to
+   battery life, mainly thanks to the "big-little" design of the 12th
+   gen Intel chips, but also to better [standby consumption](https://news.ycombinator.com/item?id=31433666) and
+   [firmware updates for various chipsets](https://news.ycombinator.com/item?id=31434021)
 
 [linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
 
@@ -97,7 +101,10 @@ Cons:
    After Dark][linux-after-dark-framework]), so unlikely to have one in the future</del>
    Update: it seems like they cracked that nut and will ship an
    [ethernet expansion card](https://frame.work/ca/en/products/ethernet-expansion-card) in their [second generation
-   laptop](https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646), which is impressive
+   laptop][], which is impressive. Downside: the [chipset is
+   realtek](https://news.ycombinator.com/item?id=31434483), so probably firmware blobby.
+
+[second generation laptop]: https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646
 
  * a bit pricey for the performance, especially when compared to the
   competition (e.g. Dell XPS, Apple M1), but it may be worth waiting for

update: framework will ship an ethernet port, whoohoo!
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index f2020631..5baade96 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -92,9 +92,12 @@ Cons:
 
 [linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
 
- * no RJ-45 port, and attempts at designing ones are failing because
-   the modular plugs are too thin to fit (according to [Linux After
-   Dark][linux-after-dark-framework]), so unlikely to have one in the future
+ * <del>no RJ-45 port, and attempts at designing ones are failing
+   because the modular plugs are too thin to fit (according to [Linux
+   After Dark][linux-after-dark-framework]), so unlikely to have one in the future</del>
+   Update: it seems like they cracked that nut and will ship an
+   [ethernet expansion card](https://frame.work/ca/en/products/ethernet-expansion-card) in their [second generation
+   laptop](https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646), which is impressive
 
  * a bit pricey for the performance, especially when compared to the
   competition (e.g. Dell XPS, Apple M1), but it may be worth waiting for

more todos, typos
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 80c44b0d..8ccfb228 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -379,6 +379,9 @@ failing to give me the full speed though.
 
 TODO: we are here
 
+TODO: ddrescue LVM setup to *other* NVMe drive, to allow for similar
+benchmarks later
+
 TODO: benchmark before, in single-user mode?
 
 TODO: rsync in single user mode, then continue below
@@ -420,6 +423,8 @@ Enable the service:
 
     systemctl enable zfs-import-bpool.service
 
+TODO: fstab? swap?
+
 Rebuild boot loader with support for ZFS, but also to workaround
 GRUB's missing zpool-features support:
 
@@ -483,7 +488,9 @@ Unmount filesystems:
     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
     zpool export -a
 
-reboot to the new system.
+TODO: reboot
+TODO: swap drives
+TODO: new benchmark
 
 # Benchmarks
 
@@ -651,7 +658,7 @@ that it doesn't really work that well for disk tests.
  * [FreeBSD handbook](https://docs.freebsd.org/en/books/handbook/zfs/): FreeBSD-specific of course, but
    excellent as always
  * [OpenZFS FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html)
- * [OpenZFS: Debian buyllseye root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.htm): excellent documentation, basis
+ * [OpenZFS: Debian Bullseye root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html): excellent documentation, basis
    for the above procedure
  * [another ZFS on linux documentation](https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/)
 

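For reference, the `zfs-import-bpool.service` unit enabled in the diff above looks roughly like this, following the OpenZFS root-on-ZFS guide (a sketch; exact paths and ordering directives may differ between releases):

```ini
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
# import the boot pool without mounting it, ignoring the cache file
ExecStart=/sbin/zpool import -N -o cachefile=none bpool

[Install]
WantedBy=zfs-import.target
```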
fix blob format
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2f476f9a..80c44b0d 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -557,7 +557,7 @@ I would love to have just a single `.fio` job file that lists multiple
 jobs to run *serially*. For example, this file describes the above
 workload pretty well:
 
-[[!format """
+[[!format txt """
 [global]
 # cargo-culting Salter
 fallocate=none

move benchmark script to git
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index aeebebd3..2f476f9a 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -508,43 +508,10 @@ article](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-
 already pretty strange. But also it doesn't include stuff like
 dropping caches or repeating results.
 
-So here's my variation. 
-
-[[!format sh """
-#!/bin/sh
-
-set -e
-
-common_flags="--group_reporting --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
-
-while read type bs size jobs extra ; do
-    name="${type}${bs}${size}${jobs}x$extra"
-    echo "dropping caches..." >&2
-    sync
-    echo 3 > /proc/sys/vm/drop_caches
-    echo "running job $name..." >&2
-    fio $common_flags --name="$name" \
-        --rw="$type" \
-        --bs="$bs" \
-        --size="$size" \
-        --numjobs="$jobs" \
-        --iodepth="$jobs" \
-        $extra
-done <<EOF
-randread  4k 4g 1
-randwrite 4k 4g 1
-randread  64k 256m 16
-randwrite 64k 256m 16
-randread  1m 16g 1
-randwrite 1m 16g 1
-randread  4k 4g 1 --fsync=1
-randwrite 4k 4g 1 --fsync=1
-randread  64k 256m 16 --fsync=1
-randwrite 64k 256m 16 --fsync=1
-randread  1m 16g 1 --fsync=1
-randwrite 1m 16g 1 --fsync=1
-EOF
-"""]]
+So here's my variation, which i called [fio-ars-bench.sh](https://gitlab.com/anarcat/scripts/-/blob/main/fio-ars-bench.sh) for
+now. It just batches a bunch of `fio` tests, one by one, 60 seconds
+each. It should take about 12 minutes to run, as there are 3 pairs of
+tests, read/write, each with and without fsync.
 
 And before I show the results, it should be noted there is a huge
 caveat here. The test is done between:

some benchmarks
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 0249f8d4..aeebebd3 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -379,11 +379,9 @@ failing to give me the full speed though.
 
 TODO: we are here
 
-TODO: resync
+TODO: benchmark before, in single-user mode?
 
-TODO: benchmark before
-
-TODO: resync in single user mode, then 
+TODO: rsync in single user mode, then continue below
 
 # Boot configuration
 
@@ -517,7 +515,7 @@ So here's my variation.
 
 set -e
 
-common_flags="--group_reporting --minimal --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
+common_flags="--group_reporting --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
 
 while read type bs size jobs extra ; do
     name="${type}${bs}${size}${jobs}x$extra"
@@ -533,18 +531,18 @@ while read type bs size jobs extra ; do
         --iodepth="$jobs" \
         $extra
 done <<EOF
-randwrite 4k 4g 1
 randread  4k 4g 1
-randwrite 64k 256m 16
+randwrite 4k 4g 1
 randread  64k 256m 16
-randwrite 1m 16g 1
+randwrite 64k 256m 16
 randread  1m 16g 1
-randwrite 4k 4g 1 --fsync=1
+randwrite 1m 16g 1
 randread  4k 4g 1 --fsync=1
-randwrite 64k 256m 16 --fsync=1
+randwrite 4k 4g 1 --fsync=1
 randread  64k 256m 16 --fsync=1
-randwrite 1m 16g 1 --fsync=1
+randwrite 64k 256m 16 --fsync=1
 randread  1m 16g 1 --fsync=1
+randwrite 1m 16g 1 --fsync=1
 EOF
 """]]
 
@@ -568,6 +566,24 @@ not on reads. It's also possible it outperforms it on both, because
 it's a newer drive. A new test might be possible with a new external
 USB drive as well, although I doubt I will find the time to do this.
 
+## Results
+
+### Non-idle test, mdadm/luks/lvm/ext4
+
+Those tests were done with the above script, in `/home`, while working
+on other things on my workstation, which generally felt sluggish, so I
+assume my work affected the benchmarks greatly.
+
+ * 4k blocks, 4GB, 1 process:
+   * read: 21.5MiB/s (22.5MB/s), sync: 20.8MiB/s (21.9MB/s)
+   * write: 139MiB/s (146MB/s), sync: 1118KiB/s (1145kB/s)
+ * 64k blocks, 256MB, 16 processes:
+   * read: 513MiB/s (537MB/s), sync: 512MiB/s (537MB/s)
+   * write: 160MiB/s (167MB/s), sync: 41.5MiB/s (43.5MB/s)
+ * 1m blocks, 16GB, 1 process:
+   * read: 165MiB/s (173MB/s), sync: 172MiB/s (180MB/s)
+   * write: 74.7MiB/s (78.3MB/s), sync: 38.5MiB/s (40.4MB/s)
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple
@@ -640,8 +656,8 @@ iodepth=1
 """]]
 
 ... except the jobs are actually run in parallel, even though they are
-`stonewall`'d, as far as I can tell by the reports. I sent a mail to
-the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
+`stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
+to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
 
 # Remaining work
 
@@ -652,6 +668,11 @@ here.
 
 TODO: send/recv, automated snapshots
 
+I really want to improve my experience with `fio`. Right now, I'm just
+cargo-culting stuff from other folks and I don't really like
+it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
+that it doesn't really work that well for disk tests.
+
 # References
 
 ### ZFS documentation

expand on benchmarks
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2fda422d..0249f8d4 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -21,6 +21,10 @@ So, install the required packages, on the current system:
 
     apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
 
+We also tell DKMS that we need to rebuild the initrd when upgrading:
+
+    echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
+
 # Partitioning
 
 This is going to partition `/dev/sdc` with:
@@ -336,6 +340,30 @@ idea. At this point, the procedure was restarted all the way back to
 which, surprisingly, doesn't require any confirmation (`zpool destroy
 rpool`).
 
+The second run was cleaner:
+
+    root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
+            echo "syncing $fs to /mnt$fs..." && 
+            rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
+        done
+    syncing /boot/ to /mnt/boot/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
+    syncing /boot/efi/ to /mnt/boot/efi/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/110)  
+    syncing / to /mnt/...
+     28,019,033,070  97%   42.03MB/s    0:10:35 (xfr#703671, ir-chk=1093/833515)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
+    could not make way for new symlink: var/lib/docker
+     34,081,807,102  98%   44.84MB/s    0:12:04 (xfr#736580, to-chk=0/867723)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+    syncing /home/ to /mnt/home/...
+    rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
+    IO error encountered -- skipping file deletion
+     24,043,086,450  96%   62.03MB/s    0:06:09 (xfr#151819, ir-chk=15117/172571)
+    file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/4C1FDBFEA976FF924D062FB990B24B897A77B84B"
+    315,423,626,507  96%   67.09MB/s    1:14:43 (xfr#2256845, to-chk=0/2994364)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+
+
 Also note the transfer speed: we seem capped at 76MB/s, or
 608Mbit/s. This is not as fast as I was expecting: the USB connection
 seems to be at around 5Gbps:
@@ -349,14 +377,8 @@ seems to be at around 5Gbps:
 So it shouldn't cap at that speed. It's possible the USB adapter is
 failing to give me the full speed though.
 
-TODO: make a new paste
-
 TODO: we are here
 
-TODO:
-
-    echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
-
 TODO: resync
 
 TODO: benchmark before
@@ -478,7 +500,7 @@ This is a test that was ran in single-user mode using fio and the
 
         fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
 
- * Single 1MiB random write process
+ * Single 1MiB random write process:
 
         fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
 
@@ -488,7 +510,7 @@ article](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-
 already pretty strange. But also it doesn't include stuff like
 dropping caches or repeating results.
 
-So here's my variation
+So here's my variation. 
 
 [[!format sh """
 #!/bin/sh
@@ -497,14 +519,12 @@ set -e
 
 common_flags="--group_reporting --minimal --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
 
-# + --directory=/test?
-
 while read type bs size jobs extra ; do
     name="${type}${bs}${size}${jobs}x$extra"
-    echo "dropping caches..."
+    echo "dropping caches..." >&2
     sync
     echo 3 > /proc/sys/vm/drop_caches
-    echo "running job $name..."
+    echo "running job $name..." >&2
     fio $common_flags --name="$name" \
         --rw="$type" \
         --bs="$bs" \
@@ -528,6 +548,101 @@ randread  1m 16g 1 --fsync=1
 EOF
 """]]
 
+And before I show the results, it should be noted there is a huge
+caveat here. The test is done between:
+
+ * a WDC WDS500G1B0B-00AS40 SSD, a [WD blue M.2 2280 SSD](https://www.westerndigital.com/products/internal-drives/wd-blue-sata-m-2-ssd#WDS500G2B0B) (running
+   mdadm/LUKS/LVM/ext4), that is at least 5 years old, spec'd at
+   560MB/s read, 530MB/s write
+
+ * a brand new [WD blue SN550](https://www.westerndigital.com/products/internal-drives/wd-blue-sn550-nvme-ssd#WDS500G2B0C) drive, which claims to be able to
+   push 2400MB/s read and 1750MB/s write
+
+In practice, I'm going to assume we'll never reach those numbers,
+because we're not actually talking NVMe over that USB bridge, so the
+bottleneck isn't the disk itself. For our purposes, it might still give us useful results.
+
+My bias, before building, running and analysing those results is that
+ZFS should outperform the traditional stack on writes, but possibly
+not on reads. It's also possible it outperforms it on both, because
+it's a newer drive. A new test might be possible with a new external
+USB drive as well, although I doubt I will find the time to do this.
+
+## Side note about fio job files
+
+I would love to have just a single `.fio` job file that lists multiple
+jobs to run *serially*. For example, this file describes the above
+workload pretty well:
+
+[[!format """
+[global]
+# cargo-culting Salter
+fallocate=none
+ioengine=posixaio
+runtime=60
+time_based=1
+end_fsync=1
+stonewall=1
+group_reporting=1
+# no need to drop caches, done by default
+# invalidate=1
+
+# Single 4KiB random read/write process
+[randread-4k-4g-1x]
+stonewall=1
+rw=randread
+bs=4k
+size=4g
+numjobs=1
+iodepth=1
+
+[randwrite-4k-4g-1x]
+stonewall=1
+rw=randwrite
+bs=4k
+size=4g
+numjobs=1
+iodepth=1
+
+# 16 parallel 64KiB random read/write processes:
+[randread-64k-256m-16x]
+stonewall=1
+rw=randread
+bs=64k
+size=256m
+numjobs=16
+iodepth=16
+
+[randwrite-64k-256m-16x]
+stonewall=1
+rw=randwrite
+bs=64k
+size=256m
+numjobs=16
+iodepth=16
+
+# Single 1MiB random read/write process
+[randread-1m-16g-1x]
+stonewall=1
+rw=randread
+bs=1m
+size=16g
+numjobs=1
+iodepth=1
+
+[randwrite-1m-16g-1x]
+stonewall=1
+rw=randwrite
+bs=1m
+size=16g
+numjobs=1
+iodepth=1
+"""]]
+
+... except the jobs are actually run in parallel, even though they are
+`stonewall`'d, as far as I can tell by the reports. I sent a mail to
+the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
+
 # Remaining work
 
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!

update zfs migration
had to rebuild the pools because utf8only is crap (and i don't only
have utf8)
i screwed up smartctl and sgdisk commands (scary) so i don't actually
know the block size.
actually use zstd compression
start the first sync
design a benchmark procedure
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index d76dc3d7..2fda422d 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -36,27 +36,26 @@ This is going to partition `/dev/sdc` with:
         sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
         sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
 
-It looks like this:
-
-    root@curie:~# sgdisk -p /dev/sdb
-    Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
-    Model: WDC WD10JPLX-00M
-    Sector size (logical/physical): 512/4096 bytes
-    Disk identifier (GUID): D8806C02-B5A6-4705-ACA9-F5A92F98C2D1
-    Partition table holds up to 128 entries
-    Main partition table begins at sector 2 and ends at sector 33
-    First usable sector is 34, last usable sector is 1953525134
-    Partitions will be aligned on 2048-sector boundaries
-    Total free space is 3437 sectors (1.7 MiB)
-
-    Number  Start (sector)    End (sector)  Size       Code  Name
-       1            2048          411647   200.0 MiB   EF00  EFI System Partition
-       2          411648         2508799   1024.0 MiB  8300  
-       3         2508800        18958335   7.8 GiB     8300  
-       4        18958336      1953523711   922.5 GiB   8300
-
-This, by the way, says the device has 4KB sector size. `smartctl`
-agrees as well:
+        root@curie:/home/anarcat# sgdisk -p /dev/sdc
+        Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
+        Model: ESD-S1C         
+        Sector size (logical/physical): 512/512 bytes
+        Disk identifier (GUID): 932ED8E5-8B5C-4183-9967-56D7652C01DA
+        Partition table holds up to 128 entries
+        Main partition table begins at sector 2 and ends at sector 33
+        First usable sector is 34, last usable sector is 1953525134
+        Partitions will be aligned on 16-sector boundaries
+        Total free space is 14 sectors (7.0 KiB)
+
+        Number  Start (sector)    End (sector)  Size       Code  Name
+           1              48            2047   1000.0 KiB  EF02  
+           2            2048         1050623   512.0 MiB   EF00  
+           3         1050624         3147775   1024.0 MiB  BF01  
+           4         3147776      1953525134   930.0 GiB   BF00
+
+Unfortunately, we can't be sure of the sector size here, because the
+USB controller is probably lying to us about it. Normally, this
+`smartctl` command should tell us the sector size as well:
 
     root@curie:~# smartctl -i /dev/sdb -qnoserial
     smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
@@ -77,8 +76,13 @@ agrees as well:
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled
 
+The above is the output for the builtin HDD drive. But the SSD device
+enclosed in that USB controller [doesn't support SMART commands](https://www.smartmontools.org/ticket/1054),
+so we can't trust that it really has 512-byte sectors.
+
 This matters because we need to tweak the `ashift` value
-correctly. 4KB means `ashift=12`.
+correctly. We're going to assume the SSD drive has the common 4KB
+sector size, which means `ashift=12`.
 
 Note here that we are *not* creating a separate partition for
 swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and
@@ -137,11 +141,10 @@ This is a more typical pool creation.
 
         zpool create \
             -o ashift=12 \
-            -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
+            -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
             -O acltype=posixacl -O xattr=sa \
-            -O compression=lz4 \
+            -O compression=zstd \
             -O dnodesize=auto \
-            -O normalization=formD \
             -O relatime=on \
             -O canmount=off \
             -O mountpoint=/ -R /mnt \
@@ -160,8 +163,6 @@ Breaking this down:
  * `-O compression=zstd`: enable [zstd](https://en.wikipedia.org/wiki/Zstd) compression, can be
    disabled/enabled by dataset to with `zfs set compression=off
    rpool/example`
- * `-O normalization=formD`: normalize file names on comparisons (not
-   storage), implies `utf8only=on`
  * `-O relatime=on`: classic `atime` optimisation, another that could
    be used on a busy server is `atime=off`
  * `-O canmount=off`: do not make the pool mount automatically with
@@ -171,7 +172,14 @@ Breaking this down:
 
 Those settings are all available in [zfsprops(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zfsprops.8.en.html). Other flags are
 defined in [zpool-create(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zpool-create.8.en.html). The reasoning behind them is also
-explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian wiki](https://wiki.debian.org/ZFS#Advanced_Topics).
+explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian
+wiki](https://wiki.debian.org/ZFS#Advanced_Topics). Those flags were actually not used:
+
+ * `-O normalization=formD`: normalize file names on comparisons (not
+   storage), implies `utf8only=on`, which is a [bad idea](https://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames) (and
+   effectively meant my first sync failed to copy some files,
+   including [this folder from a supysonic checkout](https://github.com/spl0k/supysonic/tree/270fa9883b2f2bc98f1482a68f7d9022017af50b/tests/assets/%E6)). And this
+   cannot be changed after the filesystem is created: bad, bad, bad.
 
 ## Side note about single-disk pools
 
@@ -277,16 +285,71 @@ like this:
     rpool/var/lib/docker  899G  256K  899G   1% /mnt/var/lib/docker
     /dev/sdc2             511M  4.0K  511M   1% /mnt/boot/efi
 
+Now that we have everything setup and mounted, let's copy all files
+over.
 
+# Copying files
 
-# Copy files over
+This is a list of all the mounted filesystems:
 
     for fs in /boot/ /boot/efi/ / /home/; do
         echo "syncing $fs to /mnt$fs..." && 
         rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
     done
 
-TODO: paste what it looked like
+You can check that the list is correct with:
+
+    mount -l -t ext4,btrfs,vfat | awk '{print $3}'
+
+Note that we skip `/srv` as it's on a different disk.
+
+On the first run, we had:
+
+    root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
+            echo "syncing $fs to /mnt$fs..." && 
+            rsync -aSHAXx --info=progress2 $fs /mnt$fs
+        done
+    syncing /boot/ to /mnt/boot/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
+    syncing /boot/efi/ to /mnt/boot/efi/...
+         16,831,437 100%  184.14MB/s    0:00:00 (xfr#101, to-chk=0/110)
+    syncing / to /mnt/...
+     28,019,293,280  94%   47.63MB/s    0:09:21 (xfr#703710, ir-chk=6748/839220)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
+    could not make way for new symlink: var/lib/docker
+     34,081,267,990  98%   50.71MB/s    0:10:40 (xfr#736577, to-chk=0/867732)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+    syncing /home/ to /mnt/home/...
+    rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
+     24,456,268,098  98%   68.03MB/s    0:05:42 (xfr#159867, ir-chk=6875/172377) 
+    file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/B3AB0CDA9C4454B3C1197E5A22669DF8EE849D90"
+    199,762,528,125  93%   74.82MB/s    0:42:26 (xfr#1437846, ir-chk=1018/1983979)rsync: [generator] recv_generator: mkdir "/mnt/home/anarcat/dist/supysonic/tests/assets/\#346" failed: Invalid or incomplete multibyte or wide character (84)
+    *** Skipping any contents from this failed directory ***
+    315,384,723,978  96%   76.82MB/s    1:05:15 (xfr#2256473, to-chk=0/2993950)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+
+Note the failure to transfer that supysonic file? It turns out they
+had a [weird filename in their source tree](https://github.com/spl0k/supysonic/pull/183), since then removed,
+but it still showed how the `utf8only` feature might not be such a bad
+idea. At this point, the procedure was restarted all the way back to
+"Creating pools", after unmounting all ZFS filesystems (`umount
+/mnt/run /mnt/boot/efi && umount -t zfs -a`) and destroying the pool,
+which, surprisingly, doesn't require any confirmation (`zpool destroy
+rpool`).
+
+Also note the transfer speed: we seem capped at 76MB/s, or
+608Mbit/s. This is not as fast as I was expecting: the USB connection
+seems to be at around 5Gbps:
+
+    anarcat@curie:~$ lsusb -tv | head -4
+    /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
+        ID 1d6b:0003 Linux Foundation 3.0 root hub
+        |__ Port 1: Dev 4, If 0, Class=Mass Storage, Driver=uas, 5000M
+            ID 0b05:1932 ASUSTek Computer, Inc.
+
+So it shouldn't cap at that speed. It's possible the USB adapter is
+failing to give me the full speed though.
+
+TODO: make a new paste
 
 TODO: we are here
 
@@ -402,6 +465,69 @@ Unmount filesystems:
 
 reboot to the new system.
 
+# Benchmarks
+
+This is a test that was ran in single-user mode using fio and the
+[Ars Technica recommended tests](https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/), which are:
+
+ * Single 4KiB random write process:
+
+        fio --name=randwrite4k1x --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
+
+ * 16 parallel 64KiB random write processes:
+
+        fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
+
+ * Single 1MiB random write process
+
+        fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

(Diff truncated)
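Since the diff above notes that the USB bridge may be lying about sector sizes, it can help to cross-check what the kernel believes before settling on `ashift` (a sketch; `/dev/sdc` follows the article, and a bridge that lies to `smartctl` may of course lie to these tools too):

```shell
# logical and physical sector sizes as seen by the kernel
blockdev --getss --getpbsz /dev/sdc
lsblk -d -o NAME,LOG-SEC,PHY-SEC /dev/sdc

# ashift is log2 of the physical sector size: 512 -> 9, 4096 -> 12
psz=4096   # substitute the value reported above
ashift=0; n="$psz"
while [ "$n" -gt 1 ]; do n=$((n / 2)); ashift=$((ashift + 1)); done
echo "ashift=$ashift"
```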
start working on migrating my workstation to ZFS
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
new file mode 100644
index 00000000..d76dc3d7
--- /dev/null
+++ b/blog/zfs-migration.md
@@ -0,0 +1,429 @@
+In my [[hardware/tubman]] setup, I started using ZFS on an old server
+I had lying around. The machine is really old though (2011!) and it
+"feels" pretty slow. I want to see how much of that is ZFS and how
+much is the machine. Synthetic benchmarks [show that ZFS may be slower
+than mdadm in RAID-10 or RAID-6 configuration](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-disks-two-filesystems-one-winner/), so I want to
+confirm that on a live workload: my workstation. Plus, I want easy,
+regular, high performance backups (with send/receive snapshots) and
+there's no way I'm going to use [[BTRFS|2022-05-13-brtfs-notes]]
+because I find it too confusing and unreliable.
+
+So off we go.
+
+# Installation
+
+Since this is a conversion (and not a new install), our procedure is
+slightly different than the [official documentation](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html) but otherwise
+it's pretty much in the same spirit: we're going to use ZFS for
+everything, including the root filesystem.
+
+So, install the required packages, on the current system:
+
+    apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
+
+# Partitioning
+
+This is going to partition `/dev/sdc` with:
+
+ * 1MB MBR / BIOS legacy boot
+ * 512MB EFI boot
+ * 1GB bpool, unencrypted pool for /boot
+ * rest of the disk for zpool, the rest of the data
+
+        sgdisk --zap-all /dev/sdc
+        sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/sdc
+        sgdisk     -n2:1M:+512M   -t2:EF00 /dev/sdc
+        sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
+        sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
+
+It looks like this:
+
+    root@curie:~# sgdisk -p /dev/sdb
+    Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
+    Model: WDC WD10JPLX-00M
+    Sector size (logical/physical): 512/4096 bytes
+    Disk identifier (GUID): D8806C02-B5A6-4705-ACA9-F5A92F98C2D1
+    Partition table holds up to 128 entries
+    Main partition table begins at sector 2 and ends at sector 33
+    First usable sector is 34, last usable sector is 1953525134
+    Partitions will be aligned on 2048-sector boundaries
+    Total free space is 3437 sectors (1.7 MiB)
+
+    Number  Start (sector)    End (sector)  Size       Code  Name
+       1            2048          411647   200.0 MiB   EF00  EFI System Partition
+       2          411648         2508799   1024.0 MiB  8300  
+       3         2508800        18958335   7.8 GiB     8300  
+       4        18958336      1953523711   922.5 GiB   8300
+
+This, by the way, says the device has 4KB sector size. `smartctl`
+agrees as well:
+
+    root@curie:~# smartctl -i /dev/sdb -qnoserial
+    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
+    Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
+
+    === START OF INFORMATION SECTION ===
+    Model Family:     Western Digital Black Mobile
+    Device Model:     WDC WD10JPLX-00MBPT0
+    Firmware Version: 01.01H01
+    User Capacity:    1 000 204 886 016 bytes [1,00 TB]
+    Sector Sizes:     512 bytes logical, 4096 bytes physical
+    Rotation Rate:    7200 rpm
+    Form Factor:      2.5 inches
+    Device is:        In smartctl database [for details use: -P show]
+    ATA Version is:   ATA8-ACS T13/1699-D revision 6
+    SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
+    Local Time is:    Tue May 17 13:33:04 2022 EDT
+    SMART support is: Available - device has SMART capability.
+    SMART support is: Enabled
+
+This matters because we need to tweak the `ashift` value
+correctly. 4KB means `ashift=12`.
+
+Note here that we are *not* creating a separate partition for
+swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and
+that issue is [still not fixed upstream](https://github.com/openzfs/zfs/issues/7734). [Ubuntu recommends using
+a separate partition for swap instead](https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847628). But since this is "just" a
+workstation, we're betting that we will not suffer from this problem,
+after hearing a report from another Debian developer running this
+setup on their workstation successfully.
+
+# Creating "pools"
+
+ZFS pools are somewhat like "volume groups" if you are familiar with
+LVM, except they obviously also do things like RAID-10. (Even though
+LVM can technically [also do RAID](https://manpages.debian.org/bullseye/lvm2/lvmraid.7.en.html), people typically use [mdadm](https://manpages.debian.org/bullseye/mdadm/mdadm.8.en.html)
+instead.) 
+
+In any case, the guide suggests creating two different pools here:
+one, in cleartext, for boot, and a separate, encrypted one, for the
+rest. Technically, the boot partition is required because the Grub
+bootloader only supports readonly ZFS pools, from what I
+understand. But I'm a little out of my depth here and just following
+the guide.
+
+## Boot pool creation
+
+This creates the boot pool in readonly mode with features that grub
+supports:
+
+        zpool create \
+            -o cachefile=/etc/zfs/zpool.cache \
+            -o ashift=12 -d \
+            -o feature@async_destroy=enabled \
+            -o feature@bookmarks=enabled \
+            -o feature@embedded_data=enabled \
+            -o feature@empty_bpobj=enabled \
+            -o feature@enabled_txg=enabled \
+            -o feature@extensible_dataset=enabled \
+            -o feature@filesystem_limits=enabled \
+            -o feature@hole_birth=enabled \
+            -o feature@large_blocks=enabled \
+            -o feature@lz4_compress=enabled \
+            -o feature@spacemap_histogram=enabled \
+            -o feature@zpool_checkpoint=enabled \
+            -O acltype=posixacl -O canmount=off \
+            -O compression=lz4 \
+            -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
+            -O mountpoint=/boot -R /mnt \
+            bpool /dev/sdc3
+
+I haven't investigated all those settings and just trust the upstream
+guide on the above.
+
+## Main pool creation
+
+This is a more typical pool creation.
+
+        zpool create \
+            -o ashift=12 \
+            -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
+            -O acltype=posixacl -O xattr=sa \
+            -O compression=lz4 \
+            -O dnodesize=auto \
+            -O normalization=formD \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/ -R /mnt \
+            rpool /dev/sdc4
+
+Breaking this down:
+
+ * `-o ashift=12`: mentioned above, 4k sector size
+ * `-O encryption=on -O keylocation=prompt -O keyformat=passphrase`:
+   encryption, prompt for a password, default algorithm is
+   `aes-256-gcm`, explicit in the guide, made implicit here
+ * `-O acltype=posixacl -O xattr=sa`: enable ACLs, with better
+   performance (not enabled by default)
+ * `-O dnodesize=auto`: related to extended attributes, less
+   compatibility with other implementations
+ * `-O compression=zstd`: enable [zstd](https://en.wikipedia.org/wiki/Zstd) compression, can be
+   disabled/enabled by dataset to with `zfs set compression=off
+   rpool/example`
+ * `-O normalization=formD`: normalize file names on comparisons (not
+   storage), implies `utf8only=on`
+ * `-O relatime=on`: classic `atime` optimisation, another that could
+   be used on a busy server is `atime=off`
+ * `-O canmount=off`: do not make the pool mount automatically with
+   `mount -a`?
+ * `-O mountpoint=/ -R /mnt`: mount pool on `/` in the future, but
+   `/mnt` for now
+
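The flags above can be sanity-checked after the fact. A quick sketch, assuming the `rpool` name from the example and that the pool was actually created:

```shell
# Verify the pool-level and dataset-level properties set at creation.
# "rpool" is the pool name from the example above.
zpool get ashift rpool
zfs get compression,encryption,keyformat,relatime,canmount,mountpoint rpool

# Most -O properties remain tunable per dataset afterwards, e.g.:
# zfs set compression=off rpool/example
```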
+Those settings are all available in [zfsprops(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zfsprops.8.en.html). Other flags are
+defined in [zpool-create(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zpool-create.8.en.html). The reasoning behind them is also
+explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian wiki](https://wiki.debian.org/ZFS#Advanced_Topics).
+
+## Side note about single-disk pools
+
+Also note that we're living dangerously here: single-disk ZFS pools
+are [rumoured to be more dangerous](https://www.truenas.com/community/threads/single-drive-zfs.35515/) than not running ZFS at
+all. The choice quote from that thread is:
+
+> [...] any error can be detected, but cannot be corrected. This
+> sounds like an acceptable compromise, but its actually not. The
+> reason its not is that ZFS' metadata cannot be allowed to be
+> corrupted. If it is it is likely the zpool will be impossible to
+> mount (and will probably crash the system once the corruption is
+> found). So a couple of bad sectors in the right place will mean that
+> all data on the zpool will be lost. Not some, all. Also there's no
+> ZFS recovery tools, so you cannot recover any data on the drives.
+
+Compared with (say) ext4, where a single disk error can be recovered
+from, this is pretty bad. But we are ready to live with this, with the
+idea that we'll have hourly offline snapshots that we can easily
+recover from. It's a trade-off. Also, we're running this on an NVMe/M.2 drive

(Diff truncated)
responses
diff --git a/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment b/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment
new file mode 100644
index 00000000..44b3f443
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment
@@ -0,0 +1,40 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject=""""""
+ date="2022-05-16T14:45:50Z"
+ content="""
+> of course it is circumstantial, but facebook runs a few thousand servers with btrfs (AFAIR)
+
+i wouldn't call this "circumstantial", that's certainly a strong data point. But Facebook has a whole other scale and, if they're anything like other large SV shops, they have their own Linux kernel fork that might have improvements we don't.
+
+>  and personally I am running it since many years without a single failure.
+
+that, however, I would call "anecdotal" and something I hear a lot.... often followed with things like:
+
+> with some quirks, again, I admit - namely that healing is not automatic
+
+... which, for me, is the entire problem here. "it kind of works for me, except sometimes not" is really not what I expect from a filesystem.
+
+> Concerning subvolumes - you don't get separate disk usage simply because it is part of the the main volume (volid=5), and within there just a subdirectory. So getting disk usage from there would mean reading the whole tree (du-like -- btw, I recommend gdu which is many many times faster than du!).
+>
+> The root subvolume on fedora is not the same as volid=5 but just another subvolume. I also have root volumes for Debian/Arch/Fedora on my system (sharing /usr/local and /home volumes). That they called it root is indeed confusing.
+
+[Hacker News](https://news.ycombinator.com/item?id=31383007) helpfully reminded me that:
+
+> the author intentionally gave up early on understanding and simply
+> rants about everything that does not look or work as usual
+
+I think it's framed as criticism of my work, but I take it as a compliment. I reread the two paragraphs a few times, and they still don't make much sense to me. It just begs more questions:
+
+ 1. can we have more than one main volume?
+ 2. why was it setup with subvolumes instead of volumes?
+ 3. why isn't everything volumes?
+
+I know I sound like a newbie meeting a complex topic and giving up. But here's the thing: I've encountered (and worked in production with) at least half a dozen filesystems in my lifetime (ext2/ext3/ext4, XFS, UFS, FAT16/FAT32, NTFS, HFS, ExFAT, ZFS), and for most of those, I could use them without having to go very deep into the internals.
+
+But BTRFS gets obscure *quick*. Even going through official documentation (e.g. [BTRFS Design](https://btrfs.wiki.kernel.org/index.php/Btrfs_design)), you *start* with C structs. And somewhere down there there's this confusing diagram about the internal mechanics of the btree and how you build subvolumes and snapshots on top of that.
+
+If you want to hack on BTRFS, that's great. You can get up to speed pretty quick. But I'm not looking at BTRFS from an enthusiast, kernel developer look. I'm looking at it from a "OMG what is this" look, with very little time to deal with it. Every other filesystem architecture I've used like this so far has been able to somewhat be operational in a day or two. After spending multiple days banging my head on this problem, I felt I had to write this down, because everything seems so obtuse that I can't wrap my head around it.
+
+Anyways, thanks for the constructive feedback, it certainly clarifies things a little, but really doesn't make me want to adopt BTRFS in any significant way.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment b/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment
new file mode 100644
index 00000000..1e2bc245
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""comment 3"""
+ date="2022-05-16T14:58:23Z"
+ content="""
+> Seems worth to consider on your side, too, makes it a bit easier to deal with this (and with btrfs volumes you can have multiple dists booting) 
+
+I'm certainly considering some sort of RAID or snapshotting for my workstation now. Problem is it's a NUC so it really can't fit more disks.
+
+Considering my ... unfruitful experience with BTRFS, I probably will stay the heck away from it though, but thanks for the advice.
+
+> Working, up to date backups are a must have. 
+
+That's the understatement of the day. :p
+
+Thankfully, as I said, this machine is mostly throw-away. But because our installers are still kind of crap, it takes a while to recover it, so I am thinking RAID or offline snapshots could be useful to speed up recovery...
+"""]]

approve comment
diff --git a/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment b/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment
new file mode 100644
index 00000000..0e56bb11
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="180.94.215.40"
+ claimedauthor="Norbert"
+ subject="Some comments"
+ date="2022-05-14T01:21:55Z"
+ content="""
+Concerning stability: of course it is circumstantial, but facebook runs a few thousand servers with btrfs (AFAIR), and personally I am running it since many years without a single failure. Admittingly, raid5/6 is broken, don't touch it. raid1 works also rock solid afais (with some quirks, again, I admit - namely that healing is not automatic).
+
+Concerning subvolumes - you don't get separate disk usage simply because it is part of the the main volume (volid=5), and within there just a subdirectory. So getting disk usage from there would mean reading the whole tree (du-like -- btw, I recommend `gdu` which is many many times faster than du!).
+
+The `root` subvolume on fedora is not the same as `volid=5` but just another subvolume. I also have root volumes for Debian/Arch/Fedora on my system (sharing `/usr/local` and `/home` volumes). That they called it `root` is indeed confusing.
+
+One thing that I like a lot about btrfs is `btrfs send/receive`, it is a nice way to do incremental backups.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment b/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment
new file mode 100644
index 00000000..484f8d6b
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="88.196.217.58"
+ claimedauthor="Arti"
+ subject="Dying SSD-s"
+ date="2022-05-16T07:18:45Z"
+ content="""
+I also have experienced few SATA and NVME drives just disappearing on reboot or even during normal usage. In my experience SSD-s just stop working without any warnings. Working, up to date backups are a must have.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment b/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment
new file mode 100644
index 00000000..14e8abfe
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="180.94.215.40"
+ claimedauthor="Norbert"
+ subject="BTRFS raid?"
+ date="2022-05-14T01:12:25Z"
+ content="""
+I have seen similar things happening with some ssds, too. Since I run btrfs-raid (over 7 disks or so) it happens now and then, and usually is fixed by unplugging, plugging in a new disk, and rebalancing. Seems worth to consider on your side, too, makes it a bit easier to deal with this (and with btrfs volumes you can have multiple dists booting)
+"""]]

and another failure
diff --git a/blog/2022-05-13-nvme-disk-failure.md b/blog/2022-05-13-nvme-disk-failure.md
new file mode 100644
index 00000000..d4e4b69f
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure.md
@@ -0,0 +1,53 @@
+[[!meta title="NVMe/SSD disk failure"]]
+
+Yesterday, my workstation ([[curie|hardware/curie]]) was hung when I
+came in the office. After a "[skinny elephant](https://en.wikipedia.org/wiki/Raising_Skinny_Elephants_Is_Boring)", the box rebooted,
+but it couldn't find the primary disk (in the BIOS). Instead, it
+booted from the secondary HDD, still running an old Fedora 27
+install which somehow survived to this day, possibly because [[BTRFS
+is incomprehensible|blog/2022-05-13-btrfs-notes]].
+
+Somehow, I blindly accepted the Fedora prompt asking me to upgrade to
+Fedora 28, not realizing that:
+
+ 1. Fedora is now at release 36, not 28
+ 2. major upgrades take about an hour...
+ 3. ... and happen at boot time, blocking the entire machine (I'll
+    remember this next time I laugh at Windows and Mac OS users stuck
+    on updates on boot)
+ 4. you can't skip more than one major upgrade
+
+Which means that upgrading to latest would take over 4
+hours. Thankfully, it's mostly automated and seems to work pretty well
+(which is [not exactly the case for Debian](https://wiki.debian.org/AutomatedUpgrade)). It still seems like a
+lot of wasted time -- it would probably be better to just reinstall
+the machine at this point -- and not what I had planned to do that
+morning at all.
+
+In any case, after waiting all that time, the machine booted (in
+Fedora) again, and now it *could* detect the SSD disk. The BIOS could
+find the disk too, so after I reinstalled grub (from Fedora) and fixed
+the boot order, it rebooted, but secureboot failed, so I turned that
+off (!?), and I was back in Debian.
+
+I did an emergency backup with `ddrescue`, *from the running system*
+which probably doesn't really work as a backup (because the filesystem
+is likely to be corrupt) but it was fast enough (20 minutes) and gave
+me some peace of mind. My offsite backups have been down for a while
+and since I treat my workstations as "cattle" (not "pets"), I don't
+have a solid recovery scenario for those situations other than "just
+reinstall and run Puppet", which takes a while.
+
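For reference, the `ddrescue` run mentioned above looked roughly like this sketch; the device and image paths are invented, so adapt before running anything:

```shell
# Copy the failing disk to an image, keeping a map of bad areas.
# /dev/sdb and the /mnt/backup paths are placeholders.
ddrescue /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map

# The map file lets a later pass resume and retry the bad sectors:
ddrescue --retry-passes=3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
```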
+Now I'm wondering what the next step is: probably replace the disk
+anyways (the new one is bigger: 1TB instead of 500GB), or keep the new
+one as a hot backup somehow. Too bad I don't have a snapshotting
+filesystem on there... (Technically, I have LVM, but LVM snapshots are
+heavy and slow, and can't atomically cover the entire machine.)
+
+It's kind of scary how this thing failed: totally dropped off the bus,
+just not in the BIOS at all. I prefer the way spinning rust fails:
+clickety sounds, tons of warnings beforehand, partial recovery
+possible. With this new flashy junk, you just lose everything all at
+once. Not fun.
+
+[[!tag debian-planet debian hardware fail]]
diff --git a/hardware/curie.mdwn b/hardware/curie.mdwn
index 7ade7c51..1873ff7f 100644
--- a/hardware/curie.mdwn
+++ b/hardware/curie.mdwn
@@ -129,6 +129,10 @@ the upgrade eventually go through, but it finally did.
 The [release notes](https://downloadmirror.intel.com/29102/eng/SY_0072_ReleaseNotes.pdf) detail the updates since the previous one (v61)
 which includes a bunch of security updates, for example.
 
+## SSD disk failure
+
+See [[blog/2022-05-13-nvme-disk-failure]].
+
 ## Replacement options
 
 The CMOS battery died some time in 2021, and I'm having a hard time

publish btrfs notes
diff --git a/blog/btrfs-notes.md b/blog/2022-05-13-brtfs-notes.md
similarity index 57%
rename from blog/btrfs-notes.md
rename to blog/2022-05-13-brtfs-notes.md
index 2ec14a15..71b484a5 100644
--- a/blog/btrfs-notes.md
+++ b/blog/2022-05-13-brtfs-notes.md
@@ -2,21 +2,21 @@
 
 I'm not a fan of [BTRFS](https://btrfs.wiki.kernel.org/). This page serves as a reminder of why,
 but also a cheat sheet to figure out basic tasks in a BTRFS
-environment because those are *not* obvious when coming from any other
-filesystem environment.
+environment because those are *not* obvious to me, even after
+repeatedly having to deal with them.
 
-Trigger warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
+Content warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
 
 [[!toc]]
 
 # Stability concerns
 
-I'm a little worried about its [stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been
-historically quite flaky. RAID-5 and RAID-6 are still marked
-[unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for example, and it's kind of a lucky guess whether
-your current kernel will behave properly with your planned
-workload. For example, [with Linux 4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status) were marked as "mostly OK"
-with a note that says:
+I'm worried about [BTRFS stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been historically
+... changing. RAID-5 and RAID-6 are still marked [unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for
+example. It's kind of a lucky guess whether your current kernel will
+behave properly with your planned workload. For example, [in Linux
+4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status), RAID-1 and RAID-10 were marked as "mostly OK" with a note that
+says:
 
 > Needs to be able to create two copies always. Can get stuck in
 > irreversible read-only mode if only one copy can be made.
@@ -28,27 +28,38 @@ Even as of now, RAID-1 and RAID-10 has this note:
 > improved so the reads will spread over the mirrors evenly or based
 > on device congestion.
 
+Granted, that's not a stability concern anymore, just performance. A
+reviewer of a draft of this article actually claimed that BTRFS only
+reads from one of the drives, which hopefully is inaccurate, but goes
+to show how confusing all this is.
+
 There are [other warnings](https://wiki.debian.org/Btrfs#Other_Warnings) in the Debian wiki that are quite
-worrisome. Even if those are fixed, it can be hard to tell *when* they
-were fixed.
+scary. Even the legendary Arch wiki [has a warning on top of their
+BTRFS page, still](https://wiki.archlinux.org/title/btrfs).
+
+Even if those issues are now fixed, it can be hard to tell *when* they
+were fixed. There is a [changelog by feature](https://btrfs.wiki.kernel.org/index.php/Changelog#By_feature) but it explicitly
+warns that it doesn't know "which kernel version it is considered
+mature enough for production use", so it's also useless for this.
 
 It would have been much better if BTRFS was released into the world
-only when those bugs were being completely fixed. Even now, we get
-mixed messages even in the official BTRFS documentation which says
-"The Btrfs code base is stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same
-time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
-RAID56).
-
-There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I will
-stop here, but let's just say that I feel a little uncomfortable
+only when those bugs were being completely fixed. Or that, at least,
+features were announced when they were stable, not just "we merged to
+mainline, good luck". Even now, we get mixed messages even in the
+official BTRFS documentation which says "The Btrfs code base is
+stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same time clearly stating
+[unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently RAID56).
+
+There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I
+will stop here, but let's just say that I feel a little uncomfortable
 trusting server data with full RAID arrays to BTRFS. But surely, for a
-workstation, things should just work smoothly... Right? Let's see the
-snags I hit.
+workstation, things should just work smoothly... Right? Well, let's
+see the snags I hit.
 
 # My BTRFS test setup
 
-Before I go any further, it will probably help to clarify how I am
-testing BTRFS in the first place.
+Before I go any further, I should probably clarify how I am testing
+BTRFS in the first place.
 
 The reason I tried BTRFS is that I was ... let's just say "strongly
 encouraged" by the [LWN](https://lwn.net) editors to install [Fedora](https://getfedora.org/) for the
@@ -69,34 +80,41 @@ table looks like this:
     └─sda4                   8:4    0 922,5G  0 part  
       └─fedora_crypt       253:4    0 922,5G  0 crypt /
 
+(This might not entirely be accurate: I rebuilt this from the Debian
+side of things.)
+
 This is pretty straightforward, except for the swap partition:
 normally, I just treat swap like any other logical volume and create
 it in a logical volume. This is now just speculation, but I bet it was
 setup this way because "swap" support was only added in BTRFS 5.0.
 
-I fully expect BTRFS fans to yell at me now because this is an old
+I fully expect BTRFS experts to yell at me now because this is an old
 setup and BTRFS is so much better now, but that's exactly the point
-here. That setup is not *that* old (2018? is that old? really?), and
-migrating to a new partition scheme isn't exactly practical right
-now. But let's move on to more practical considerations.
+here. That setup is not *that* old (2018? old? really?), and migrating
+to a new partition scheme isn't exactly practical right now. But let's
+move on to more practical considerations.
 
 # No builtin encryption
 
 BTRFS aims at replacing the entire [mdadm](https://en.wikipedia.org/wiki/Mdadm), [LVM][], and [ext4](https://en.wikipedia.org/wiki/Ext4)
-stack with a single entity, alongside adding new features like
+stack with a single entity, and adding new features like
 deduplication, checksums and so on.
 
-Yet there is one feature it is critically missing: encryption. See,
-*my* stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and ext4. This
-is convenient because I have only a single volume to decrypt.
+Yet there is one feature it is critically missing: encryption. See, my
+typical stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and
+ext4. This is convenient because I have only a single volume to
+decrypt.
 
 If I were to use BTRFS on servers, I'd need to have one LUKS volume
 *per-disk*. For a simple RAID-1 array, that's not too bad: one extra
 key. But for large RAID-10 arrays, this gets really unwieldy.
 
-The obvious BTRFS alternative, ZFS, supports encryption out of the box
-and mixes it above the disks so you only have one passphrase to
-enter.
+The obvious BTRFS alternative, ZFS, [supports encryption](https://arstechnica.com/gadgets/2021/06/a-quick-start-guide-to-openzfs-native-encryption/) out of
+the box and mixes it above the disks so you only have one passphrase
+to enter. The main downside of ZFS encryption is that it happens above
+the "pool" level so you can typically see filesystem names (and
+possibly snapshots, depending on how it is built), which is not the
+case with a more traditional stack.
 
 [LVM]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
 
@@ -107,13 +125,17 @@ traditional LVM stack (which is itself kind of confusing if you're new
 to that stuff), you have those layers:
 
  * disks: let's say `/dev/nvme0n1` and `nvme1n1`
- * mdadm RAID arrays: let's say the above disks are joined in a RAID-1
-   array in `/dev/md1`
- * LVM volume groups or VG: the above RAID device (technically a
+ * RAID arrays with mdadm: let's say the above disks are joined in a
+   RAID-1 array in `/dev/md1`
+ * volume groups or VG with LVM: the above RAID device (technically a
    "physical volume" or PV) is assigned into a VG, let's call it
-   `vg_tbbuild05`
+   `vg_tbbuild05` (multiple PVs can be added to a single VG which is
+   why there is that abstraction)
  * LVM logical volumes: out of *that* volume group actually "virtual
-   partitions" or "logical volumes" are created
+   partitions" or "logical volumes" are created, that is where your
+   filesystem lives
+ * filesystem, typically with ext4: that's your normal filesystem,
+   which treats the logical volume as just another block device
 
 A typical server setup would look like this:
 
@@ -130,18 +152,17 @@ A typical server setup would look like this:
     │     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
     └─nvme0n1p4               259:4    0     1M  0 part
 
-
 I stripped the other `nvme1n1` disk because it's basically the same.
 
-Now, if we look at my workstation, which doesn't even have RAID, we
-have the following:
+Now, if we look at my BTRFS-enabled workstation, which doesn't even
+have RAID, we have the following:
 
  * disk: `/dev/sda` with, again, `/dev/sda4` being where BTRFS lives
  * filesystem: `fedora_crypt`, which is, confusingly, kind of like a
    volume group. it's where everything lives. i think.
  * subvolumes: `home`, `root`, `/`, etc. those are actually the things
    that get mounted. you'd think you'd mount a filesystem, but no, you
-   mount a subvolume
+   mount a subvolume. that is backwards.
 
 It looks something like this to `lsblk`:
 
@@ -189,8 +210,17 @@ This is *really* confusing. I don't even know if I understand this
 right, and I've been staring at this all afternoon. Hopefully, the
 lazyweb will correct me eventually.
 
-So at least I can refer to this section in the future, the next time I
-fumble around the `btrfs` commandline.
+(As an aside, why are they called "subvolumes"? If something is a
+"[sub](https://en.wiktionary.org/wiki/sub#Latin)" of "something else", that "something else" must exist
+right? But no, BTRFS doesn't have "volumes", it only has
+"subvolumes". Go figure. Presumably the filesystem still holds "files"
+though, at least empirically it doesn't seem like it lost anything so

(Diff truncated)
expand
diff --git a/blog/btrfs-notes.md b/blog/btrfs-notes.md
index 98cdb177..2ec14a15 100644
--- a/blog/btrfs-notes.md
+++ b/blog/btrfs-notes.md
@@ -39,7 +39,7 @@ mixed messages even in the official BTRFS documentation which says
 time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
 RAID56).
 
-There are much harsher BTRFS critics than me [out there](https://2.5admins.com/) so I will
+There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I will
 stop here, but let's just say that I feel a little uncomfortable
 trusting server data with full RAID arrays to BTRFS. But surely, for a
 workstation, things should just work smoothly... Right? Let's see the
@@ -280,6 +280,55 @@ how much disk each volume (and snapshot) takes:
 
 That's 56360 times faster.
 
+But yes, that's not fair: those in the know will know there's a
+*different* command to do what `df` does with BTRFS filesystems, the
+`btrfs filesystem usage` command:
+
+    root@curie:/home/anarcat# time btrfs filesystem usage /srv
+    Overall:
+        Device size:		 922.47GiB
+        Device allocated:		 916.47GiB
+        Device unallocated:		   6.00GiB
+        Device missing:		     0.00B
+        Used:			 884.97GiB
+        Free (estimated):		  30.84GiB	(min: 27.84GiB)
+        Free (statfs, df):		  30.84GiB
+        Data ratio:			      1.00
+        Metadata ratio:		      2.00
+        Global reserve:		 512.00MiB	(used: 0.00B)
+        Multiple profiles:		        no
+
+    Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
+       /dev/mapper/fedora_crypt	 906.45GiB
+
+    Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
+       /dev/mapper/fedora_crypt	  10.00GiB
+
+    System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
+       /dev/mapper/fedora_crypt	  16.00MiB
+
+    Unallocated:
+       /dev/mapper/fedora_crypt	   6.00GiB
+
+    real	0m0,004s
+    user	0m0,000s
+    sys	0m0,004s
+
+Almost as fast as ZFS's df! Good job. But wait. That doesn't actually
+tell me usage per *subvolume*. Notice it's `filesystem usage`, not
+`subvolume usage`, which unhelpfully refuses to exist. That command
+only shows that one "filesystem"'s internal statistics, which are
+pretty opaque. You can also appreciate it's wasting 6GB of "unallocated"
+disk space there: I probably did something Very Wrong and should be
+punished by Hacker News. I also wonder why it has 1.68GB of "metadata"
+used...
+
+At this point, I just really want to throw that thing out of the
+window and restart from scratch. I don't really feel like learning the
+BTRFS internals, as they seem oblique and completely bizarre to me. I
+bet that ZFS would do wonders here, and I'd get 8GB (or more?)
+back. Who knows.
+
 # Conclusion
 
 I find BTRFS utterly confusing and I'm worried about its
@@ -288,13 +337,27 @@ and coherence before I even consider running this anywhere else than a
 lab, and that's really too bad, because there are really nice features
 in BTRFS that would greatly help my workflow.
 
-Right now, I'm stuck with OpenZFS, which currently involves building
-kernel modules from scratch on every host. I'm hoping some day the
-copyright issues are resolved and we can at least ship binary
-packages, but the politics (e.g. convincing Debian that is the right
-thing to do, good luck) and the logistics (e.g. DKMS auto-builders? is
-that even a thing? how about signed DKMS packages? fun fun fun!) seem
-really impractical.
+Right now, I'm experimenting with OpenZFS. It's so much simpler, and
+just works, and it's rock solid. After [this 10 minute read](https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/), I had
+a good understanding of how ZFS worked: a `vdev` is kind of like a
+RAID array, a `vpool` is a volume group, and you create datasets
+(filesystems, like logical volumes + ext) underneath. In fact, that's
+probably all you need to know, unless you want to start optimizing
+more obscure things like [recordsize](https://klarasystems.com/articles/tuning-recordsize-in-openzfs/).
+
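That mental model (vdev, then pool, then datasets) can even be tried out without real hardware, using file-backed vdevs. This is only a sketch (names invented; it needs root and the ZFS modules loaded):

```shell
# Two sparse files stand in for disks.
truncate -s 1G /tmp/disk1 /tmp/disk2

# The mirror of the two "disks" is a vdev; the pool is built on top of it.
zpool create testpool mirror /tmp/disk1 /tmp/disk2

# Datasets are created inside the pool and mounted automatically.
zfs create testpool/data
zfs list -r testpool

# Tear it all down.
zpool destroy testpool
```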
+Running ZFS on Linux currently involves building kernel modules
+from scratch on every host. But I was able to setup a ZFS-only server
+using [this excellent documentation](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/) without too much problem.
+
+I'm hoping some day the copyright issues are resolved and we can at
+least ship binary packages, but the politics (e.g. convincing Debian
+that is the right thing to do, good luck) and the logistics (e.g. DKMS
+auto-builders? is that even a thing? how about signed DKMS packages?
+fun fun fun!) seem really impractical. Who knows, maybe hell will
+freeze over ([again](https://blogs.gnome.org/uraeus/2022/05/11/why-is-the-open-source-driver-release-from-nvidia-so-important-for-linux/)) and Oracle will fix the CDDL. I personally
+think that we should just completely ignore this problem and ship
+binary packages, but I'm a pragmatist and do not always fit well with
+the free software fundamentalists.
 
 Which means that, short term, we don't have a reliable, advanced
 filesystem in Linux. And that's really too bad.

btrfs notes
diff --git a/blog/btrfs-notes.md b/blog/btrfs-notes.md
new file mode 100644
index 00000000..98cdb177
--- /dev/null
+++ b/blog/btrfs-notes.md
@@ -0,0 +1,302 @@
+[[!meta title="BTRFS notes"]]
+
+I'm not a fan of [BTRFS](https://btrfs.wiki.kernel.org/). This page serves as a reminder of why,
+but also a cheat sheet to figure out basic tasks in a BTRFS
+environment because those are *not* obvious when coming from any other
+filesystem environment.
+
+Trigger warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
+
+[[!toc]]
+
+# Stability concerns
+
+I'm a little worried about its [stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been
+historically quite flaky. RAID-5 and RAID-6 are still marked
+[unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for example, and it's kind of a lucky guess whether
+your current kernel will behave properly with your planned
+workload. For example, [with Linux 4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status) were marked as "mostly OK"
+with a note that says:
+
+> Needs to be able to create two copies always. Can get stuck in
+> irreversible read-only mode if only one copy can be made.
+
+Even as of now, RAID-1 and RAID-10 has this note:
+
+> The simple redundancy RAID levels utilize different mirrors in a way
+> that does not achieve the maximum performance. The logic can be
+> improved so the reads will spread over the mirrors evenly or based
+> on device congestion.
+
+There are [other warnings](https://wiki.debian.org/Btrfs#Other_Warnings) in the Debian wiki that are quite
+worrisome. Even if those are fixed, it can be hard to tell *when* they
+were fixed.
+
+It would have been much better if BTRFS was released into the world
+only when those bugs were being completely fixed. Even now, we get
+mixed messages even in the official BTRFS documentation which says
+"The Btrfs code base is stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same
+time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
+RAID56).
+
+There are much harsher BTRFS critics than me [out there](https://2.5admins.com/) so I will
+stop here, but let's just say that I feel a little uncomfortable
+trusting server data with full RAID arrays to BTRFS. But surely, for a
+workstation, things should just work smoothly... Right? Let's see the
+snags I hit.
+
+# My BTRFS test setup
+
+Before I go any further, it will probably help to clarify how I am
+testing BTRFS in the first place.
+
+The reason I tried BTRFS is that I was ... let's just say "strongly
+encouraged" by the [LWN](https://lwn.net) editors to install [Fedora](https://getfedora.org/) for the
+[[terminal emulators series|blog/2018-04-12-terminal-emulators-1]].
+That, in turn, meant the setup was done with BTRFS, because that was
+somewhat the default in Fedora 27 (or did I want to experiment? I
+don't remember, it's been too long already).
+
+So Fedora was set up on my 1TB HDD and, with encryption, the
+partition table looks like this:
+
+    NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
+    sda                      8:0    0 931,5G  0 disk  
+    ├─sda1                   8:1    0   200M  0 part  /boot/efi
+    ├─sda2                   8:2    0     1G  0 part  /boot
+    ├─sda3                   8:3    0   7,8G  0 part  
+    │ └─fedora_swap        253:5    0   7.8G  0 crypt [SWAP]
+    └─sda4                   8:4    0 922,5G  0 part  
+      └─fedora_crypt       253:4    0 922,5G  0 crypt /
+
+This is pretty straightforward, except for the swap partition:
+normally, I just treat swap like any other filesystem and create it
+as a logical volume. This is just speculation, but I bet it was set
+up this way because swap file support was only added to BTRFS in
+Linux 5.0.
+
+I fully expect BTRFS fans to yell at me now because this is an old
+setup and BTRFS is so much better now, but that's exactly the point
+here. That setup is not *that* old (2018? is that old? really?), and
+migrating to a new partition scheme isn't exactly practical right
+now. But let's move on to more practical considerations.
+
+# No builtin encryption
+
+BTRFS aims at replacing the entire [mdadm](https://en.wikipedia.org/wiki/Mdadm), [LVM][], and [ext4](https://en.wikipedia.org/wiki/Ext4)
+stack with a single entity, alongside adding new features like
+deduplication, checksums and so on.
+
+Yet there is one feature it is critically missing: encryption. See,
+*my* stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and ext4. This
+is convenient because I have only a single volume to decrypt.
+
+If I were to use BTRFS on servers, I'd need to have one LUKS volume
+*per-disk*. For a simple RAID-1 array, that's not too bad: one extra
+key. But for large RAID-10 arrays, this gets really unwieldy.
+
+The obvious BTRFS alternative, ZFS, supports encryption out of the
+box and layers it above the disks, so you only have one passphrase to
+enter.
+
+[LVM]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
+
+# Subvolumes, filesystems, and devices
+
+I find BTRFS's architecture to be utterly confusing. In the
+traditional LVM stack (which is itself kind of confusing if you're new
+to that stuff), you have those layers:
+
+ * disks: let's say `/dev/nvme0n1` and `nvme1n1`
+ * mdadm RAID arrays: let's say the above disks are joined in a RAID-1
+   array in `/dev/md1`
+ * LVM volume groups or VG: the above RAID device (technically a
+   "physical volume" or PV) is assigned into a VG, let's call it
+   `vg_tbbuild05`
+ * LVM logical volumes: out of *that* volume group, "virtual
+   partitions" or "logical volumes" are actually created
+
+A typical server setup would look like this:
+
+    NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
+    nvme0n1                   259:0    0   1.7T  0 disk  
+    ├─nvme0n1p1               259:1    0     8M  0 part  
+    ├─nvme0n1p2               259:2    0   512M  0 part  
+    │ └─md0                     9:0    0   511M  0 raid1 /boot
+    ├─nvme0n1p3               259:3    0   1.7T  0 part  
+    │ └─md1                     9:1    0   1.7T  0 raid1 
+    │   └─crypt_dev_md1       253:0    0   1.7T  0 crypt 
+    │     ├─vg_tbbuild05-root 253:1    0    30G  0 lvm   /
+    │     ├─vg_tbbuild05-swap 253:2    0 125.7G  0 lvm   [SWAP]
+    │     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
+    └─nvme0n1p4               259:4    0     1M  0 part
+
+
+I stripped the other `nvme1n1` disk because it's basically the same.
+
+Now, if we look at my workstation, which doesn't even have RAID, we
+have the following:
+
+ * disk: `/dev/sda` with, again, `/dev/sda4` being where BTRFS lives
+ * filesystem: `fedora_crypt`, which is, confusingly, kind of like a
+   volume group. It's where everything lives. I think.
+ * subvolumes: `home`, `root`, `/`, etc. Those are actually the
+   things that get mounted. You'd think you'd mount a filesystem, but
+   no, you mount a subvolume.
+
+It looks something like this to `lsblk`:
+
+    NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
+    sda                      8:0    0 931,5G  0 disk  
+    ├─sda1                   8:1    0   200M  0 part  /boot/efi
+    ├─sda2                   8:2    0     1G  0 part  /boot
+    ├─sda3                   8:3    0   7,8G  0 part  [SWAP]
+    └─sda4                   8:4    0 922,5G  0 part  
+      └─fedora_crypt       253:4    0 922,5G  0 crypt /srv
+
+Notice how we don't see all the BTRFS volumes here? Maybe it's because
+I'm mounting this from the Debian side, but `lsblk` definitely gets
+confused here. I frankly don't quite understand what's going on, even
+after repeatedly looking around the [rather dismal
+documentation](https://btrfs.readthedocs.io/en/latest/). But that's what I gather from the following
+commands:
+
+    root@curie:/home/anarcat# btrfs filesystem show
+    Label: 'fedora'  uuid: 5abb9def-c725-44ef-a45e-d72657803f37
+    	Total devices 1 FS bytes used 883.29GiB
+    	devid    1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt
+
+    root@curie:/home/anarcat# btrfs subvolume list /srv
+    ID 257 gen 108092 top level 5 path home
+    ID 258 gen 108094 top level 5 path root
+    ID 263 gen 108020 top level 258 path root/var/lib/machines
+
+I only got to that point through trial and error. Notice how I use an
+existing mountpoint to list the related subvolumes. If I try to use
+the filesystem path, the one that's listed in `filesystem show`, I
+fail:
+
+    root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt 
+    ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
+    ERROR: can't access '/dev/mapper/fedora_crypt'
+
+Maybe I just need to use the label? Nope:
+
+    root@curie:/home/anarcat# btrfs subvolume list fedora
+    ERROR: cannot access 'fedora': No such file or directory
+    ERROR: can't access 'fedora'
+
+This is *really* confusing. I don't even know if I understand this
+right, and I've been staring at this all afternoon. Hopefully, the
+lazyweb will correct me eventually.
+
+So at least I can refer to this section in the future, the next time I
+fumble around the `btrfs` commandline.
+

(Diff truncated)
approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_bf38323e3eb278f95fea5e291a90178d._comment b/blog/2022-04-27-sbuild-qemu/comment_1_bf38323e3eb278f95fea5e291a90178d._comment
new file mode 100644
index 00000000..97719090
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_bf38323e3eb278f95fea5e291a90178d._comment
@@ -0,0 +1,32 @@
+[[!comment format=mdwn
+ ip="84.114.211.250"
+ claimedauthor="Christian Kastner"
+ subject="Faster bootup"
+ date="2022-05-08T16:57:09Z"
+ content="""
+>Do say more about this! I would love to get faster bootup, that's the main pain point right now. It does feel like runtime performance impact is negligible (but I'd love to improve on that too), but startup time definitely feels slow.
+
+Well, on my local amd64 system, a full boot to a console takes about 9s, 5-6s of which are spent in GRUB, loading the kernel and initramfs, and so on.
+
+If one doesn't need the GRUB menu, I guess 1s could be shaved off by setting GRUB_TIMEOUT=0 in /usr/share/sbuild/sbuild-qemu-create-modscript, then rebuilding a VM.
+
+It seems that the most time-consuming step is loading the initramfs and the initial boot, and I while I haven't looked into it yet, I feel like this could also be optimized. Minimal initramfs, minimal hardware, etc.
+
+>Are you familiar with Qemu's microvm platform? How would we experiment with stuff like that in the sbuild-qemu context?
+
+I've stumbled over it about a year ago, but didn't get it to run -- I think I had an older QEMU environment. With 1.7 now in bullseye-backports, I need to give it a try again soon.
+
+However, as far as I understand it, microvm only works for a very limited x86_64 environment. In other words, this would provide only the isolation features of a VM, but not points (1) and (2) of my earlier comment.
+
+Not that I'm against that, on the contrary, I'd still like to add that as a \"fast\" option.
+
+firecracker-vm (Rust, Apache 2.0, maintained by Amazon on GitHub) also provides a microvm-like solution. Haven't tried it yet, though.
+
+> How do I turn on host=guest?
+
+That should happen automatically through autopkgtest. sbuild-qemu calls sbuild, sbuild bridges with autopkgtest-virt-qemu, autopkgtest-virt-qemu has the host=guest detection built in.
+
+It's odd that there's nothing in the logs indicating whether this is happening (not even with --verbose or --debug), but a simple test is: if the build is dog-slow, as in 10-15x slower than native, it's without KVM :-)
+
+Note that in order to use KVM, the building user must be in the 'kvm' group.
+"""]]
diff --git a/blog/2022-05-06-wallabako-1.4.0-released/comment_1_50d37bb98c62941cbab992e827d478d6._comment b/blog/2022-05-06-wallabako-1.4.0-released/comment_1_50d37bb98c62941cbab992e827d478d6._comment
new file mode 100644
index 00000000..f9643455
--- /dev/null
+++ b/blog/2022-05-06-wallabako-1.4.0-released/comment_1_50d37bb98c62941cbab992e827d478d6._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="91.54.59.28"
+ subject="comment 1"
+ date="2022-05-07T05:17:34Z"
+ content="""
+Thank you for the update, I've encountered wallabako some month back and really like it. 
+
+My motivation and setup for using it is similar to yours. However, I'm using wallabako as a cli on OpenBSD and save epubs to a syncthing share that is connected with my pocketbook reader.
+
+Keep going.
+"""]]

small edits
diff --git a/blog/2022-05-06-wallabako-1.4.0-released.md b/blog/2022-05-06-wallabako-1.4.0-released.md
index a3d759aa..59325f01 100644
--- a/blog/2022-05-06-wallabako-1.4.0-released.md
+++ b/blog/2022-05-06-wallabako-1.4.0-released.md
@@ -16,9 +16,9 @@ readers probably don't even know I sometimes meddle with in
 
 # What's Wallabako
 
-Wallabako is a weird little program I designed to read articles on my
-E-book reader. I use it to spend less time on the computer: I save
-articles in a read-it-later app named [Wallabag](https://wallabag.org/) (hosted by a
+[Wallabako](https://gitlab.com/anarcat/wallabako/) is a weird little program I designed to read articles
+on my E-book reader. I use it to spend less time on the computer: I
+save articles in a read-it-later app named [Wallabag](https://wallabag.org/) (hosted by a
 generous friend), and then Wallabako connects to that app, downloads
 an EPUB version of the book, and then I can read it on the device
 directly.
@@ -33,6 +33,12 @@ interface (called "Nickel"), [Koreader](https://koreader.rocks/) and [Plato](htt
 use Koreader for everything nowadays, but it should work equally well
 on the others.
 
+Wallabako is actually set up to be started by `udev` when the kernel
+detects a connection change, which is kind of a gross hack. It's
+clunky, but it actually works; I thought for a while about switching
+to something else, but this is really the easiest way to go, and the
+one that requires the least interaction from the user.
+
 # Why I'm (still) using it
 
 I wrote Wallabako because I read a *lot* of articles on the
@@ -158,19 +164,18 @@ that thing to the ebook reader at every code iteration.
 I had originally thought I should add some sort of graphical interface
 in Koreader for Wallabako as well, and had [requested that feature
 upstream](https://github.com/koreader/koreader/issues/2621). Unfortunately (or fortunately?), they took my idea and
-just *ran* with it. Some courageous soul actually [wrote a full Wallabag plugin for
-koreader][] which makes implementing koreader support in Wallabako a
-much less pressing issue.
+just *ran* with it. Some courageous soul actually [wrote a full
+Wallabag plugin for koreader][], in Lua of course.
 
 Compared to the Wallabako implementation however, the koreader plugin
-is much slower, as it downloads articles serially instead of
-concurrently. It is, however, much more usable as the user is given a
-visible feedback of the various steps. I still had to enable full
-debugging to diagnose a problem (which was that I shouldn't have a
-trailing slash, and that some special characters don't work in
-passwords). It's also better to write the config file with a normal
-text editor, over SSH or with the Kobo mounted to your computer
-instead of typing those really long strings over the kobo.
+is much slower, probably because it downloads articles serially
+instead of concurrently. It is, however, much more usable as the user
+is given visible feedback of the various steps. I still had to
+enable full debugging to diagnose a problem (which was that I
+shouldn't have a trailing slash, and that some special characters
+don't work in passwords). It's also better to write the config file
+with a normal text editor, over SSH or with the Kobo mounted to your
+computer instead of typing those really long strings over the kobo.
 
 There's [no sample config file][] which makes that harder but a
 workaround is to save the configuration with dummy values and fix them
@@ -180,7 +185,7 @@ loss][] (Wallabag article being deleted!) for an unsuspecting user...
 
 [lead to data loss]: https://github.com/koreader/koreader/issues/8936
 [no sample config file]: https://github.com/koreader/koreader/issues/7576
-[wrote a fullWallabag plugin for koreader]: https://github.com/koreader/koreader/pull/4271 
+[wrote a full Wallabag plugin for koreader]: https://github.com/koreader/koreader/pull/4271 
 
 So basically, I started working on Wallabag again because the koreader
 implementation of their Wallabag client was not up to spec for me. It

wallabako release
diff --git a/blog/2022-05-06-wallabako-1.4.0-released.md b/blog/2022-05-06-wallabako-1.4.0-released.md
new file mode 100644
index 00000000..a3d759aa
--- /dev/null
+++ b/blog/2022-05-06-wallabako-1.4.0-released.md
@@ -0,0 +1,255 @@
+[[!meta title="Wallabako 1.4.0 released"]]
+
+I don't particularly like it when people announce their personal
+projects on their blog, but I'm making an exception for this one,
+because it's a little special for me.
+
+You see, I have just released [Wallabako 1.4.0](https://gitlab.com/anarcat/wallabako/-/tags/1.4.0) (and a quick,
+mostly irrelevant [1.4.1 hotfix](https://gitlab.com/anarcat/wallabako/-/tags/1.4.1)) today. It's the first release of
+that project in almost 3 years (the previous was [1.3.1](https://gitlab.com/anarcat/wallabako/-/tags/1.3.1), before
+the pandemic).
+
+The other reason I figured I would mention it is that I have almost
+*never* talked about Wallabako on this blog at all, so many of my
+readers probably don't even know I sometimes meddle in
+[Golang](https://go.dev/) which surprises even me sometimes.
+
+# What's Wallabako
+
+Wallabako is a weird little program I designed to read articles on my
+E-book reader. I use it to spend less time on the computer: I save
+articles in a read-it-later app named [Wallabag](https://wallabag.org/) (hosted by a
+generous friend), and then Wallabako connects to that app, downloads
+an EPUB version of the book, and then I can read it on the device
+directly.
+
+When I'm done reading the book, Wallabako notices and sets the article
+as read in Wallabag. I also set it to delete the book locally, but you
+can actually configure to keep those books around forever if you feel
+like it.
+
+Wallabako supports syncing read status with the built-in Kobo
+interface (called "Nickel"), [Koreader](https://koreader.rocks/) and [Plato](https://github.com/baskerville/plato/). I happen to
+use Koreader for everything nowadays, but it should work equally well
+on the others.
+
+# Why I'm (still) using it
+
+I wrote Wallabako because I read a *lot* of articles on the
+internet. It's actually *most* of my readings. I read about 10 books a
+year (which I don't think is much), but I probably read more in terms
+of time and pages in Wallabag. I haven't actually done the math, but
+I estimate I spend at least twice as much time reading articles as I
+spend reading books.
+
+If I didn't have Wallabag, I would have hundreds of tabs open in my
+web browser all the time. So at least that problem is easily solved:
+throw everything in Wallabag, sort and read later.
+
+If I didn't have Wallabako, however, I would either spend that time
+reading on the computer -- which I prefer to spend working on free
+software or work -- or on my phone -- which is kind of better, but
+really cramped.
+
+I had actually stopped using (and developing) Wallabako for a
+while. Around 2019, I got tired of always reading those technical
+articles (basically work stuff!) at home. I realized I was just not
+"reading" (as in books! fiction! fun stuff!) anymore, at least not as
+much as I wanted.
+
+So I tried to make this separation: the ebook reader is for cool book
+stuff. The rest is work. But because I had the Wallabag Android app on
+my phone and tablet, I could still read those articles there, which I
+thought was pretty neat. But that meant that I was constantly looking
+at my phone, which is something I'm generally trying to avoid, as it
+sets a bad example for the kids (small and big) around me.
+
+Then I realized there was one stray ebook reader lying around at
+home. I had recently [[bought a Kobo Clara
+HD|hardware/tablet/kobo-clara-hd]] to read books, and I like that
+device. And it's going to stay locked down to reading books. But
+there's still that old battered Kobo Glo HD reader lying around, and
+I figured I could just borrow it to read Wallabag articles.
+
+# What is this new release
+
+But oh boy that was a lot of work. Wallabako was kind of a mess: it
+was using the deprecated [go dep](https://github.com/golang/dep) tool, which lost the battle with
+[go mod](https://go.dev/ref/mod). Cross-compilation was broken for older devices, and I had
+to implement support for Koreader.
+
+## go mod
+
+So I had to learn `go mod`. I'm still not sure I got that part right:
+LSP is yelling at me because it can't find the imports, and I'm
+generally just "[YOLO everything][]" every time I get anywhere close
+to it. That's not the way to do Go, in general, and not how I like to
+do it either.
+
+[YOLO everything]: https://en.wikipedia.org/wiki/YOLO_(aphorism)
+
+But I guess that, given time, I'll figure it out and make it work for
+me. It certainly works now. I think.
+
+## Cross compilation
+
+The hard part was different. You see, Nickel uses [SQLite](https://www.sqlite.org/) to store
+metadata about books, so Wallabako actually needs to tap into that
+SQLite database to propagate read status. Originally, I just linked
+against some [sqlite3 library](https://github.com/mattn/go-sqlite3) I found lying around. It's basically
+a wrapper around the C-based SQLite and generally works fine. But that
+means you actually link your Golang program against a C library. And
+that's when things get a little nutty.
+
+If you just built Wallabako naively, it would [fail when deployed
+on the Kobo Glo HD](https://gitlab.com/anarcat/wallabako/-/issues/43). That's because the device runs a really old
+kernel: the prehistoric `Linux kobo 2.6.35.3-850-gbc67621+ #2049
+PREEMPT Mon Jan 9 13:33:11 CST 2017 armv7l GNU/Linux`. That was built
+in 2017, but the kernel was actually [released in 2010](https://kernelnewbies.org/Linux_2_6_35), a whole *5
+years* before the [Glo HD was released, in 2015](https://wiki.mobileread.com/wiki/Kobo_Glo_HD) which is kind of
+outrageous. And yes, that is with the [latest firmware release](https://wiki.mobileread.com/wiki/Kobo_Firmware_Releases).
+
+My bet is they just don't upgrade the kernel on those things, as the
+Glo was probably bought around 2017...
+
+In any case, the problem is we are cross-compiling here. And Golang is
+pretty good about cross-compiling, but because we have C in there,
+we're actually cross-compiling with "CGO" which is really just Golang
+with a GCC backend. And that's much, much harder to figure out because
+you need to pass down flags into GCC and so on. It was a nightmare.
+
+That's until I found this outrageous "little" project called
+[modernc.org/sqlite](https://modernc.org/sqlite). What that thing does (with a hefty dose of
+dependencies that would make any Debian developer recoil in horror) is
+to *transpile* the SQLite C source code to Golang. You read that
+right: it rewrites SQLite in Go. On the fly. It's nuts.
+
+But it works. And you end up with a "pure go" program, and that thing
+compiles much faster and runs fine on older kernels.
+
+I still wasn't sure I wanted to just stick with that forever, so I
+kept the old sqlite3 code around, behind a compile-time tag. At the
+top of the `nickel_modernc.go` file, there's this magic string:
+
+    // +build !sqlite3
+
+And at the top of `nickel_sqlite3.go` file, there's this magic string:
+
+    // +build sqlite3
+
+So now, by default, the `modernc` file gets included, but if I pass
+`--tags sqlite3` to the Go compiler (to `go install` or whatever), it
+will actually switch to the other implementation. Pretty neat stuff.
+
+## Koreader port
+
+The last part was something I was hesitant to do for a long time,
+but it turned out to be pretty easy. I have basically switched to
+using Koreader to read everything. Books, PDF, everything goes through
+it. I really like that it stores its metadata in sidecar files: I
+synchronize all my books with [Syncthing](https://syncthing.net/) which means I can carry
+my read status, annotations and all that stuff without having to think
+about it. (And yes, I installed Syncthing on my Kobo.)
+
+The [koreader.go port](https://gitlab.com/anarcat/wallabako/-/blob/8cce90771fbef9f8089a1f0569c184c6aa67f8d0/koreader.go) was less than 80 lines, and I could even
+make a nice little [test suite](https://gitlab.com/anarcat/wallabako/-/blob/8cce90771fbef9f8089a1f0569c184c6aa67f8d0/koreader_test.go) so that I don't have to redeploy
+that thing to the ebook reader at every code iteration.
+
+I had originally thought I should add some sort of graphical interface
+in Koreader for Wallabako as well, and had [requested that feature
+upstream](https://github.com/koreader/koreader/issues/2621). Unfortunately (or fortunately?), they took my idea and
+just *ran* with it. Some courageous soul actually [wrote a full Wallabag plugin for
+koreader][] which makes implementing koreader support in Wallabako a
+much less pressing issue.
+
+Compared to the Wallabako implementation however, the koreader plugin
+is much slower, as it downloads articles serially instead of
+concurrently. It is, however, much more usable, as the user is given
+visible feedback of the various steps. I still had to enable full
+debugging to diagnose a problem (which was that I shouldn't have a
+trailing slash, and that some special characters don't work in
+passwords). It's also better to write the config file with a normal
+text editor, over SSH or with the Kobo mounted to your computer
+instead of typing those really long strings over the kobo.
+
+There's [no sample config file][] which makes that harder but a
+workaround is to save the configuration with dummy values and fix them
+up after. Finally I also found the default setting ("Remotely delete
+finished articles") really dangerous as it can basically [lead to data
+loss][] (Wallabag article being deleted!) for an unsuspecting user...
+
+[lead to data loss]: https://github.com/koreader/koreader/issues/8936
+[no sample config file]: https://github.com/koreader/koreader/issues/7576
+[wrote a full Wallabag plugin for koreader]: https://github.com/koreader/koreader/pull/4271
+
+So basically, I started working on Wallabag again because the koreader
+implementation of their Wallabag client was not up to spec for me. It
+might be good enough for you, but I guess if you like Wallabako, you
+should thank the koreader folks for their sloppy implementation, as
+I'm now working again on Wallabako.
+
+# Actual release notes
+
+Those are the actual [release notes for 1.4.0](https://gitlab.com/anarcat/wallabako/-/tags/1.4.0).
+

(Diff truncated)
response
diff --git a/blog/2022-04-27-sbuild-qemu/comment_7_4f11b05e9bb491c60920bf280c5076d9._comment b/blog/2022-04-27-sbuild-qemu/comment_7_4f11b05e9bb491c60920bf280c5076d9._comment
new file mode 100644
index 00000000..080d1407
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_7_4f11b05e9bb491c60920bf280c5076d9._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""optimizing qemu"""
+ date="2022-05-06T13:08:12Z"
+ content="""
+> If host=guest arch and KVM is enabled, the emulation overhead is negligible, and I guess the boot process could be sped up with a trick or two. Most of the time is spent in the BIOS resp. UEFI environment.
+
+Do say more about this! I would love to get faster bootup, that's the main pain point right now. It does feel like runtime performance impact is negligible (but I'd love to improve on that too), but startup time definitely feels slow.
+
+Are you familiar with Qemu's [microvm platform](https://qemu.readthedocs.io/en/latest/system/i386/microvm.html)? How would we experiment with stuff like that in the sbuild-qemu context? How do I turn on `host=guest`?
+
+Thanks for the feedback!
+"""]]

approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_ab0513b031389c5991892d5c1d2f256b._comment b/blog/2022-04-27-sbuild-qemu/comment_1_ab0513b031389c5991892d5c1d2f256b._comment
new file mode 100644
index 00000000..5e003ebe
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_ab0513b031389c5991892d5c1d2f256b._comment
@@ -0,0 +1,22 @@
+[[!comment format=mdwn
+ ip="84.114.211.250"
+ claimedauthor="Christian Kastner"
+ subject="Why a VM"
+ date="2022-05-04T14:57:19Z"
+ content="""
+> Why not a container?
+
+> You can obtain the same level of security/isolation, with 1/100 of the effort.
+
+Containers can have a good level of security/isolation, but VM isolation is still stronger.
+
+In any case, two clear advantages that VMs have over containers are that (1) one can test entire systems, which (2) may also have a foreign architecture.  There are also a number of other minor advantages to using QEMU of course, e.g. snapshotting.
+
+For example, I maintain the keyutils package, and in the process have discovered architecture-specific bugs in the kernel, for architectures I don't have physical access to. I needed to run custom kernels to debug these, and I can't do that with containers or on porterboxes.
+
+As a co-maintainer of scikit-learn, I've also discovered a number of upstream issues in scikit-learn and numpy for the architectures that upstreams can't/don't really test in CI (e.g. 32-bit ARM). I've run into all kinds of issues with porterboxes (e.g.: not enough space), which I don't have with a local image.
+
+So in my case, there's no way around sbuild-qemu and autopkgtest-virt-qemu anyway. And (echoing anarcat here) on the plus side: KVM + QEMU just feel much cleaner for isolation.
+
+If host=guest arch and KVM is enabled, the emulation overhead is negligible, and I guess the boot process could be sped up with a trick or two. Most of the time is spent in the BIOS resp. UEFI environment.
+"""]]

response
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_1c87c913f1eedf922415b7e9e61d82ab._comment b/blog/2022-04-27-sbuild-qemu/comment_1_1c87c913f1eedf922415b7e9e61d82ab._comment
new file mode 100644
index 00000000..953c6bf9
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_1c87c913f1eedf922415b7e9e61d82ab._comment
@@ -0,0 +1,41 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""Re: why a VM"""
+ date="2022-05-03T13:19:54Z"
+ content="""
+> My main point was in terms of effort.
+
+But what effort do you see in maintaining a VM image exactly?
+
+> Playing with croups, namespaces, and the selinux beast is surely haphazard, but nobody does it manually. 
+>
+> The typical use case are the CI/CD pipelines, where the developer just choose the base image to use.
+
+(I also use Docker to run GitLab CI pipelines, for the record, but that's not the topic here.)
+
+See, that's the big lie of containers. I could also just "choose a base image" for a VM, say from Vagrant boxes or whatever. I don't do that, because I like to know where the heck my stuff comes from. I pull it from official Debian mirrors, so I know exactly what's in there.
+
+When you pull from Docker hub, you have a somewhat dubious trace of where those images come from. We (Debian members) maintain a few official images, but I find their build process to be personally quite [convoluted and confusing](https://github.com/debuerreotype/debuerreotype).
+
+> It's just few lines of JSON or yaml and we are done.
+
+See, the funny thing right there is I don't even know what you're talking about here, and I've been deploying containers for years. Are you referring to a Docker compose YAML file? Or a Kubernetes manifest? Or the container image metadata? Are you actually writing *that* by hand!?
+
+That doesn't show me how you set up security in your containers, nor the guarantees it offers. Do you run the containers as root? Do you enable user namespaces? selinux or apparmor? what kind of seccomp profile will that build need?
+
+Those are *all* questions I'd need to have answered if I'd want any sort of isolation even *remotely* close to what a VM offers.
+
+> There's no need to update the system, configure services, keep an eye on the resources, etc.
+
+You still need to update the image. I don't need to configure services or keep an eye on resources in my model either.
+
+> Again, how you do it is amazing and clean, but surely not a scalable solution 
+
+It seems you're trying to sell me on the idea that containers are great and scale better than VMs for the general case, in an article where I specifically advise users to use a VM for the *specific* case of providing *stronger* isolation for untrusted builds. I don't need to scale those builds to thousands of builds a day (but I *will* note that the Debian buildd's have been doing this for a long time without containers).
+
+> I personally use a mix of different solutions based on the customer needs, often it's podman or Docker, but not always. 
+
+Same, it's not one size fits all. The topic here is building Debian packages, and I find qemu to be a great fit.
+
+I dislike containers, but it's sometimes the best tool for the job. Just not in this case.
+"""]]

approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_cdc867100dbd3c6f5a8466e8146e825f._comment b/blog/2022-04-27-sbuild-qemu/comment_1_cdc867100dbd3c6f5a8466e8146e825f._comment
new file mode 100644
index 00000000..35140737
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_cdc867100dbd3c6f5a8466e8146e825f._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ ip="213.55.225.140"
+ claimedauthor="Antenore"
+ subject="Re: why a VM"
+ date="2022-05-02T19:13:06Z"
+ content="""
+Thanks for your answer.
+In general yes, a VM is more secured as the system resources are separated, in that sense you're are right. My main point was in terms of effort.
+Playing with croups, namespaces, and the selinux beast is surely haphazard, but nobody does it manually. 
+The typical use case are the CI/CD pipelines, where the developer just choose the base image to use. It's just few lines of JSON or yaml and we are done.
+There's no need to update the system, configure services, keep an eye on the resources, etc.
+
+Again, how you do it is amazing and clean, but surely not a scalable solution 
+
+I personally use a mix of different solutions based on the customer needs, often it's podman or Docker, but not always. 
+
+"""]]

follow directory rename
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 16a3cc73..aeed33e3 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -114,7 +114,7 @@ tired of them...
    down". this can be disabled with `:unbind <C-f>`. also see the
    [builtin Firefox shortcuts][] and the `pentadactyl` entry in the
    XULocalypse section below. [Krabby](https://krabby.netlify.com/), another of those
-   implementations, has an [interesting list of alternatives](https://github.com/alexherbo2/krabby/blob/master/doc/alternatives.md).
+   implementations, has an [interesting list of alternatives](https://github.com/alexherbo2/krabby/blob/c525cf13962f72f4810fdc8f8032e6d9001308ea/docs/alternatives.md).
 
 ## Previously used
 

fix typo in mta article, thanks nick black
diff --git a/blog/2020-04-14-opendkim-debian.mdwn b/blog/2020-04-14-opendkim-debian.mdwn
index fb6faefb..f3647c11 100644
--- a/blog/2020-04-14-opendkim-debian.mdwn
+++ b/blog/2020-04-14-opendkim-debian.mdwn
@@ -74,7 +74,7 @@ If one of those is missing, then you are doing something wrong and
 your "spamminess" score will be worse. The latter is especially tricky
 as it validates the "Envelope From", which is the `MAIL FROM:` header
 as sent by the originating MTA, which you see as `from=<>` in the
-postfix lost.
+postfix logs.
     
 The following will happen anyways, as soon as you have a signature,
 that's normal:

response
diff --git a/blog/2022-04-27-sbuild-qemu/comment_3_1fad8e5625f8f744a57f8a455101deb6._comment b/blog/2022-04-27-sbuild-qemu/comment_3_1fad8e5625f8f744a57f8a455101deb6._comment
new file mode 100644
index 00000000..b3147179
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_3_1fad8e5625f8f744a57f8a455101deb6._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""why a VM"""
+ date="2022-05-02T18:03:35Z"
+ content="""
+> Why not a container?
+
+A "container" doesn't actually exist in the Linux kernel. It's a hodge-podge collection of haphazard security measures that are really hard to get right. Some do, most don't.
+
+Besides, which container are you referring to? I know of `unshare`, LXC, LXD, Docker, podman... it can mean so many things that it actually loses its meaning.
+
+I find Qemu + KVM to be much cleaner, and yes, it does provide a much stronger security isolation than a container.
+"""]]

approve comment
diff --git a/blog/2022-04-27-lsp-in-debian/comment_1_c41ae2a49d701b2bdbdb747bf92e241d._comment b/blog/2022-04-27-lsp-in-debian/comment_1_c41ae2a49d701b2bdbdb747bf92e241d._comment
new file mode 100644
index 00000000..2d8c37b2
--- /dev/null
+++ b/blog/2022-04-27-lsp-in-debian/comment_1_c41ae2a49d701b2bdbdb747bf92e241d._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="188.27.129.132"
+ claimedauthor="Thomas Koch"
+ url="https://blog.koch.ro"
+ subject="thx, elpa-lsp-haskell"
+ date="2022-04-29T11:45:32Z"
+ content="""
+Thank you for featuring the emacs lsp setup. My goal is to have a usable Haskell development environment in Debian only one `apt install` away.
+
+Unfortunately the *elpa-lsp-haskell* package is not the Haskell language server but only a small emacs package to connect to it, see this RFP instead:
+
+[#968373](https://bugs.debian.org/968373) RFP: hls+ghcide -- Haskell Development Environment and Language Server
+"""]]
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_281d153f1739942af7d5b25ceb680c69._comment b/blog/2022-04-27-sbuild-qemu/comment_1_281d153f1739942af7d5b25ceb680c69._comment
new file mode 100644
index 00000000..615ebf35
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_281d153f1739942af7d5b25ceb680c69._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="213.55.240.88"
+ claimedauthor="Antenore"
+ subject="Isn't too much a full VM? "
+ date="2022-04-29T14:38:59Z"
+ content="""
+Why not a container?
+
+While the article is fascinating and useful, I find it overwhelming setting up all of this to just build a package.
+
+You can obtain the same level of security/isolation, with 1/100 of the effort.
+
+Am I wrong?
+"""]]

fix links
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index ed3705d7..1d7fe18e 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -812,6 +812,7 @@ complicated and considered out of scope of this tutorial.
 [vagrant rsync]: https://docs.vagrantup.com/v2/synced-folders/rsync.html
 """]]
 
+[Vagrant]: https://www.vagrantup.com/
 [other provisionning tools]: https://www.vagrantup.com/docs/provisioning/
 [Puppet]: https://www.vagrantup.com/docs/provisioning/puppet_apply.html
 [Ansible]: https://www.vagrantup.com/docs/provisioning/ansible.html
@@ -846,7 +847,7 @@ Another simple approach is to use plain [Qemu][]. We will need to use a
 special tool to create the virtual machine as debootstrap only creates
 a chroot, which virtual machines do not necessarily understand. 
 
-[Qemu][]: https://www.qemu.org/
+[Qemu]: https://www.qemu.org/
 
 [[!tip """
 With `sbuild-qemu`, above, you already have a qemu image, built with

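The packaging guide reviewed in the next entry describes how `dget` delegates `.dsc` verification to `dscverify`, which checks the OpenPGP signature and then the per-file checksums of the downloaded tarballs. As a rough illustration of the checksum half only, here is a minimal Python sketch (hypothetical helper names, simplified stanza parsing; real tools use the `python-debian` deb822 parser and also verify the signature, which is skipped here):

```python
import hashlib

def parse_sha256_stanza(dsc_text):
    """Extract {filename: (sha256_hex, size)} from a Checksums-Sha256
    stanza. Simplified for illustration: real code should use the
    python-debian library's deb822 parser instead."""
    files = {}
    in_stanza = False
    for line in dsc_text.splitlines():
        if line.startswith("Checksums-Sha256:"):
            in_stanza = True
        elif in_stanza and line.startswith(" "):
            # continuation lines look like: " <digest> <size> <filename>"
            digest, size, name = line.split()
            files[name] = (digest, int(size))
        else:
            in_stanza = False
    return files

def file_matches(path, digest, size):
    """Check a downloaded file against its declared checksum and size."""
    with open(path, "rb") as f:
        data = f.read()
    return len(data) == size and hashlib.sha256(data).hexdigest() == digest
```

This is only the integrity check; the end-to-end trust comes from the signature on the `.dsc` itself, which `dscverify` validates against the Debian keyrings.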
major editorial review of the debian packaging guide
I did a full reread and corrections. Phew!
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 5b4042dc..ed3705d7 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -1,10 +1,13 @@
 [[!meta title="Quick Debian development guide"]]
 
-[[!toc levels=3]]
-
 [[!note "This guide is also available under the URL
 <https://deb.li/quickdev> and as a video presentation
-<https://www.youtube.com/watch?v=O83rIRRJysA>."]]
+<https://www.youtube.com/watch?v=O83rIRRJysA>. This is a living,
+changing document, although the video, obviously, isn't."]]
+
+[[!toc levels=3]]
+
+# Introduction
 
 This guide aims to kickstart people working on existing Debian
 packages, either to backport software, patch existing packages or work
@@ -35,6 +38,8 @@ may find useful when looking for more information.
 [Debian policy]: https://www.debian.org/doc/debian-policy/
 [developer's manual suite]: https://www.debian.org/doc/devel-manuals
 
+# Minimal packaging workflow
+
 This guide tries to take an opinionated approach to maintaining
 Debian packages. It doesn't try to cover all cases, doesn't try to
 teach you about [debhelper][], [cdbs][], [uscan][] or [make][]. It
@@ -75,12 +80,13 @@ comfortable working on.[^lazy]
     regardless of the version control used. Furthermore, some packages
     do not use version control at all!
 
-To get the source code on an arbitrary package, visit the
-[package tracker][].[^tracker] In this case, we look at the
-[Calibre package tracker page][] and find the download links for the
-release we're interested in. Since we are doing a backport, we use the
-`testing` download link. If you are looking for an antique package,
-you can also find download links on [archive.debian.net][].
+To get the source code on an arbitrary package, visit the [package
+tracker][].[^tracker] In this case, we look at the [Calibre package
+tracker page][] and find the download links for the release we're
+interested in. Since we are doing a backport, we use the `testing`
+download link. If you are looking for a package not in a
+distribution, or an antique package, you can also find download links on
+[snapshot.debian.org][] and [archive.debian.net][].
 
 <span/><div class="tip">It's also helpful to use [rmadison][], part of
 the [devscripts package][], to look at the various versions available
@@ -113,6 +119,7 @@ To get the Ubuntu results, I added the following line to my
 [devscripts package]: https://tracker.debian.org/devscripts
 [rmadison]: https://manpages.debian.org/rmadison
 [archive.debian.net]: https://archive.debian.net/
+[snapshot.debian.org]: https://snapshot.debian.org/
 
 What we are looking for is the [calibre_2.55.0+dfsg-1.dsc][] file, the
 "source description" file for the `2.55.0+dfsg-1` version that is
@@ -176,11 +183,11 @@ all the patches specific to Debian.
 Then dget downloads the files `.orig.tar.xz` and `.debian.tar.xz`
 files.
 
-[^dfsg]: Well, not exactly: in this case, it's a modification the
-upstream source code, prepared specifically to remove non-free
-software, hence the `+dfsg` suffix, which is an acronym for
-[Debian Free Software Guidelines][]. The `+dfsg` is simply a naming
-convention used to designated such modified tarballs.
+[^dfsg]: Well, not exactly: in this case, it's a modification of the
+    upstream source code, prepared specifically to remove non-free
+    software, hence the `+dfsg` suffix, which is an acronym for
+    [Debian Free Software Guidelines][]. The `+dfsg` is simply a
+    naming convention used to designate such modified tarballs.
 
 [Debian Free Software Guidelines]: https://www.debian.org/social_contract#guidelines
 
@@ -189,12 +196,13 @@ web of trust,[^openpgp] using [dscverify][]. The `.dsc` files includes
 checksums for the downloaded files, and those checksums are verified
 as well.
 
-Then the files are extracted using `dpkg-source -x`. Notice how `dget`
-is basically just a shortcut to commands you could all have ran by
-hand. This is something useful to keep in mind to understand how this
-process works.
+Then the files are extracted using `dpkg-source -x`. 
+
+Notice how `dget` is just a shortcut to commands you could all have
+run by hand.
 
 [dscverify]: https://manpages.debian.org/dscverify
+
 [^openpgp]: In my case, this works cleanly, but that is only because
     the key is known on my system. `dget` actually offloads that work
     to `dscverify` which looks into the official keyrings in the
@@ -212,14 +220,18 @@ process works.
 
 [debian-keyring package]: https://packages.debian.org/debian-keyring
 
-If the version control system the package uses is familiar to you,
-you *can* use [debcheckout][] to checkout the source directly. If you
-are comfortable with many revision control systems, this may be better
-for you in general. However, keep in mind that it does not ensure
-end-to-end cryptographic integrity like the previous procedure
-does. It *will* be useful, however, if you want to review the source
-code history of the package to figure out where things come from.
+If the version control system the package uses is familiar to you, you
+*can* use [debcheckout][] to checkout the source directly. However,
+keep in mind that it does not ensure end-to-end cryptographic
+integrity like the previous procedure does, and instead relies on
+HTTPS-level transport security. 
+
+It might be useful if you prefer to collaborate with GitLab merge
+requests over at [salsa.debian.org][], but be warned that not all
+maintainers watch their GitLab projects as closely as the bug tracking
+system.
 
+[salsa.debian.org]: https://salsa.debian.org/
 [debcheckout]: https://manpages.debian.org/debcheckout
 
 # Modifying the package
@@ -263,12 +275,12 @@ changelog and what bugs are being fixed. Here's the result:
      -- Antoine Beaupré <anarcat@debian.org>  Tue, 26 Apr 2016 16:49:56 -0400
 
 [^tilde]: The "tilde" character to indicate that this is a *lower*
-version than the version I am backporting from, so that when the users
-eventually upgrade to the next stable (`stretch`, in this case), they
-will actually upgrade to the real version in stretch, and not keep the
-backport lying around. There is a
-[detailed algorithm description of how version number are compared][],
-which you can test using `dpkg --compare-versions` if you are unsure.
+    version than the version I am backporting from, so that when the
+    users eventually upgrade to the next stable (`stretch`, in this
+    case), they will actually upgrade to the real version in stretch,
+    and not keep the backport lying around. There is a [detailed
+    algorithm description of how version number are compared][], which
+    you can test using `dpkg --compare-versions` if you are unsure.
 
 [detailed algorithm description of how version number are compared]: https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
 
@@ -281,7 +293,7 @@ Note that there are other options you can pass to `dch`. I often use:
 
 There are more described in the [dch][] manpage. The
 [managing packages section][] of the developer's reference is useful
-in crafting those specific packages.
+in crafting packages specific to your situation.
 
 [managing packages section]: https://www.debian.org/doc/manuals/developers-reference/pkgs.html
 [dch]: https://manpages.debian.org/dch
@@ -374,9 +386,10 @@ it. The generic command to build a Debian package is
 files should all show up in the parent directory.
 
 [^changes]: the `.changes` file is similar to the `.dsc` file, but
-    also covers the `.deb` file. So it's the `.dsc` file for the
-    binary package.
-[^debuild]: I also often use `debuild` instead of `dpkg-buildpackage`
+    also covers the `.deb` file. So it's kind of the `.dsc` file for
+    the binary package (except there's also a `.changes` file for
+    source-only uploads, so not really).
+[^debuild]: You can also use `debuild` instead of `dpkg-buildpackage`
     because it also runs [lintian][] and signs the binary package with
     [debsign][].
 
@@ -388,29 +401,31 @@ If you are building from a VCS (e.g. git) checkout, you will get a lot
 of garbage in your source package. To avoid this, you need to use a
 tool specifically crafted for your VCS. I use [git-buildpackage][] (or
 `gbp` in short) for that purpose, but other also use the simpler
-[git-pkg][]. I find that `gbp` has more error checking but it is more
-complicated and less intuitive if you actually know what you are
-doing, which wasn't my case when I started.[^gitpkg]
+[git-pkg][]. I find that `gbp` has more error checking and a better
+workflow, but there are many opinions on how to do this.[^gitpkg]
+There are also *many* other ways of packaging Debian packages in Git,
+including [dgit][] and [git-dpm][], so until Debian standardizes on
+*one* of those, this guide will remain git-agnostic.
 </div>
 
+[git-dpm]: https://tracker.debian.org/pkg/git-dpm
+[dgit]: https://tracker.debian.org/pkg/dgit
 [git-pkg]: https://manpages.debian.org/git-pkg
 [git-buildpackage]: https://manpages.debian.org/git-buildpackage
 [^gitpkg]: git-pkg actually only extracts a source package from your
     git tree, and nothing else. There are hooks to trigger builds and
     so on, but it's basically expected that you do that yourself, and
-    gitpkg is just there to clean things up for your. git-buildpackage
-    does way more stuff, which can be confusing for people more
-    familiar with the Debian toolchain.
+    gitpkg is just there to clean things up for you.
 
-In any case, there's a catch here. The catch is that you need all the
-build-dependencies for the above builds to succeed. You may not have
-all of those, so you can try to install them with:
+In any case, there's a catch here: you need all the build-dependencies
+for the above builds to succeed. You may not have all of those, so you
+can try to install them with:
 
     sudo mk-build-deps -i -r calibre
 
-But this installs a lot of cruft on your system! `mk-build-deps` makes
-a dummy package to wrap them all up together, so they are easy to
-uninstall.
+`mk-build-deps` makes a dummy package to wrap them all up together, so

(Diff truncated)
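The tilde-version footnote reworked in the diff above leans on dpkg's version ordering, in which `~` sorts before anything, including the end of the string. A rough Python sketch of that character ordering (a simplified illustration: epochs and the upstream/revision split that the real `dpkg --compare-versions` handles are omitted):

```python
def _order(c):
    # dpkg sorts '~' before everything, and letters before other symbols
    if c == "~":
        return -1
    if c.isalpha():
        return ord(c)
    return ord(c) + 256

def verrevcmp(a, b):
    """Compare two version strings the way dpkg does (simplified)."""
    i = j = 0
    while i < len(a) or j < len(b):
        # compare the non-digit parts character by character;
        # a missing character counts as 0, so '~' (-1) sorts below it
        while (i < len(a) and not a[i].isdigit()) or \
              (j < len(b) and not b[j].isdigit()):
            ac = _order(a[i]) if i < len(a) else 0
            bc = _order(b[j]) if j < len(b) else 0
            if ac != bc:
                return ac - bc
            i += 1
            j += 1
        # then compare the digit runs numerically: strip leading zeros,
        # remember the first differing digit, longer run wins
        while i < len(a) and a[i] == "0":
            i += 1
        while j < len(b) and b[j] == "0":
            j += 1
        first_diff = 0
        while i < len(a) and a[i].isdigit() and j < len(b) and b[j].isdigit():
            if not first_diff:
                first_diff = ord(a[i]) - ord(b[j])
            i += 1
            j += 1
        if i < len(a) and a[i].isdigit():
            return 1
        if j < len(b) and b[j].isdigit():
            return -1
        if first_diff:
            return first_diff
    return 0
```

With this, `verrevcmp("2.55.0+dfsg-1~bpo8+1", "2.55.0+dfsg-1")` is negative: the backport sorts below the version it was built from, which is exactly what the footnote's tilde convention is for.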
move more discussion to the blog post
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index 548c5496..b5cd6290 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -310,6 +310,34 @@ right place... right? See also [[services/hosting]].
 [Qemu]: http://qemu.org/
 [chroot]: https://manpages.debian.org/chroot
 
+## pbuilder vs sbuild
+
+I was previously using `pbuilder` and switched in 2017 to `sbuild`.
+[AskUbuntu.com has a good comparative between pbuilder and sbuild][]
+that shows they are pretty similar. The big advantage of sbuild is
+that it is the tool in use on the buildds and it's written in Perl
+instead of shell.
+
+My concerns about switching were POLA (I'm used to pbuilder), the fact
+that pbuilder runs as a separate user (works with sbuild as well now,
+if the `_apt` user is present), and setting up COW semantics in sbuild
+(can't just plug cowbuilder there, need to configure overlayfs or
+aufs, which was non-trivial in Debian jessie).
+
+Ubuntu folks, again, have [more][] [documentation][] there. Debian
+also has [extensive documentation][], especially about [how to
+configure overlays][].
+
+I was ultimately convinced by [stapelberg's post on the topic][] which
+shows how much simpler sbuild really is...
+
+[stapelberg's post on the topic]: https://people.debian.org/~stapelberg/2016/11/25/build-tools.html
+[how to configure overlays]: https://wiki.debian.org/sbuild#sbuild_overlays_in_tmpfs
+[extensive documentation]: https://wiki.debian.org/sbuild
+[documentation]: https://wiki.ubuntu.com/SimpleSbuild
+[more]: https://wiki.ubuntu.com/SecurityTeam/BuildEnvironment
+[AskUbuntu.com has a good comparative between pbuilder and sbuild]: http://askubuntu.com/questions/53014/why-use-sbuild-over-pbuilder
+
 # Who
 
 Thanks lavamind for the introduction to the `sbuild-qemu` package.
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 9535bd02..5b4042dc 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -624,29 +624,6 @@ you often need `-sa` to provide the source tarball with the upload),
 you should use `--debbuildopts -sa` in `sbuild`. For git-buildpackage,
 simply add `-sa` to the commandline.
 
-[[!note """
-I was previously using `pbuilder` and switched in 2017 to `sbuild`. [AskUbuntu.com has a good comparative between pbuilder and sbuild][]
-that shows they are pretty similar. The big advantage of sbuild is
-that it is the tool in use on the buildds and it's written in Perl
-instead of shell. My concerns about switching were POLA (I'm used to
-pbuilder), the fact that pbuilder runs as a separate user (works with
-sbuild as well now, if the `_apt` user is present), and setting up COW
-semantics in sbuild (can't just plug cowbuilder there, need to
-configure overlayfs or aufs, which is non-trivial in jessie with
-backports...). Ubuntu folks, again, have [more][] [documentation][]
-there. Debian also has [extensive documentation][], especially about
-[how to configure overlays][]. I was convinced by
-[stapelberg's post on the topic][] which shows how simpler sbuild
-really is...
-
-[stapelberg's post on the topic]: https://people.debian.org/~stapelberg/2016/11/25/build-tools.html
-[how to configure overlays]: https://wiki.debian.org/sbuild#sbuild_overlays_in_tmpfs
-[extensive documentation]: https://wiki.debian.org/sbuild
-[documentation]: https://wiki.ubuntu.com/SimpleSbuild
-[more]: https://wiki.ubuntu.com/SecurityTeam/BuildEnvironment
-[AskUbuntu.com has a good comparative between pbuilder and sbuild]: http://askubuntu.com/questions/53014/why-use-sbuild-over-pbuilder
-"""]]
-
 <a name="offloading-cowpoke-and-debomatic" />
 
 ## Build servers
@@ -974,4 +951,7 @@ duplicate of [this other guide][].
 [this other guide]: https://wiki.debian.org/BuildingTutorial
 [this guide]: https://wiki.debian.org/BuildingAPackage
 
+A [[blog post|blog/2022-04-27-sbuild-qemu]] goes into depth about the
+alternatives to `qemu` and `sbuild`.
+
 [[!tag debian-planet debian debian-lts blog python-planet software geek free]]

rip out more todo work of the tutorial, into the blog post
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index ecf7d6fb..548c5496 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -235,6 +235,43 @@ found [libguestfs][] to be useful to operate on virtual images in
 various ways. [Libvirt][] and [Vagrant][] are also useful wrappers on
 top of the above systems.
 
+There are particularly a lot of different tools which use Docker,
+Virtual machines or some sort of isolation stronger than chroot to
+build packages. Here are some of the alternatives I am aware of:
+
+ * [Whalebuilder][] - Docker builder
+ * [conbuilder][] - "container" builder
+ * [debspawn][] - system-nspawn builder
+ * [docker-buildpackage][] - Docker builder
+ * [qemubuilder][] - qemu builder
+ * [qemu-sbuild-utils][] - qemu + sbuild + autopkgtest
+
+Take, for example, [Whalebuilder][], which uses Docker to build
+packages instead of `pbuilder` or `sbuild`. Docker provides more
+isolation than a simple `chroot`: in `whalebuilder`, packages are
+built without network access and inside a virtualized
+environment. Keep in mind there are limitations to Docker's security
+and that `pbuilder` and `sbuild` *do* build under a different user
+which will limit the security issues with building untrusted
+packages.
+
+On the upside, some of those things are being fixed: `whalebuilder` is now
+an official Debian package ([[!debpkg whalebuilder]]) and has added
+the feature of [passing custom arguments to dpkg-buildpackage][].
+
+None of those solutions (except the `autopkgtest`/`qemu` backend) are
+implemented as a [sbuild plugin][], which would greatly reduce their
+complexity.
+
+[conbuilder]: https://salsa.debian.org/federico/conbuilder
+[debspawn]: https://github.com/lkorigin/debspawn
+[docker-buildpackage]: https://github.com/metux/docker-buildpackage
+[passing custom arguments to dpkg-buildpackage]: https://gitlab.com/uhoreg/whalebuilder/issues/4
+[qemubuilder]: https://wiki.debian.org/qemubuilder
+[sbuild plugin]: https://lists.debian.org/debian-devel/2018/08/msg00005.html
+[whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
+[qemu-sbuild-utils]: https://www.kvr.at/posts/qemu-sbuild-utils-01-sbuild-with-qemu/
+
 I was previously using [Qemu][] directly to run virtual machines, and
 had to create VMs by hand with various tools. This didn't work so well
 so I switched to using Vagrant as a de-facto standard to build
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 562f3eae..9535bd02 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -967,40 +967,6 @@ host your own Debian repository using [reprepro][] (Koumbit has some
 
 # Further work and remaining issues
 
-I am curious about other build environments which use Docker, Virtual
-machines or some sort of stronger isolation to build packages. Here
-are the alternatives I am aware of:
-
- * [Whalebuilder][] - Docker builder
- * [conbuilder][] - "container" builder
- * [debspawn][] - system-nspawn builder
- * [docker-buildpackage][] - Docker builder
- * [qemubuilder][] - qemu builder
- * [qemu-sbuild-utils][] - qemu + sbuild + autopkgtest
-
-Take, for example, [Whalebuilder][], which uses Docker to build
-packages instead of `pbuilder` or `sbuild`. Docker provides more
-isolation than a simple `chroot`: in `whalebuilder`, packages are
-built without network access and inside a virtualized
-environment. Keep in mind there are limitations to Docker's security
-and that `pbuilder` and `sbuild` *do* build under a different user
-which will limit the security issues with building untrusted
-packages. Furthermore, `whalebuilder` <del>is not currently packaged
-as an official Debian package</del> (it is now, see [[!debpkg
-whalebuilder]]) and lacks certain features, like [passing custom
-arguments to dpkg-buildpackage][] (update: fixed), so I don't feel it is quite ready
-yet. None of those solutions are implemented as a [sbuild plugin][],
-which would greatly reduce their complexity.
-
-[conbuilder]: https://salsa.debian.org/federico/conbuilder
-[debspawn]: https://github.com/lkorigin/debspawn
-[docker-buildpackage]: https://github.com/metux/docker-buildpackage
-[passing custom arguments to dpkg-buildpackage]: https://gitlab.com/uhoreg/whalebuilder/issues/4
-[qemubuilder]: https://wiki.debian.org/qemubuilder
-[sbuild plugin]: https://lists.debian.org/debian-devel/2018/08/msg00005.html
-[whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
-[qemu-sbuild-utils]: https://www.kvr.at/posts/qemu-sbuild-utils-01-sbuild-with-qemu/
-
 This guide should be integrated into the official documentation or the
 Debian wiki. It is eerily similar to [this guide][] which itself is a
 duplicate of [this other guide][].

fix tocs
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index a713bb79..ecf7d6fb 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -8,7 +8,7 @@ guide|software/debian-development]], I had a few pointers on how to
 configure sbuild with the normal `schroot` setup, but today I finished
 a [qemu](http://www.qemu.org/) based configuration.
 
-[[!toc]]
+[[!toc levels=3]]
 
 # Why
 
@@ -171,6 +171,8 @@ you feel like it.
 
 # Nitty-gritty details no one cares about
 
+## Fixing hang in sbuild cleanup
+
 I'm having a hard time making heads or tails of this, but please bear
 with me.
 
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 44cd1dda..562f3eae 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="Quick Debian development guide"]]
 
-[[!toc levels=2]]
+[[!toc levels=3]]
 
 [[!note "This guide is also available under the URL
 <https://deb.li/quickdev> and as a video presentation

reshuffle the quick debian devel guide
It was getting quite unwieldy, with
cowbuilder/pbuilder/sbuild/schroot/qemu instructions all over the
place. Now we clearly outline the schroot/qemu instructions
separately, and we've taken the VM disgression out into the blog post.
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index 3a8e2396..a713bb79 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -225,6 +225,52 @@ For some reason, before I added this line to my configuration:
 ... the "Cleanup" step would just completely hang. It was quite
 bizarre.
 
+## Digression on the diversity of VM-like things
+
+There are a *lot* of different virtualization solutions one can use
+(e.g. [Xen][], [KVM][], [Docker][] or [Virtualbox][]). I have also
+found [libguestfs][] to be useful to operate on virtual images in
+various ways. [Libvirt][] and [Vagrant][] are also useful wrappers on
+top of the above systems.
+
+I was previously using [Qemu][] directly to run virtual machines, and
+had to create VMs by hand with various tools. This didn't work so well
+so I switched to using Vagrant as a de-facto standard to build
+development environment machines, but I'm returning to Qemu because it
+uses a similar backend as KVM and can be used to host longer-running
+virtual machines through libvirt.
+
+The great thing now is that `autopkgtest` has good support for `qemu`
+*and* `sbuild` has bridged the gap and can use it as a build
+backend. I originally had found those bugs in that setup, but *all* of
+them are now fixed:
+
+ * [#911977](https://bugs.debian.org/911977): sbuild: how do we correctly guess the VM name in autopkgtest?
+ * [#911979](https://bugs.debian.org/911979): sbuild: fails on chown in autopkgtest-qemu backend
+ * [#911963](https://bugs.debian.org/911963): autopkgtest qemu build fails with proxy_cmd: parameter not set
+ * [#911981](https://bugs.debian.org/911981): autopkgtest: qemu server warns about missing CPU features
+
+So we have unification! It's possible to run your virtual machines
+*and* Debian builds using a single VM image backend storage, which is
+no small feat, in my humble opinion. See the [sbuild-qemu blog post
+for the announcement](https://www.kvr.at/posts/qemu-sbuild-utils-merged-into-sbuild/).
+
+Now I just need to figure out how to merge Vagrant, GNOME Boxes, and
+libvirt together, which should be a matter of placing images in the
+right place... right? See also [[services/hosting]].
+
+[Vagrant]: https://www.vagrantup.com/
+[Virtualbox]: https://en.wikipedia.org/wiki/Virtualbox
+[libguestfs]: https://en.wikipedia.org/wiki/Libguestfs
+[Libvirt]: https://en.wikipedia.org/wiki/Libvirt
+[Docker]: https://en.wikipedia.org/wiki/Docker_(software)
+[Xen]: https://en.wikipedia.org/wiki/Xen
+[HVM]: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
+[KVM]: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
+[QCOW]: https://en.wikipedia.org/wiki/Qcow
+[Qemu]: http://qemu.org/
+[chroot]: https://manpages.debian.org/chroot
+
 # Who
 
 Thanks lavamind for the introduction to the `sbuild-qemu` package.
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 8d5c7ce0..44cd1dda 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -433,6 +433,8 @@ a clean, temporary `chroot`. To create that `.dsc` file, you can use
 `dpkg-buildpackage -S` or simply call `sbuild` in the source directory
 which will create it for you.
 
+### schroot instructions
+
 To use sbuild, you first need to configure an image:
 
     sudo sbuild-createchroot --include=eatmydata,gnupg unstable /srv/chroot/unstable-amd64-sbuild http://deb.debian.org/debian
@@ -458,9 +460,52 @@ This assumes that:
     this). to create a tarball image, use this:
     
         sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/unstable-amd64-sbuild.tar.gz unstable --chroot-prefix unstable-tar `mktemp -d` http://deb.debian.org/debian
+
+You can also use `qemu` instead; see below.
 """]]
 
-[[!note """
+The above will create chroots for all the main suites and two
+architectures, using [debootstrap][]. You may of course modify this to
+taste based on your requirements and available disk space. My build
+directories add up to around 7GB (including ~3GB of cached `.deb`
+packages) and each chroot is between 500MB and 700MB.
+
+[debootstrap]: https://manpages.debian.org/debootstrap
+
+[[!tip """
+A few handy sbuild-related commands:
+
+ * `sbuild -c bookworm-amd64-sbuild` - build in the `bookworm` chroot even
+   though another suite is specified (e.g. `UNRELEASED`,
+   `bookworm-backports` or `bookworm-security`)
+
+ * `sbuild --build-dep-resolver=aptitude` - use another solver for
+   dependencies, required for backports, for example. See the manpage
+   for details of those solvers.
+
+ * `schroot -c bookworm-amd64-sbuild` - enter the `bookworm` chroot to run
+   tests; changes will be discarded
+
+ * `sbuild-shell bookworm` - enter the `bookworm` chroot to make
+   *permanent* changes, which will *not* be discarded
+
+ * `sbuild-destroychroot` - supposedly destroys schroots created by
+   sbuild for later rebuilding, but I have found that command to be
+   quite unreliable. Besides, all it does is:
+
+        rm -rf /srv/chroot/unstable-amd64-sbuild /etc/schroot/chroot.d/unstable-amd64-sbuild-*
+
+Also note that it is useful to add aliases to your `schroot`
+configuration files. This allows you, for example, to automatically
+build `bookworm-security` or `bookworm-backports` packages in the `bookworm`
+schroot. Just add this line to the relevant config in
+`/etc/schroot/chroot.d/`:
+
+    aliases=bookworm-security-amd64-sbuild,bookworm-backports-amd64-sbuild
+"""]]
+
+### Qemu configuration
+
 To use qemu, use this instead:
 
     sudo mkdir -p /srv/sbuild/qemu/
@@ -500,15 +545,51 @@ something like this:
 
 Also see a more in-depth discussion about this configuration in [[this
 blog post|blog/2022-04-27-sbuild-qemu]].
-"""]]
 
-The above will create chroots for all the main suites and two
-architectures, using [debootstrap][]. You may of course modify this to
-taste based on your requirements and available disk space. My build
-directories count for around 7GB (including ~3GB of cached `.deb`
-packages) and each chroot is between 500MB and 700MB.
+[[!tip """
+A few handy `qemu` related commands:
 
-[debootstrap]: https://manpages.debian.org/debootstrap
+ * enter the VM to run tests; changes will be discarded (thanks Nick
+   Brown for the `sbuild-qemu-boot` tip!):
+ 
+        sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
+
+   That program ships only with bookworm and later; an equivalent
+   command is:
+
+        qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+   The key argument here is `-snapshot`.
+
+ * enter the VM to make *permanent* changes, which will *not* be
+   discarded:
+
+        sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
+
+   Equivalent command:
+
+        sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+ * update the VM (thanks lavamind):
+ 
+        sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
+
+ * build in a specific VM regardless of the suite specified in the
+   changelog (e.g. `UNRELEASED`, `bookworm-backports`,
+   `bookworm-security`, etc):
+
+        sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   Note that you'd also need to pass `--autopkgtest-opts` if you want
+   `autopkgtest` to run in the correct VM as well:
+
+        sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   You might also need parameters like `--ram-size` if you customized
+   it above.
+"""]]
+
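As an aside, the `-snapshot` behaviour used above can be emulated with an explicit qcow2 overlay, which is handy when you want to inspect or keep the throwaway state afterwards. A sketch, assuming the image path used above:

```shell
# create a copy-on-write overlay backed by the pristine base image
qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
# boot the overlay: all writes land in overlay.img, the base stays untouched
qemu-system-x86_64 -enable-kvm -m 2048 -nographic overlay.img
# throw the session away (or keep overlay.img around for inspection)
rm overlay.img
```

This is essentially what `autopkgtest` does internally when it provisions its VMs.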
+### Building with sbuild
 
 Then I build packages in one of three ways.
 
@@ -543,71 +624,6 @@ you often need `-sa` to provide the source tarball with the upload),
 you should use `--debbuildopts -sa` in `sbuild`. For git-buildpackage,
 simply add `-sa` to the commandline.
 
-[[!tip """
-A few handy sbuild-related commands:
-
- * `sbuild -c wheezy-amd64-sbuild` - build in the `wheezy` chroot even
-   though another suite is specified (e.g. `UNRElEASED`,
-   `wheezy-backports` or `wheezy-security`)
-
- * `sbuild --build-dep-resolver=aptitude` - use another solver for
-   dependencies, required for backports, for example. see the manpage
-   for details of those solvers.
-
- * `schroot -c wheezy-amd64-sbuild` - enter the `wheezy` chroot to make
-   tests, changes will be discarded

(diff file truncated)
setext/atx
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 1af799b4..8d5c7ce0 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -62,8 +62,7 @@ diagram:
 [cdbs]: https://manpages.debian.org/cdbs
 [debhelper]: https://manpages.debian.org/debhelper
 
-Find the source
-===============
+# Find the source
 
 In the following, I take the example of building a backport of the
 Calibre package, which I [needed][]. It's a good example because it
@@ -223,8 +222,7 @@ code history of the package to figure out where things come from.
 
 [debcheckout]: https://manpages.debian.org/debcheckout
 
-Modifying the package
-=====================
+# Modifying the package
 
 At this point, we have a shiny source tree available in the
 `calibre-2.55.0+dfsg/` directory:
@@ -233,8 +231,7 @@ At this point, we have a shiny source tree available in the
 
 We can start looking around and make some changes.
 
-Changing version
-----------------
+## Changing version
 
 The first thing we want to make sure we do is to bump the version
 number so that we don't mistakenly build a new package with the *same*
@@ -291,8 +288,7 @@ in crafting those specific packages.
 [security uploads]: https://www.debian.org/doc/manuals/developers-reference/pkgs.html#bug-security
 [non-maintainer uploads]: https://www.debian.org/doc/manuals/developers-reference/pkgs.html#nmu
 
-Changing package metadata
--------------------------
+## Changing package metadata
 
 If I needed to modify dependencies, I would have edited
 `debian/control` directly. Other modifications to the Debian package
@@ -303,8 +299,7 @@ package is built, but a good starting point is
 
 [Debian policy §4: Source packages]: https://www.debian.org/doc/debian-policy/ch-source.html
 
-Modifying the source code
--------------------------
+## Modifying the source code
 
 If I needed to modify the source tree *outside* `debian/`, I can do
 the modifications directly, then use `dpkg-source --commit` to
@@ -316,8 +311,7 @@ template when creating a new patch.
 [patch tagging guidelines]: http://dep.debian.net/deps/dep3/
 [quilt]: https://manpages.debian.org/quilt
 
-Applying patches
-----------------
+## Applying patches
 
 If I already have a patch I want to apply to the source tree, then
 [quilt][] is even *more* important. The first step is to import the
@@ -364,8 +358,7 @@ often extract the patch from a Git source tree fetched with
 Again, it's useful to add metadata to the patch and follow the
 [patch tagging guidelines][].
 
-Building the package
-====================
+# Building the package
 
 Now that we are satisfied with our modified package, we need to build
 it. The generic command to build a Debian package is
@@ -428,8 +421,7 @@ environment.
 
 For this, we need more powerful tools.
 
-Building in a clean environment
--------------------------------
+## Building in a clean environment
 
 I am using [[!man sbuild]] to build packages in a dedicated clean
 build environment. This means I can build packages for arbitrary
@@ -641,8 +633,7 @@ really is...
 
 <a name="offloading-cowpoke-and-debomatic" />
 
-Build servers
--------------
+## Build servers
 
 Sometimes, your machine is too slow to build this stuff yourself. If
 you have a more powerful machine lying around, you can send a source
@@ -710,8 +701,7 @@ have to build the package locally to be able to compare the results...
 [debomatic]: http://debomatic.github.io/
 [cowpoke]: https://manpages.debian.org/cowpoke
 
-Testing packages
-================
+# Testing packages
 
 Some packages have a built-in test suite which you should make sure
 runs properly during the build. Sometimes, backporting that test suite
@@ -723,8 +713,7 @@ also be used to see if the package cleans up properly after itself.
 [autopkgtest]: http://anonscm.debian.org/cgit/autopkgtest/autopkgtest.git/plain/doc/README.package-tests.rst
 [DEP8]: http://dep.debian.net/deps/dep8/
 
-With autopkgtest
-----------------
+## With autopkgtest
 
 When a package has self-testing enabled, it will be run by [Debian CI](https://ci.debian.net/)
 at various times. While there can be build-time tests, CI runs more
@@ -969,8 +958,7 @@ different mountpoint. Otherwise changes in the filesystem affect the
 parent host, in which case you can just copy over the chroot.
 """]]
 
-Uploading packages
-==================
+# Uploading packages
 
 Uploading packages can be done on your own personal archive if you
 have a webserver, using the following `~/.dput.cf` configuration:
@@ -1007,8 +995,7 @@ host your own Debian repository using [reprepro][] (Koumbit has some
 [official Debian archives]: https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#upload
 [backports]: http://backports.debian.org/Contribute/
 
-Further work and remaining issues
-=================================
+# Further work and remaining issues
 
 I am curious about other build environments which use Docker, Virtual
 machines or some sort of stronger isolation to build packages. Here

more tricks on autopkgtest VMs, thanks lazyweb!
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index a55e994b..3a8e2396 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -25,14 +25,16 @@ rely on qemu under the hood, certainly not chroots...
 
 I could also have decided to go with containers like LXC, LXD, Docker
 (with [conbuilder][], [whalebuilder][], [docker-buildpackage][]),
-systemd-nspawn (with [debspawn][]), or whatever: I didn't feel those
-offer the level of isolation that is provided by qemu.
+systemd-nspawn (with [debspawn][]), [unshare][] (with `schroot
+--chroot-mode=unshare`), or whatever: I didn't feel those offer the
+level of isolation that is provided by qemu.
 
 [conbuilder]: https://salsa.debian.org/federico/conbuilder
 [debspawn]: https://github.com/lkorigin/debspawn
 [docker-buildpackage]: https://github.com/metux/docker-buildpackage
 [qemubuilder]: https://wiki.debian.org/qemubuilder
 [whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
+[unshare]: https://floss.social/@vagrantc/108207501382862868
 
 The main downside of this approach is that it is (obviously) slower
 than native builds. But on modern hardware, that cost should be
@@ -87,52 +89,85 @@ This configuration will:
     default)
  4. tell `autopkgtest` to use `qemu` for builds *and* for tests
 
-# Remaining work
+Note that the VM created by `sbuild-qemu-create` has an unlocked root
+account with an empty password.
 
-One thing I haven't quite figured out yet is the equivalent of those
-two `schroot`-specific commands from my [[quick Debian development
-guide|software/debian-development]]:
+## Other useful tasks
 
- * `sbuild -c unstable-amd64-sbuild` - build in the `unstable` chroot even
-   though another suite is specified (e.g. `UNRElEASED`,
-   `unstable-backports` or `unstable-security`)
+ * enter the VM to make tests; changes will be discarded (thanks Nick
+   Brown for the `sbuild-qemu-boot` tip!):
+ 
+        sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
 
- * `schroot -c unstable-amd64-sbuild` - enter the `unstable` chroot to make
-   tests, changes will be discarded
+   That program ships only with bookworm and later; an equivalent
+   command is:
 
- * `sbuild-shell unstable` - enter the `unstable` chroot to make
-   *permanent* changes, which will *not* be discarded
+        qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
 
-In other words: "just give me a shell in that VM". It seems to me
-`autopkgtest-virt-qemu` should have a magic flag that does that, but
-it doesn't look like that's a thing. When that program starts, it just
-says `ok` and sits there. When `autopkgtest` massages it just the
-right way, however, it will do this funky commandline:
+   The key argument here is `-snapshot`.
 
-    qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+ * enter the VM to make *permanent* changes, which will *not* be
+   discarded:
+
+        sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
+
+   Equivalent command:
+
+        sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+ * update the VM (thanks lavamind):
+ 
+        sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
+
+ * build in a specific VM regardless of the suite specified in the
+   changelog (e.g. `UNRELEASED`, `bookworm-backports`,
+   `bookworm-security`, etc):
+
+        sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   Note that you'd also need to pass `--autopkgtest-opts` if you want
+   `autopkgtest` to run in the correct VM as well:
 
-... which is a typical qemu commandline, I regret to announce. I
-managed to somehow boot a VM similar to the one `autopkgtest`
-provisions with this magic incantation:
+        sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
 
-    mkdir tmp
-    cd tmp
-    qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
-    mkdir shared
-    qemu-system-x86_64 -m 4096 -smp 2  -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:$PWD/monitor,server,nowait -serial unix:$PWD/ttyS0,server,nowait -serial unix:$PWD/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=$PWD/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=$PWD/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+   You might also need parameters like `--ram-size` if you customized
+   it above.
 
-That gives you a VM like `autopkgtest` which has those peculiarities:
+And yes, this is all quite complicated and could be streamlined a
+little, but that's what you get when you have years of legacy and just
+want to get stuff done. It seems to me `autopkgtest-virt-qemu` should
+have a magic flag that starts a shell for you, but it doesn't look
+like that's a thing. When that program starts, it just says `ok` and
+sits there.
 
- * the `shared` directory is, well, shared with the VM
+Maybe that's because the authors consider the above to be simple
+enough (see also [bug #911977](https://bugs.debian.org/911977) for a discussion of this problem).
+
+## Live access to a running test
+
+When `autopkgtest` starts a VM, it uses this funky `qemu` commandline:
+
+    qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+
+... which is a [typical qemu commandline](https://lwn.net/Articles/872321/), I'm sorry to say. That
+gives us a VM with these settings (paths are relative to a temporary
+directory, `/tmp/autopkgtest-qemu.w1mlh54b/` in the above example):
+
+ * the `shared/` directory is, well, shared with the VM
 * port `10022` is forwarded to the VM's port `22`, presumably for SSH,
   but no SSH server is started by default
 * the `ttyS0` and `ttyS1` UNIX sockets are mapped to the first two
    serial ports (use `nc -U` to talk with those)
- * the `monitor` socket is a qemu control socket (see the [QEMU
-   monitor](https://people.redhat.com/pbonzini/qemu-test-doc/_build/html/topics/pcsys_005fmonitor.html) documentation)
+ * the `monitor` UNIX socket is a qemu control socket (see the [QEMU
+   monitor](https://people.redhat.com/pbonzini/qemu-test-doc/_build/html/topics/pcsys_005fmonitor.html) documentation, also `nc -U`)
+
+In other words, it's possible to access the VM with:
+
+    nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS0
 
-So I guess I could make a script out of this but for now this will
-have to be good enough.
+The `nc` socket interface is ... not great, but it works well
+enough. And you can probably fire up an SSHd to get a better shell if
+you feel like it.
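
One way to get that better shell, sketched here as an assumption rather than something `autopkgtest` sets up for you: log in on the serial console, start an SSH daemon inside the guest, then come in through the port forward noted above:

```shell
# attach to the first serial console; log in as root there
# and run something like: systemctl start ssh
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS0
# then, from another host terminal, use the 10022 -> 22 forward
ssh -p 10022 root@127.0.0.1
```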
 
 # Nitty-gritty details no one cares about
 
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 9de2a5dd..1af799b4 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -583,6 +583,39 @@ schroot. Just add this line to the relevant config in
     aliases=wheezy-security-amd64-sbuild,wheezy-backports-amd64-build
 """]]
 
+[[!tip """
+If you're using autopkgtest-qemu, the above is different and you
+should use these tips instead:
+
+ * enter the VM to make tests; changes will be discarded:
+ 
+        qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
+        qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic overlay.img
+
+ * enter the VM to make *permanent* changes, which will *not* be
+   discarded:
+ 
+        sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+ * update the VM:
+ 
+        sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
+
+ * build in a specific VM regardless of the suite specified in the
+   changelog (e.g. `UNRELEASED`, `bookworm-backports`,
+   `bookworm-security`, etc):
+
+        sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   Note that you'd also need to pass `--autopkgtest-opts` if you want
+   `autopkgtest` to run in the correct VM as well:
+
+        sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   You might also need parameters like `--ram-size` if you customized
+   it above.
+"""]]
+
 [[!note """
 I was previously using `pbuilder` and switched in 2017 to `sbuild`. [AskUbuntu.com has a good comparative between pbuilder and sbuild][]
 that shows they are pretty similar. The big advantage of sbuild is

approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_71942d0d1465887d78e25e4f9ca91813._comment b/blog/2022-04-27-sbuild-qemu/comment_1_71942d0d1465887d78e25e4f9ca91813._comment
new file mode 100644
index 00000000..78723e2b
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_71942d0d1465887d78e25e4f9ca91813._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="81.110.21.226"
+ claimedauthor="Nick Brown"
+ subject="sbuild-qemu-boot"
+ date="2022-04-28T11:11:01Z"
+ content="""
+I spotted that 'sbuild-qemu-boot' was [added in 0.83](https://salsa.debian.org/debian/sbuild/-/commit/2e426f2eac7a81771d3963a0737e1d8fa2b60a2e) that looks to provide console access to the vm, though I've not had a chance to experiment with it yet,  it might help with two of the task you mention in your \"Remaining work\" section.
+"""]]

merge notes about lsp between the two pages
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
index f7bb2d10..37757d33 100644
--- a/blog/2022-03-20-20-years-emacs.md
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -66,25 +66,17 @@ up.
    lsp-mode is uncool and I should really do eglot instead, and that
    doesn't help.
    
-   **UPDATE**: I finally got tired and switched to `lsp-mode`. The
-   main reason for choosing it over eglot is that it's in Debian (and
-   eglot is not). (Apparently, eglot has more chance of being
-   upstreamed, "when it's done", but I guess I'll cross that bridge
-   when I get there.) `lsp-mode` feels slower than `elpy` but I
-   haven't done *any* of the [performance tuning](https://emacs-lsp.github.io/lsp-mode/page/performance/) and this will
-   improve even more with native compilation (see below).
-   
-   I already had `lsp-mode` partially setup in Emacs so I only had to
-   do [this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
-   (because <kbd>s-l</kbd> or <kbd>mod</kbd> is used by my window
-   manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp) and
-   [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp).
+   **UPDATE**: I finally got tired and switched to `lsp-mode`. See
+   [[this post for details|blog/2022-04-27-lsp-in-debian]].
 
  * I am not using [projectile](https://projectile.mx/). It's on some of my numerous todo
    lists somewhere, surely. I suspect it's important to getting my
    projects organised, but I still live halfway between the terminal
    and Emacs, so it's not quite clear what I would gain.
 
+   **Update**: I *also* started using projectile, but I'm not sure I
+   like it.
+
  * I had to ask what [native compilation](https://www.emacswiki.org/emacs/GccEmacs) was or why it mattered
    the first time I heard of it. And when I saw it again in the
    article, I had to click through to remember.
diff --git a/blog/2022-04-27-lsp-in-debian.md b/blog/2022-04-27-lsp-in-debian.md
index 308eac3f..cef805cf 100644
--- a/blog/2022-04-27-lsp-in-debian.md
+++ b/blog/2022-04-27-lsp-in-debian.md
@@ -40,10 +40,21 @@ and this `.emacs` snippet:
       (define-key lsp-ui-mode-map [remap xref-find-definitions] #'lsp-ui-peek-find-definitions)
       (define-key lsp-ui-mode-map [remap xref-find-references] #'lsp-ui-peek-find-references))
 
-
 Note: this configuration might have changed since I wrote this, see
 [my init.el configuration for the most recent config](https://gitlab.com/anarcat/emacs-d/blob/master/init.el).
 
+The main reason for choosing `lsp-mode` over eglot is that it's in
+Debian (and eglot is not). (Apparently, eglot has more chance of being
+upstreamed, "when it's done", but I guess I'll cross that bridge when
+I get there.)
+   
+I already had `lsp-mode` partially setup in Emacs so I only had to do
+[this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
+(because <kbd>s-l</kbd> or <kbd>mod</kbd> is used by my window
+manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp) so that
+it properly detects `pylsp` (the older version in Debian bullseye only
+supports `pyls`, not packaged in Debian).
+
 This won't do anything by itself: Emacs will need *something* to talk
 with to provide the magic. Those are called "servers" and are
 basically different programs, for each programming language, that
@@ -72,14 +83,16 @@ Server" in the description (which also found a few more `pyls` plugins,
 e.g. `black` support).
 
 Note that the Python packages, in particular, need to be upgraded to
-their bookworm releases to work properly. It seems like there's some
-interoperability problems there that I haven't quite figured out
-yet. See also my [Puppet configuration for LSP](https://gitlab.com/anarcat/puppet/-/blob/main/site-modules/profile/manifests/lsp.pp).
+their bookworm releases to work properly ([here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp)). It seems like
+there's some interoperability problems there that I haven't quite
+figured out yet. See also my [Puppet configuration for LSP](https://gitlab.com/anarcat/puppet/-/blob/main/site-modules/profile/manifests/lsp.pp).
 
 Finally, note that I have now completely switched away from [Elpy](https://elpy.readthedocs.io/)
-to pyls, and I'm quite happy with the results. It's slower, but it
-is much more powerful. I particularly like the "rename symbol"
-functionality, which ... mostly works.
+to pyls, and I'm quite happy with the results. `lsp-mode` feels slower
+than `elpy` but I haven't done *any* of the [performance tuning](https://emacs-lsp.github.io/lsp-mode/page/performance/)
+and this will improve even more with native compilation. And
+`lsp-mode` is much more powerful. I particularly like the "rename
+symbol" functionality, which ... mostly works.
 
 # Remaining work
 

remove link(foo) test
This is a test to regenerate the index, which doesn't seem to update
correctly for the two new articles.
diff --git a/blog.mdwn b/blog.mdwn
index 36640dc4..4d4862b7 100644
--- a/blog.mdwn
+++ b/blog.mdwn
@@ -16,7 +16,6 @@
   or tagged(blog)
 )
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 show="3"
@@ -72,7 +71,6 @@ trail=yes
 )
 and creation_year(2022)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -88,7 +86,6 @@ quick=yes
 )
 and creation_year(2021)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -104,7 +101,6 @@ quick=yes
 )
 and creation_year(2020)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -120,7 +116,6 @@ quick=yes
 )
 and creation_year(2019)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -136,7 +131,6 @@ quick=yes
 )
 and creation_year(2018)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -152,7 +146,6 @@ quick=yes
 )
 and creation_year(2017)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -168,7 +161,6 @@ quick=yes
 )
 and creation_year(2016)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -184,7 +176,6 @@ quick=yes
 )
 and creation_year(2015)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -200,7 +191,6 @@ quick=yes
 )
 and creation_year(2014)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -216,7 +206,6 @@ quick=yes
 )
 and creation_year(2013)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -232,7 +221,6 @@ quick=yes
 )
 and creation_year(2012)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -248,7 +236,6 @@ quick=yes
 )
 and creation_year(2011)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -264,7 +251,6 @@ quick=yes
 )
 and creation_year(2010)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -280,7 +266,6 @@ quick=yes
 )
 and creation_year(2009)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -296,7 +281,6 @@ quick=yes
 )
 and creation_year(2008)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -312,7 +296,6 @@ quick=yes
 )
 and creation_year(2007)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -328,7 +311,6 @@ quick=yes
 )
 and creation_year(2006)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -344,7 +326,6 @@ quick=yes
 )
 and creation_year(2005)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes

publish another draft I had lying around
diff --git a/blog/lsp-in-debian.md b/blog/2022-04-27-lsp-in-debian.md
similarity index 56%
rename from blog/lsp-in-debian.md
rename to blog/2022-04-27-lsp-in-debian.md
index 92b179d2..308eac3f 100644
--- a/blog/lsp-in-debian.md
+++ b/blog/2022-04-27-lsp-in-debian.md
@@ -1,11 +1,15 @@
+[[!meta title="Using LSP in Emacs and Debian"]]
+
 The [Language Server Protocol](https://microsoft.github.io/language-server-protocol/) (LSP) is a neat mechanism that
 provides a common interface to what used to be language-specific
 lookup mechanisms (like, say, running a Python interpreter in the
-background to find function definitions). There *is* also [ctags](https://en.wikipedia.org/wiki/Ctags)
-shipped with UNIX since forever, but that doesn't support looking
-*backwards* ("who uses this function") or linting. In short, LSP
-rocks, and how do I use it right now in my editor of choice (Emacs, in
-my case) and OS (Debian) please?
+background to find function definitions). 
+
+There *is* also [ctags](https://en.wikipedia.org/wiki/Ctags) shipped with UNIX since forever, but that
+doesn't support looking *backwards* ("who uses this function"),
+linting, or refactoring. In short, LSP rocks, and how do I use it
+right now in my editor of choice (Emacs, in my case) and OS (Debian)
+please?
 
 # Editor (emacs) setup
 
@@ -19,18 +23,26 @@ and this `.emacs` snippet:
 
     (use-package lsp-mode
       :commands (lsp lsp-deferred)
+      :hook ((python-mode go-mode) . lsp-deferred)
       :demand t
       :init
       (setq lsp-keymap-prefix "C-c l")
+      ;; TODO: https://emacs-lsp.github.io/lsp-mode/page/performance/
+      ;; also note re "native compilation": <+varemara> it's the
+      ;; difference between lsp-mode being usable or not, for me
       :config
       (setq lsp-auto-configure t))
 
-Note: this configuration might have changed since I wrote this, see
-[my init.el configuration for the most recent config](https://gitlab.com/anarcat/emacs-d/blob/master/init.el). Extras I'm
-considering:
+    (use-package lsp-ui
+      :config
+      (setq lsp-ui-flycheck-enable t)
+      (add-to-list 'lsp-ui-doc-frame-parameters '(no-accept-focus . t))
+      (define-key lsp-ui-mode-map [remap xref-find-definitions] #'lsp-ui-peek-find-definitions)
+      (define-key lsp-ui-mode-map [remap xref-find-references] #'lsp-ui-peek-find-references))
 
-    (lsp-mode . lsp-enable-which-key-integration)
-    :hook (python-mode . lsp-deferred) ; and other modes...
+
+Note: this configuration might have changed since I wrote this, see
+[my init.el configuration for the most recent config](https://gitlab.com/anarcat/emacs-d/blob/master/init.el).
 
 This won't do anything by itself: Emacs will need *something* to talk
 with to provide the magic. Those are called "servers" and are
@@ -40,8 +52,8 @@ provide the magic.
 # Servers setup
 
 The Emacs package provides a way (<kbd>M-x lsp-install-server</kbd>)
-to install *some* of them, but I prefer to manage those tools (just
-like `lsp-mode` itself) through Debian packages if possible. Those are
+to install *some* of them, but I prefer to manage those tools through
+Debian packages if possible, just like `lsp-mode` itself. Those are
 the servers I currently know of in Debian:
 
 | package                   | languages          |
@@ -59,6 +71,16 @@ that didn't find `ccls`, for example, because that just said "Language
 Server" in the description (which also found a few more `pyls` plugins,
 e.g. `black` support).
 
+Note that the Python packages, in particular, need to be upgraded to
+their bookworm releases to work properly. It seems like there's some
+interoperability problems there that I haven't quite figured out
+yet. See also my [Puppet configuration for LSP](https://gitlab.com/anarcat/puppet/-/blob/main/site-modules/profile/manifests/lsp.pp).
+
+Finally, note that I have now completely switched away from [Elpy](https://elpy.readthedocs.io/)
+to pyls, and I'm quite happy with the results. It's slower, but it
+is much more powerful. I particularly like the "rename symbol"
+functionality, which ... mostly works.
+
 # Remaining work
 
 ## Puppet and Ruby
@@ -67,22 +89,17 @@ I still have to figure how to actually use this: I mostly spend my
 time in Puppet these days, there is no server listed in the [Emacs
 lsp-mode language list][], but there *is* one listed over at the
 [upstream language list][], the [puppet-editor-services](https://github.com/puppetlabs/puppet-editor-services)
-server. But it's not packaged in Debian, and seems
-somewhat... involved. Would still be a huge boost. The [Voxpupuli
-team](https://voxpupuli.org/) have [vim install instructions](https://voxpupuli.org/blog/2019/04/08/puppet-lsp-vim/) which also suggest
-installing [solargraph](https://github.com/castwide/solargraph), the Ruby language server, also not
-packaged in Debian.
+server. 
+
+But it's not packaged in Debian, and seems somewhat... involved. It
+could still be a good productivity boost. The [Voxpupuli team](https://voxpupuli.org/) have
+[vim install instructions](https://voxpupuli.org/blog/2019/04/08/puppet-lsp-vim/) which also suggest installing
+[solargraph](https://github.com/castwide/solargraph), the Ruby language server, also not packaged in
+Debian.
 
 [Emacs lsp-mode language list]: https://emacs-lsp.github.io/lsp-mode/page/languages/
 [upstream language list]: https://microsoft.github.io/language-server-protocol/implementors/servers/
 
-## Python
-
-When I'm not in Puppet land, I'm mostly in Python, and there I am
-usually in [Elpy](https://elpy.readthedocs.io/). It's unclear to me if LSP totally replaces Elpy,
-or if they work alongside each other, so that's another thing I need
-to look into.
-
 ## Bash
 
 I guess I do a bit of shell scripting from time to time nowadays, even
@@ -97,9 +114,4 @@ Here are more language servers available:
  * [Emacs lsp-mode language list][]: all servers known to the Emacs
    mode
 
-## Overall
-
-Basically, I'm not using this at all right now and those are just
-notes...
-
-[[!tag draft]]
+[[!tag emacs programming editor debian debian-planet python-planet]]

publish, minimal edits
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index b58658b4..a55e994b 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -34,6 +34,10 @@ offer the level of isolation that is provided by qemu.
 [qemubuilder]: https://wiki.debian.org/qemubuilder
 [whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
 
+The main downside of this approach is that it is (obviously) slower
+than native builds. But on modern hardware, that cost should be
+minimal.
+
 # How
 
 Basically, you need this:
@@ -86,16 +90,17 @@ This configuration will:
 # Remaining work
 
 One thing I haven't quite figured out yet is the equivalent of those
-two `schroot`-specific commands:
+three `schroot`-specific commands from my [[quick Debian development
+guide|software/debian-development]]:
 
- * `sbuild -c wheezy-amd64-sbuild` - build in the `wheezy` chroot even
+ * `sbuild -c unstable-amd64-sbuild` - build in the `unstable` chroot even
   though another suite is specified (e.g. `UNRELEASED`,
-   `wheezy-backports` or `wheezy-security`)
+   `unstable-backports` or `unstable-security`)
 
- * `schroot -c wheezy-amd64-sbuild` - enter the `wheezy` chroot to make
+ * `schroot -c unstable-amd64-sbuild` - enter the `unstable` chroot to make
    tests, changes will be discarded
 
- * `sbuild-shell wheezy` - enter the `wheezy` chroot to make
+ * `sbuild-shell unstable` - enter the `unstable` chroot to make
    *permanent* changes, which will *not* be discarded
 
 In other words: "just give me a shell in that VM". It seems to me
@@ -189,4 +194,4 @@ bizarre.
 
 Thanks lavamind for the introduction to the `sbuild-qemu` package.
 
-[[!tag debian-planet debian packaging python-planet draft]]
+[[!tag debian-planet debian packaging python-planet]]

do not list drafts
diff --git a/tag/draft.mdwn b/tag/draft.mdwn
index dd27d9d9..5b5aa619 100644
--- a/tag/draft.mdwn
+++ b/tag/draft.mdwn
@@ -1,4 +1,3 @@
 [[!meta title="pages tagged draft"]]
 
-[[!inline pages="tagged(draft)" actions="no" archive="yes"
-feedshow=10]]
+Deliberately unlisted.

a short tutorial about sbuild
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
new file mode 100644
index 00000000..b58658b4
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -0,0 +1,192 @@
+[[!meta title="building Debian packages under qemu with sbuild"]]
+
+I've been using [sbuild](https://wiki.debian.org/sbuild) for a while to build my Debian packages,
+mainly because it's what is used by the [Debian autobuilders](https://wiki.debian.org/buildd), but
+also because it's pretty powerful and efficient. Configuring it *just
+right*, however, can be a challenge. In my [[quick Debian development
+guide|software/debian-development]], I had a few pointers on how to
+configure sbuild with the normal `schroot` setup, but today I finished
+a [qemu](http://www.qemu.org/) based configuration.
+
+[[!toc]]
+
+# Why
+
+I want to use qemu mainly because it provides better isolation than a
+[chroot](https://en.wikipedia.org/wiki/Chroot). I sponsor packages sometimes and while I typically audit
+the source code before building, it still feels like the extra
+protection shouldn't hurt.
+
+I also like the idea of unifying my existing virtual machine setup
+with my build setup. My current VM is kind of all over the place:
+[libvirt](https://libvirt.org/), [vagrant](https://www.vagrantup.com/), [GNOME Boxes](https://wiki.gnome.org/Apps/Boxes), etc. I've been slowly
+converging on libvirt, however, and most solutions I use right now
+rely on qemu under the hood, certainly not chroots...
+
+I could also have decided to go with containers like LXC, LXD, Docker
+(with [conbuilder][], [whalebuilder][], [docker-buildpackage][]),
+systemd-nspawn (with [debspawn][]), or whatever: I didn't feel those
+offer the level of isolation that is provided by qemu.
+
+[conbuilder]: https://salsa.debian.org/federico/conbuilder
+[debspawn]: https://github.com/lkorigin/debspawn
+[docker-buildpackage]: https://github.com/metux/docker-buildpackage
+[qemubuilder]: https://wiki.debian.org/qemubuilder
+[whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
+
+# How
+
+Basically, you need this:
+
+    sudo mkdir -p /srv/sbuild/qemu/
+    sudo apt install sbuild-qemu
+    sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
+
+Then to make this used by default, add this to `~/.sbuildrc`:
+
+    # run autopkgtest inside the schroot
+    $run_autopkgtest = 1;
+    # tell sbuild to use autopkgtest as a chroot
+    $chroot_mode = 'autopkgtest';
+    # tell autopkgtest to use qemu
+    $autopkgtest_virt_server = 'qemu';
+    # tell autopkgtest-virt-qemu the path to the image
+    # use --debug there to show what autopkgtest is doing
+    $autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
+    # tell plain autopkgtest to use qemu, and the right image
+    $autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
+    # no need to cleanup the chroot after build, we run in a completely clean VM
+    $purge_build_deps = 'never';
+    # no need for sudo
+    $autopkgtest_root_args = '';
+
+Note that the above will use the default autopkgtest (1GB, one core)
+and qemu (128MB, one core) configuration, which might be a little low
+on resources. You probably want to be explicit about this, with
+something like this:
+
+    # extra parameters to pass to qemu
+    # --enable-kvm is not necessary, detected on the fly by autopkgtest
+    my @_qemu_options = ('--ram-size=4096', '--cpus=2');
+    # tell autopkgtest-virt-qemu the path to the image
+    # use --debug there to show what autopkgtest is doing
+    $autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
+    $autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-%a.img'];
+
+This configuration will:
+
+ 1. create a virtual machine image in `/srv/sbuild/qemu` for
+    `unstable`
+ 2. tell `sbuild` to use that image to create a temporary VM to build
+    the packages
+ 3. tell `sbuild` to run `autopkgtest` (which should really be
+    default)
+ 4. tell `autopkgtest` to use `qemu` for builds *and* for tests
+
+# Remaining work
+
+One thing I haven't quite figured out yet is the equivalent of those
+two `schroot`-specific commands:
+
+ * `sbuild -c wheezy-amd64-sbuild` - build in the `wheezy` chroot even
+   though another suite is specified (e.g. `UNRELEASED`,
+   `wheezy-backports` or `wheezy-security`)
+
+ * `schroot -c wheezy-amd64-sbuild` - enter the `wheezy` chroot to make
+   tests, changes will be discarded
+
+ * `sbuild-shell wheezy` - enter the `wheezy` chroot to make
+   *permanent* changes, which will *not* be discarded
+
+In other words: "just give me a shell in that VM". It seems to me
+`autopkgtest-virt-qemu` should have a magic flag that does that, but
+it doesn't look like that's a thing. When that program starts, it just
+says `ok` and sits there. When `autopkgtest` massages it just the
+right way, however, it will do this funky commandline:
+
+    qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+
+... which is a typical qemu commandline, I regret to announce. I
+managed to somehow boot a VM similar to the one `autopkgtest`
+provisions with this magic incantation:
+
+    mkdir tmp
+    cd tmp
+    qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
+    mkdir shared
+    qemu-system-x86_64 -m 4096 -smp 2  -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:$PWD/monitor,server,nowait -serial unix:$PWD/ttyS0,server,nowait -serial unix:$PWD/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=$PWD/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=$PWD/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+
+That gives you a VM like `autopkgtest` which has those peculiarities:
+
+ * the `shared` directory is, well, shared with the VM
+ * port `10022` is forwarded to the VM's port `22`, presumably for SSH,
+   but no SSH server is started by default
+ * the `ttyS0` and `ttyS1` UNIX sockets are mapped to the first two
+   serial ports (use `nc -U` to talk with those)
+ * the `monitor` socket is a qemu control socket (see the [QEMU
+   monitor](https://people.redhat.com/pbonzini/qemu-test-doc/_build/html/topics/pcsys_005fmonitor.html) documentation)
+
+So I guess I could make a script out of this but for now this will
+have to be good enough.
+
+# Nitty-gritty details no one cares about
+
+I'm having a hard time making heads or tails of this, but please bear
+with me.
+
+In `sbuild` + `schroot`, there's this notion that we don't really need
+to clean up after ourselves inside the schroot, as the schroot will
+just be deleted anyways. This behavior *seems* to be handled by the
+internal "Session Purged" parameter.
+
+At least in lib/Sbuild/Build.pm, we can see this:
+
+    my $is_cloned_session = (defined ($session->get('Session Purged')) &&
+			     $session->get('Session Purged') == 1) ? 1 : 0;
+
+    [...]
+    
+    if ($is_cloned_session) {
+	$self->log("Not cleaning session: cloned chroot in use\n");
+    } else {
+	if ($purge_build_deps) {
+	    # Removing dependencies
+	    $resolver->uninstall_deps();
+	} else {
+	    $self->log("Not removing build depends: as requested\n");
+	}
+    }
+
+The `schroot` builder defines that parameter as:
+
+	    $self->set('Session Purged', $info->{'Session Purged'});
+
+... which is ... a little confusing to me. `$info` is:
+
+    my $info = $self->get('Chroots')->get_info($schroot_session);
+
+... so I presume that depends on whether the schroot was correctly
+cleaned up? I stopped digging there...
+
+`ChrootUnshare.pm` is way more explicit:
+
+    $self->set('Session Purged', 1);
+
+I wonder if we should do something like this with the autopkgtest
+backend. I guess people might *technically* use it with something else
+than qemu, but qemu is the typical use case of the autopkgtest
+backend, in my experience. Or at least certainly with things that
+cleanup after themselves. Right?
+
+For some reason, before I added this line to my configuration:
+
+    $purge_build_deps = 'never';
+
+... the "Cleanup" step would just completely hang. It was quite
+bizarre.
+
+# Who
+
+Thanks lavamind for the introduction to the `sbuild-qemu` package.
+
+[[!tag debian-planet debian packaging python-planet draft]]
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index beaea8e9..9de2a5dd 100644

(truncated diff)
more sbuild configuration cleanups
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 9e7cdae2..beaea8e9 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -471,11 +471,10 @@ This assumes that:
 [[!note """
 To use qemu, use this instead:
 
+    sudo mkdir -p /srv/sbuild/qemu/
     sudo apt install sbuild-qemu
     sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
 
-The `/srv/sbuild/qemu` directory needs to exist.
-
 Then to make this used by default, add this to `~/.sbuildrc`:
 
     # run autopkgtest inside the schroot
@@ -709,6 +708,9 @@ To use qemu in autopkgtest, do this instead:
 
     sudo autopkgtest-build-qemu unstable unstable-amd64.img
     autopkgtest libotr_4.1.1-3_amd64.changes -- qemu unstable-amd64.img
+
+Also note that if you followed the `sbuild-qemu` configuration above,
+this is already done by default at the end of your `sbuild` run.
 """]]
 
 That's it! Tests can also be run against the current directory, in

fix typo in sbuild config
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 097b5ef6..9e7cdae2 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -505,7 +505,7 @@ something like this:
     # tell autopkgtest-virt-qemu the path to the image
     # use --debug there to show what autopkgtest is doing
     $autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
-    $autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img'];
+    $autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-%a.img'];
 """]]
 
 The above will create chroots for all the main suites and two

more parameters to the sbuild configuration
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index bef5ca4e..097b5ef6 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -478,10 +478,34 @@ The `/srv/sbuild/qemu` directory needs to exist.
 
 Then to make this used by default, add this to `~/.sbuildrc`:
 
+    # run autopkgtest inside the schroot
+    $run_autopkgtest = 1;
+    # tell sbuild to use autopkgtest as a chroot
     $chroot_mode = 'autopkgtest';
+    # tell autopkgtest to use qemu
     $autopkgtest_virt_server = 'qemu';
+    # tell autopkgtest-virt-qemu the path to the image
+    # use --debug there to show what autopkgtest is doing
     $autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
+    # tell plain autopkgtest to use qemu, and the right image
     $autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
+    # no need to cleanup the chroot after build, we run in a completely clean VM
+    $purge_build_deps = 'never';
+    # no need for sudo
+    $autopkgtest_root_args = '';
+
+Note that the above will use the default autopkgtest (1GB, one core)
+and qemu (128MB, one core) configuration, which might be a little low
+on resources. You probably want to be explicit about this, with
+something like this:
+
+    # extra parameters to pass to qemu
+    # --enable-kvm is not necessary, detected on the fly by autopkgtest
+    my @_qemu_options = ['--ram-size=4096', '--cpus=2'];
+    # tell autopkgtest-virt-qemu the path to the image
+    # use --debug there to show what autopkgtest is doing
+    $autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
+    $autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img'];
 """]]
 
 The above will create chroots for all the main suites and two

sbuild-qemu
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 366eae0b..bef5ca4e 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -468,6 +468,22 @@ This assumes that:
         sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/unstable-amd64-sbuild.tar.gz unstable --chroot-prefix unstable-tar `mktemp -d` http://deb.debian.org/debian
 """]]
 
+[[!note """
+To use qemu, use this instead:
+
+    sudo apt install sbuild-qemu
+    sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
+
+The `/srv/sbuild/qemu` directory needs to exist.
+
+Then to make this used by default, add this to `~/.sbuildrc`:
+
+    $chroot_mode = 'autopkgtest';
+    $autopkgtest_virt_server = 'qemu';
+    $autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
+    $autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
+"""]]
+
 The above will create chroots for all the main suites and two
 architectures, using [debootstrap][]. You may of course modify this to
 taste based on your requirements and available disk space. My build
@@ -660,16 +676,16 @@ can be ran locally. First, install it:
 
     sudo apt install autopkgtest
 
-Then create a test environment:
-
-    sudo autopkgtest-build-qemu unstable debian-sid-amd64-autopkgtest.qcow2
+Then you can run the tests against the built package:
 
-[[!tip """We use qemu here even though we use schroot normally. We're
-looking at unifying this, see below."""]]
+    autopkgtest libotr_4.1.1-3_amd64.changes -- schroot unstable-amd64-sbuild
 
-Then you can run the tests against the built package:
+[[!note """
+To use qemu in autopkgtest, do this instead:
 
-    autopkgtest libotr_4.1.1-3_amd64.changes -- qemu debian-sid-amd64-autopkgtest.qcow2
+    sudo autopkgtest-build-qemu unstable unstable-amd64.img
+    autopkgtest libotr_4.1.1-3_amd64.changes -- qemu unstable-amd64.img
+"""]]
 
 That's it! Tests can also be run against the current directory, in
 which case autopkgtest will build the package for you first, also in
@@ -722,8 +738,10 @@ for more information. Once we can build packages with KVM (or
 autopkgtest with vagrant), this guide should be updated to only use
 that everywhere.
 
-**Update**: [qemu-sbuild](https://www.kvr.at/posts/qemu-sbuild-utils-merged-into-sbuild/) is the new kid in town here, and it *was*
-merged into sbuild, so it should make the above "unification" possible!
+**Update**: [sbuild-qemu](https://www.kvr.at/posts/qemu-sbuild-utils-merged-into-sbuild/) is the new kid in town here, and it *was*
+merged into sbuild, so it makes unification much easier. I have
+peppered this tutorial with references to qemu accordingly, and so far
+it works for me.
 </div>
 
 [Vagrant]: https://www.vagrantup.com/

another laptop?
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 2fb03303..f2020631 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -295,6 +295,15 @@ layout](https://en.wikipedia.org/wiki/QWERTY#United_Kingdom). Doh! pgup, pgdown,
 with "meta" which is annoying, but common. I asked about the other
 keys, to be continued.
 
+The [starbook](https://starlabs.systems/pages/starbook) also looks interesting, see [this hackernews
+discussion which compare it to the Framework](https://news.ycombinator.com/item?id=31034024).
+
+> They do some things better than Framework such as supporting Ryzen
+> processors, and seem a bit cheaper overall. The battery life seems
+> like it would be better. They have a spare parts store as well and a
+> full disassembly guide as well as an "open warranty". I was never a
+> fan of Framework's swappable ports.
+
 Pyra
 ----
 

final edit
diff --git a/blog/2022-04-13-wifi-tuning.md b/blog/2022-04-13-wifi-tuning.md
new file mode 100644
index 00000000..73275564
--- /dev/null
+++ b/blog/2022-04-13-wifi-tuning.md
@@ -0,0 +1,127 @@
+[[!meta title="Tuning my wifi radios"]]
+
+After listening to an episode of the [2.5 admins podcast](https://2.5admins.com/), I
+realized there was some sort of low-hanging fruit I could pick to
+better tune my WiFi at home. You see, I'm kind of a fraud in WiFi: I
+only [started a WiFi mesh in Montreal](https://wiki.reseaulibre.ca/) (now defunct), I don't
+*really* know how any of that stuff works. So I was surprised to hear
+one of the podcast hosts say "it's all about airtime" and "you want to
+reduce the power on your access points" (APs). It seemed like sound
+advice: better bandwidth means less time on air, means less
+collisions, less latency, and less power also means less
+collisions. Worth a try, right?
+
+# Frequency
+
+So the first thing I looked at was [WifiAnalyzer](https://vremsoftwaredevelopment.github.io/WiFiAnalyzer/) to see if I had
+any optimisation I could do there. Normally, I try to avoid having
+nearby APs on the same frequency to avoid collisions, but who knows,
+maybe I had messed that up. And turns out I did! Both APs were on
+"auto" for 5GHz, which typically means "do nothing or worse".
+
+5GHz is really interesting, because, in theory, there are [LOTS of
+channels][] to pick from, it goes up to 196!! And both my APs were on
+36, what gives?
+
+[LOTS of channels]: https://en.wikipedia.org/wiki/List_of_WLAN_channels#5_GHz_(802.11a/h/j/n/ac/ax)
+
+So the first thing I did was to set it to channel 100, as there was
+that long gap in WifiAnalyzer where *no* other AP was. But that just
+broke 5GHz on the AP. The OpenWRT GUI (luci) would just say "wireless
+not associated" and the ESSID wouldn't show up in a scan anymore.
+
+At first, I thought this was a problem with OpenWRT or my hardware,
+but I could reproduce the problem with both my APs: a [TP-Link Archer
+A7 v5](https://openwrt.org/toh/tp-link/archer_a7_v5) and a [Turris Omnia](https://openwrt.org/toh/turris/turris_omnia) (see also [my review](https://anarc.at/blog/2016-11-15-omnia/)).
+
+As it turns out, that's because that range of the WiFi band interferes
+with trivial things like satellites and radar, which make the actually
+very useful radar maps look like useless christmas trees. So those
+channels require [DFS](https://en.wikipedia.org/wiki/Dynamic_frequency_selection) to operate. DFS works by first listening on
+the frequency for a certain amount of time (1-2 minutes, but could be
+as high as 10) to see if there's something else transmitting at all.
+
+So typically, that means they just don't operate at all in those
+bands, especially if you're near any major city which generally means
+you *are* near a weather radar that *will* transmit on that band.
+
+In the system logs, if you have such a problem, you might see this:
+
+    Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
+    Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1
+
+... and/or this:
+
+    Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
+    Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
+    Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel
+
+Here, it clearly says `RADAR` (in all caps too, which means it's
+really important). `NO-IR` is also important, I'm not sure what it
+means but it could be that you're not allowed to transmit in that band
+because of other local regulations. 
+
+There might be a way to work around those by changing the "region" in
+the Luci GUI, but I didn't mess with that, because I figured that
+other devices will have *that* already configured. So using a
+forbidden channel might make it more difficult for clients to connect
+(although it's possible this is enforced only on the AP side).
+
+In any case, 5GHz is promising, but in reality, you only get from
+channel 36 (5.170GHz) to 48 (5.250GHz), inclusively. Fast counters
+will notice that is *exactly* 80MHz, which means that if an AP is
+configured for that hungry, all-powerful 80MHz, it will effectively
+take up *all* 5GHz channels at once.
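As a sanity check on those numbers (my own arithmetic, not from the sources above): 5GHz channel numbers map to center frequencies as 5000 + 5 × channel, in MHz, and the 5.170/5.250GHz figures are the outer edges of the 20MHz-wide channels 36 and 48:

```shell
# 5GHz channel number to center frequency, in MHz: 5000 + 5 * channel
chan2mhz() { echo $((5000 + 5 * $1)); }

chan2mhz 36    # 5180; with 10MHz on either side, the band starts at 5.170GHz
chan2mhz 48    # 5240; upper edge at 5.250GHz, so 36-48 is exactly 80MHz
chan2mhz 100   # 5500, matching the freq=5500 in the DFS-CAC-START log above
```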
+
+This, in other words, is as bad as 2.4GHz, where you also have only
+two 40MHz channels. (Really, what did you expect: this is an
+unregulated frequency controlled by commercial interests...)
+
+So the first thing I did was to switch to 40MHz. This gives me two
+distinct channels in 5GHz at no noticeable bandwidth cost. (In fact, I
+couldn't find hard data on what the bandwidth ends up being on those
+frequencies, but I could still get 400Mbps which is fine for my use
+case.)
+
+# Power
+
+The next thing I did was to fiddle with power. By default, both radios
+were configured to transmit as much power as they needed to reach
+clients, which means that if a client gets farther away, it would
+boost its transmit power which, in turns, would mean the client would
+still connect to instead of *failing* and properly roaming to the
+other AP. 
+
+The higher power also means more interference with neighbors and other
+APs, although that matters less if they are on different channels.
+
+On 5GHz, power was about 20dBm (100 mW) -- and more on the Turris! --
+when I first looked, so I tried to lower it drastically to 5dBm (3mW)
+just for kicks. That didn't work so well, so I bumped it back up to 14
+dBm (25 mW) and that seems to work well: clients hit about -80dBm when
+they get far enough from the AP, which gets close to the noise floor
+(and where the neighbor APs are), which is exactly what I want.
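(The dBm/milliwatt pairs above are just the usual mW = 10^(dBm/10) conversion; a quick shell sketch to double-check them, using awk for the floating point math:)

```shell
# dBm to milliwatts, rounded: mW = 10 ^ (dBm / 10)
dbm2mw() { awk -v dbm="$1" 'BEGIN { printf "%.0f\n", 10 ^ (dbm / 10) }'; }

dbm2mw 20   # 100, roughly where both radios started
dbm2mw 14   # 25, where I ended up on 5GHz
dbm2mw 5    # 3, the value that was too aggressive
```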
+
+On 2.4GHz, I lowered it even further, to 10 dBm (10mW): since it's
+better at going through walls, I figured it would need less power. And
+anyways, I'd rather people use the 5GHz APs, so maybe that will act as
+an encouragement to switch. I was still able to connect correctly to
+the APs at that power as well.
+
+# Other tweaks
+
+I disabled the "Allow legacy 802.11b rates" setting in the 5GHz
+configuration. According to [this discussion](https://forum.openwrt.org/t/clarification-on-allow-legacy-802-11b-rates-in-advance-setting/65429):
+
+> Checking the "Allow b rates" affects what the AP will transmit. In
+> particular it will send most overhead packets including beacons,
+> probe responses, and authentication / authorization as the slow,
+> noisy, 1 Mb DSSS signal. That is bad for you and your neighbors. Do
+> not check that box. The default really should be unchecked.
+
+This, in particular, "*will make the AP unusable to distant clients,
+which again is a good thing for public wifi in general*". So I just
+unchecked that box and I feel happier now. I didn't make tests to see
+the effect separately however, so this is mostly just a guess.
+
+[[!tag wifi radio debian-planet python-planet openwrt]]
diff --git a/blog/radio.md b/blog/radio.md
deleted file mode 100644
index aafb7041..00000000
--- a/blog/radio.md
+++ /dev/null
@@ -1,234 +0,0 @@
-[[!meta title="Tweaking my wifi radios"]]
-
-After listening to an episode of the [2.5 admins podcast](https://2.5admins.com/), I
-realized there was some sort of low-hanging fruit I could pick to
-better tune my WiFi at home. You see, I'm kind of a fraud in WiFi: I
-only [started a WiFi mesh in Montreal](https://wiki.reseaulibre.ca/) (now defunct), I don't
-*really* know how any of that stuff works. So I was surprised to hear
-one of the podcast host say "it's all about airtime" and "you want to
-reduce the power on your APs". It seemed like sound advice: better
-bandwidth means less time on air, means less collisions, less latency,
-and less power also means less collisions. Worth a try, right?
-
-# Frequency
-
-So the first thing I looked at was [WifiAnalyzer](https://vremsoftwaredevelopment.github.io/WiFiAnalyzer/) to see if I had
-any optimisation I could do there. Normally, I try to avoid having
-nearby APs on the same frequency to avoid collisions, but who knows,
-maybe I had messed that up. And turns out I did! Both APs were on
-"auto" for 5GHz, which typically means "do nothing or worse".
-
-5GHz is really interesting, because, in theory, there are [LOTS of
-channels][] to pick from, it goes up to 196!! And both my APs were on
-36, what gives?
-
-[LOTS of channels]: https://en.wikipedia.org/wiki/List_of_WLAN_channels#5_GHz_(802.11a/h/j/n/ac/ax)
-
-So the first thing I did was to set it to channel 100, as there was
-that long gap in WifiAnalyzer where *no* other AP was. But that just
-broke 5GHz on the AP. The OpenWRT GUI (luci) would just say "wireless
-not associated" and the ESSID wouldn't show up in a scan anymore.
-
-As it turns out, that's because that range of the WiFi band interferes
-with trivial things like satellites and radar, which make the actually
-very useful radar maps look like useless christmas trees. So those
-channels require [DFS](https://en.wikipedia.org/wiki/Dynamic_frequency_selection) to operate and, typically, that means they
-just don't operate at all, especially if you're near any major city
-which, typically, means you *are* near a weather radar that *will*
-transmit on that band.
-
-In the system logs, if you have such a problem, you might see this:
-
-    Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
-    Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1
-
-... and/or this:
-
-    Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
-    Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
-    Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel
-
-Here, it clearly says `RADAR` (in all caps too, which means it's
-really important). `NO-IR` is also important, I'm not sure what it
-means but it could be that you're not allowed to transmit in that band
-because of other local regulations.
-
-In any case, 5GHz is promising, but in reality, you only get from
-channel 36 (5.170GHz) to 48 (5.250GHz), inclusively. Fast counters
-will notice that is *exactly* 80MHz, which means that if an AP is
-configured for that hungry, all-powerful 80MHz, it will effectively
-take up *all* 5GHz channels at once.
-

(truncated diff)
start draft radio article
diff --git a/blog/radio.md b/blog/radio.md
new file mode 100644
index 00000000..aafb7041
--- /dev/null
+++ b/blog/radio.md
@@ -0,0 +1,234 @@
+[[!meta title="Tweaking my wifi radios"]]
+
+After listening to an episode of the [2.5 admins podcast](https://2.5admins.com/), I
+realized there was some sort of low-hanging fruit I could pick to
+better tune my WiFi at home. You see, I'm kind of a fraud in WiFi: I
+only [started a WiFi mesh in Montreal](https://wiki.reseaulibre.ca/) (now defunct), I don't
+*really* know how any of that stuff works. So I was surprised to hear
+one of the podcast host say "it's all about airtime" and "you want to
+reduce the power on your APs". It seemed like sound advice: better
+bandwidth means less time on air, means less collisions, less latency,
+and less power also means less collisions. Worth a try, right?
+
+# Frequency
+
+So the first thing I looked at was [WifiAnalyzer](https://vremsoftwaredevelopment.github.io/WiFiAnalyzer/) to see if I had
+any optimisation I could do there. Normally, I try to avoid having
+nearby APs on the same frequency to avoid collisions, but who knows,
+maybe I had messed that up. And turns out I did! Both APs were on
+"auto" for 5GHz, which typically means "do nothing or worse".
+
+5GHz is really interesting because, in theory, there are [LOTS of
+channels][] to pick from: they go up to 196! And yet both my APs were
+on 36, what gives?
+
+[LOTS of channels]: https://en.wikipedia.org/wiki/List_of_WLAN_channels#5_GHz_(802.11a/h/j/n/ac/ax)
+
+So the first thing I did was to set it to channel 100, as there was
+that long gap in WifiAnalyzer where *no* other AP was. But that just
+broke 5GHz on the AP. The OpenWRT GUI (luci) would just say "wireless
+not associated" and the ESSID wouldn't show up in a scan anymore.
+
+As it turns out, that's because that range of the WiFi band interferes
+with trivial things like satellites and weather radar, making the
+actually very useful radar maps look like useless Christmas trees. So
+those channels require [DFS](https://en.wikipedia.org/wiki/Dynamic_frequency_selection) to operate and, typically, that means they
+just don't operate at all, especially if you're near any major city,
+which usually means you *are* near a weather radar that *will*
+transmit on that band.
+
+In the system logs, if you have such a problem, you might see this:
+
+    Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
+    Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1
+
+... and/or this:
+
+    Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
+    Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
+    Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel
+
+Here, it clearly says `RADAR` (in all caps too, which means it's
+really important). `NO-IR` is also important: it stands for "no
+initiating radiation", which means the device is not allowed to
+*initiate* transmissions on that channel, so it cannot bring up an AP
+there on its own.
+
+In any case, 5GHz is promising, but in reality, you only get from
+channel 36 (5.170GHz) to 48 (5.250GHz), inclusive. Fast counters
+will notice that span is *exactly* 80MHz, which means that if an AP is
+configured for that hungry, all-powerful 80MHz, it will effectively
+take up *all* 5GHz channels at once.
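
For the curious, 5GHz channel numbers map onto center frequencies with
the standard `5000 + 5 × channel` MHz formula, which is an easy way to
sanity-check the math above (a generic sketch, not tied to any
particular device):

```python
# Standard 5GHz WiFi channel numbering: center frequency in MHz is
# 5000 + 5 * channel (so channel 100 -> 5500 MHz, matching the
# "freq=5500 chan=100" hostapd log above).
def channel_to_mhz(channel):
    return 5000 + 5 * channel

# The four usable 20MHz non-DFS channels discussed here:
assert [channel_to_mhz(ch) for ch in (36, 40, 44, 48)] == [5180, 5200, 5220, 5240]

# ... which together cover exactly 4 x 20MHz = 80MHz of spectrum:
assert (channel_to_mhz(48) - channel_to_mhz(36)) + 20 == 80
```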
+
+This, in other words, is as bad as 2.4GHz, where you also have only
+two 40MHz channels. (Really, what did you expect: this is an
+unlicensed band controlled by commercial interests...)
+
+So what I did instead was to switch to 40MHz. This gives me two
+distinct channels in 5GHz at no noticeable bandwidth cost. (In fact, I
+couldn't find hard data on what the bandwidth ends up being on those
+frequencies, but I could still get 100+Mbps which is fine for my use
+case.)
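
On OpenWrt, that change boils down to two options on the 5GHz radio in
`/etc/config/wireless`. A minimal sketch, assuming the 5GHz radio is
named `radio0` (names vary by device):

    config wifi-device 'radio0'
            option channel '44'
            option htmode 'VHT40'

followed by `uci commit wireless` and `wifi` to apply.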
+
+# Power
+
+
+
+6:40:12 <@anarcat> but i have no idea, haven't looked
+17:27:45 <@anarcat> i'm trying to tweak the radio frequency on my TP-link Archer A7 V5 on the 5GHz band to something else than "auto" to reduce interference. channel 36 works, channel 44 also good, but when i set it to channel 52 in OpenWRT, the radio just doesn't work anymore. Luci GUI says "wireless not associated" and the SSID doesn't show up in scans. wtf?
+17:27:49 <@anarcat> pollo: ^ any ideas?
+17:35:43 <+taggart> anarcat: 52-144 require DFS https://en.wikipedia.org/wiki/List_of_WLAN_channels#United_States
+17:35:44 <+peoplesmic> List of WLAN channels - Wikipedia (at en.wikipedia.org)
+17:35:51 <+taggart> I bet that is messing with things
+17:36:39 <@anarcat> taggart: what is DFS?
+17:36:49 <+taggart> https://en.wikipedia.org/wiki/Dynamic_frequency_selection
+17:36:50 <+peoplesmic> Dynamic frequency selection - Wikipedia (at en.wikipedia.org)
+17:37:02 <+taggart> the AP has to listen to make sure it can't hear anything before using it
+17:37:04 <@anarcat> oh i see
+17:37:28 <@anarcat> damn so 5GHz is basically just that one 80MHz channel then?
+17:37:31 <@anarcat> that's bonkers
+17:37:34 <+taggart> IIRC the luci gui might have some DFS option things
+17:37:50 <+taggart> but I pretty much try to stay out of that range
+17:38:24 <@anarcat> how about 149+
+17:38:30 <+taggart> often the higher channels aren't occupied
+17:38:34 <+taggart> yeah I use those
+17:38:56 <+taggart> I have an android app that does a survey and I walk around with it
+17:39:09 <+taggart> to see what is in use and pick stuff
+17:39:33 <@anarcat> yeah i use wifi analyzer too
+17:39:39 <@anarcat> i was wondering why the heck no one was on 100-144
+17:39:41 <+taggart> in a few installs I have 2-3 APs in the same building and want to put them on their own channels, but also avoid neighbors
+17:39:47 <+taggart> (they are wired backhaul)
+17:40:22 <@anarcat> yeah, i have a wired backhaul too and i'm trying to tune 5ghz
+17:40:28 <@anarcat> everything is on fucking 36 now
+17:40:40 <@anarcat> 149 doesn't work either
+17:40:57 <@anarcat> taggart: 40 or 80MHz?
+17:41:03 <+taggart> I haven't messed with channel bandwidth much, but I think the higher data rates require it
+17:41:16 <+taggart> so then you are trying to make sure you have a whole band free
+17:41:51 <@anarcat> but then there's no band free
+17:42:00 <@anarcat> everyone has 5g radios now and is screaming like hell
+17:42:17 <+taggart> my APs are 802.11ac, I don't have any ax stuff yet
+17:42:29 <@anarcat> 802.11r?
+17:42:33 <@anarcat> what's ax?
+17:43:02 <+taggart> aka WiFi6 https://en.wikipedia.org/wiki/IEEE_802.11ax
+17:43:03 <+peoplesmic> Wi-Fi 6 - Wikipedia (at en.wikipedia.org)
+17:43:32 <@anarcat> ah yeah no not here either
+17:43:53 <@anarcat> i've been struggling to figure out wth the bandwidth looks like for 40MHz vs 80 and couldn't figure it out either
+17:44:33 <@anarcat> taggart: do you tweak radio power to limit interference?
+17:44:47 <+taggart> in my neighborhood most of the other APs I can see all have "xfinity" or CenturyLink style ESSIDs, all in the default lowest channel and taking up a wide band
+17:45:25 <@anarcat> see i'm on 40MHz now and my phone has 400mbps
+17:45:26 <+taggart> anarcat: on my ubiquiti ones I have them set to do that (but not change channels, I hard code those)
+17:45:28 <@anarcat> 10dbm
+17:45:40 <@anarcat> taggart: yeah but to what?
+17:46:23 <@anarcat> i think i'm going to switch to 40MHz
+17:46:37 <+taggart> they do it dynamically since they manage all the APs. so I think periodically each AP listens for the others and adjusts, but I'm not sure
+17:46:38 <@anarcat> apparently, the narrower bandwidth helps dealing with distance and obstacles
+17:46:52 <@anarcat> taggart: yeah i don't have that luxury here
+17:46:53 <+taggart> I have them on separate channels so many it doesn't anyway now that I think about it
+17:47:39 <+taggart> as for your neighbors it's a prisoners dilemma and I suspect the default routers people get from ISPs don't care at all about the neighbors
+17:48:43 <+taggart> every time I work on this I think "maybe I should just paint that side of the house with wifi blocking paint"
+17:50:02 <@anarcat> aka "metal"? :p
+17:50:06 <+taggart> yeah
+17:50:22 <@anarcat> "i should make my house a farraday cage" is not going to do wonders with your phone calls though :p
+17:50:25 <+taggart> if you were replacing siding or drywall you could do that
+17:50:28 <@anarcat> i mean maybe you want that too
+17:50:43 <+taggart> if wifi calling works well, maybe it would be a win
+18:02:26 <@anarcat> re dfs 17:58:49 <Tapper> anarcat - 22:20 after setting a 5ghz channel to a dfs one you need to wate for 60 secs for the radio to do a scan for radar
+18:02:31 <@anarcat> from #openwrt
+18:05:05 <@anarcat> taggart: ^
+18:18:50 <@anarcat> Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
+18:18:51 <@anarcat> Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1
+18:18:53 <@anarcat> interesting
+18:19:23 <+taggart> yeah so it was DFS
+18:20:22 <+taggart> the whole "we need to make the radio firmware proprietary" argument mostly was justified by stuff like that "but if we don't then people won't implement DFS correctly", etc
+18:21:37 <@anarcat> that was on the turris
+18:21:43 <@anarcat> the tplink didn'T even get there
+
+
+
+
+#openwrt
+
+17:20:32 <anarcat> hi
+17:20:36 | Joins: Warped3 ── Quits: arminwei-, duxsco | 17:20:44
+17:20:46 <anarcat> i have a tplink archer here, and i'm trying to set the frequency on 5Ghz
+17:21:26 <anarcat> if i leave it at auto, it works. if i set it to channel 36, it works too. channel 44, also good. then, i set it to channel 52 and blip! the radio disappears
+17:21:53 <anarcat> the luci GUI says "wireless is not associated" and a wifi scan on my phone (wifi analyzer or regular android wifi settings) doesn't show the AP anymore
+17:21:56 <anarcat> wth is going on?
+17:22:09 <anarcat> is the radio falling over whenever i cross 5260MHz?
+17:24:36 <anarcat> the device is TP-Link AC1750 v5
+17:25:59 | Quits: Warped3, ecloud | 17:27:43
+17:29:18 <anarcat> i have the same problem with luci on my Turris Omnia as well, i *must* be doing something wrong
+17:34:25 <Lynx-> what is a Turssis Omnia?
+17:35:01 | Joins: ecloud
+17:36:05 <anarcat> this: https://anarc.at/blog/2016-11-15-omnia/
+17:36:31 <anarcat> or this https://openwrt.org/toh/turris/turris_omnia dpeending on what you want to know
+17:36:56 <Lynx-> thnx
+17:38:57 | Joins: Leoneof
+17:41:07 <anarcat> 17:22:09 <anarcat> is the radio falling over whenever i cross 5260MHz?
+17:41:25 <anarcat> apparently that could be a regulation issue, 52+ is DFS/TPC https://en.wikipedia.org/wiki/List_of_WLAN_channels#United_States
+17:44:38 | Joins: duxsco, Guest1571 ── Quits: Haaninjo_, minimal, Grommish ── Nicks: arminweigl → Guest1571 | 17:58:25
+17:58:49 <Tapper> anarcat - 22:20 after setting a 5ghz channel to a dfs one you need to wate for 60 secs for the radio to do a scan for radar
+17:59:29 <Tapper> Then your wifi will start working as long as it does not find any radar on that channel.
+17:59:47 <Tapper> Same gos for you anarcat
+17:59:48 | Quits: arminwei|
+18:02:19 <anarcat> oh nice
+18:04:05 | Joins: Guest1572 ── Quits: Leoneof, Guest1571 ── Nicks: arminweigl_ → Guest1572 | 18:05:58
+18:06:42 <anarcat> Tapper: i have jut tried to wait 60 seconds after switching to channel 100 and i still don't see the AP
+18:07:16 <Tapper> Have a look in your logs to see what it says about the radio.
+18:07:32 <Tapper> That was to anarcat
+18:07:51 <anarcat> Tapper: system log? kernel log? sorry i'm clicking through luci here :)
+18:08:04 <Tapper> System log
+18:08:12 <anarcat> ah, that's telling:
+18:08:12 <anarcat> Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
+18:08:12 <anarcat> Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
+18:08:13 <anarcat> Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel
+18:08:29 <anarcat> i'm AC, should i do N?
+18:08:43 <anarcat> Tapper: ^
+18:09:39 <Tapper> anarcat no mate you could just stick with channel 36 if you want. It will still be fast
+18:10:08 <anarcat> Tapper: yeah well there's a lot of shit on 36 :p
+18:10:26 <anarcat> i switched to 40MHz and 44 at least, so that my two APs won't fight with each other at least
+18:10:34 <Tapper> Or jump up to a channel above 100
+18:10:36 <anarcat> but it would sure be nice to use all those precious ones :p
+18:10:47 <anarcat> yeah but 100 yells at me with the above
+18:11:02 <anarcat> or you mean like 104?
+18:11:09 <Tapper> It should go up to about 150 or 160
+18:11:38 <Tapper> You want to keep your channels 80 mhz wide

(truncated diff)
fix broken link
diff --git "a/services/r\303\251seau.mdwn" "b/services/r\303\251seau.mdwn"
index 9af2f191..dc5f0728 100644
--- "a/services/r\303\251seau.mdwn"
+++ "b/services/r\303\251seau.mdwn"
@@ -35,7 +35,7 @@ La topologie implique généralement:
  1. internet
  2. modem DSL
  3. [[hardware/octavia]] (router, Turris Omnia)
-    1. [[hardware/marcos]]
+    1. [[hardware/server/marcos]]
     2. [[hardware/rosa]]
        1. [[hardware/ursula]] (vero, home cinema)
     3. ATA (Cisco SPA-112 VoIP adapter)

notice stuff move
diff --git "a/services/r\303\251seau/plan.dia" "b/services/r\303\251seau/plan.dia"
index 10639ca9..15fca7bd 100644
Binary files "a/services/r\303\251seau/plan.dia" and "b/services/r\303\251seau/plan.dia" differ
diff --git "a/services/r\303\251seau/plan.svg" "b/services/r\303\251seau/plan.svg"
index 779a1c15..dab7ef3c 100644
Binary files "a/services/r\303\251seau/plan.svg" and "b/services/r\303\251seau/plan.svg" differ

todo
diff --git "a/services/r\303\251seau/crapn.mdwn" "b/services/r\303\251seau/crapn.mdwn"
index c8824bae..fd98120d 100644
--- "a/services/r\303\251seau/crapn.mdwn"
+++ "b/services/r\303\251seau/crapn.mdwn"
@@ -1,5 +1,7 @@
 [[!meta title="Remplacement des services de communication au Crap'N"]]
 
+TODO: merge with above? wtf *is* this.
+
 J'aimerais remplacer le service internet au CrapN par un autre service internet qui me permetterait de déménager mon serveur (nommé "marcos") ainsi que les différents [[services]] que je gère présentement à la maison. En résumé, les services sont:
 
 * cinéma maison avec large librairie de films et de musique

minimal atwood docs
diff --git a/hardware/atwood.md b/hardware/atwood.md
new file mode 100644
index 00000000..f27ab3f9
--- /dev/null
+++ b/hardware/atwood.md
@@ -0,0 +1 @@
+Turris mox router, named after Margaret Atwood.
diff --git a/services/dns.mdwn b/services/dns.mdwn
index 18e612d2..c9a2c13f 100644
--- a/services/dns.mdwn
+++ b/services/dns.mdwn
@@ -72,7 +72,7 @@ femmes. Exemples utilisés:
 
  * [[hardware/angela]] ([Davis](https://en.wikipedia.org/wiki/Angela_Davis))
  * bell ([Hooks](https://en.wikipedia.org/wiki/Bell_hooks))
- * ([Margaret](https://en.wikipedia.org/wiki/Margaret_Atwood)) Atwood
+ * ([Margaret](https://en.wikipedia.org/wiki/Margaret_Atwood)) [[hardware/atwood]]
  * ([Marie](https://en.wikipedia.org/wiki/Marie_Curie)) [[hardware/curie]]
  * ([Richard](https://en.wikipedia.org/wiki/Richard_Dawkins)) dawkins
  * [[hardware/emma]] ([Goldman](https://en.wikipedia.org/wiki/Emma_Goldman))

document rosa in network map, make text-only network map
diff --git "a/services/r\303\251seau.mdwn" "b/services/r\303\251seau.mdwn"
index 24388a46..9af2f191 100644
--- "a/services/r\303\251seau.mdwn"
+++ "b/services/r\303\251seau.mdwn"
@@ -1,4 +1,5 @@
-Le réseau est constitué d'une ensemble d'interconnexions [[!wikipedia gigabit]] et d'un réseau [[wifi]] <del>connecté en partie avec le [[mesh]]</del> et un pare-feu roulant sous FreeBSD avec `pf`, nommée `roadkiller`.
+Le réseau est constitué d'un ensemble d'interconnexions [[!wikipedia
+gigabit]] et d'un réseau [[wifi]], avec un uplink DSL.
 
 Update: ipv6 and dns work better with new router. see good test page: <http://en.conn.internet.nl/connection/>.
 
@@ -29,6 +30,25 @@ Plan du réseau
 
   [1]: plan.svg "IP addresses specified if present, otherwise model number detailed."
 
+La topologie implique généralement:
+
+ 1. internet
+ 2. modem DSL
+ 3. [[hardware/octavia]] (router, Turris Omnia)
+    1. [[hardware/marcos]]
+    2. [[hardware/rosa]]
+       1. [[hardware/ursula]] (vero, home cinema)
+    3. ATA (Cisco SPA-112 VoIP adapter)
+
+La configuration DHCP d'octavia inclut aussi des static leases pour:
+
+ * [[hardware/server/mafalda]] (ex print server, now run by octavia/marcos)
+ * [[hardware/server/plastik]] (ex wifi bridge, now spare at the office)
+ * [[hardware/dawkins]] (? ex wifi bridge?)
+ * kobo - Kobo Clara?
+ * [[hardware/curie]] (workstation, moved to the office)
+ * [[hardware/atwood]] (Turris Mox router, unused)
+
 Vitesse
 =======
 
diff --git "a/services/r\303\251seau/plan.dia" "b/services/r\303\251seau/plan.dia"
index dda5eeda..10639ca9 100644
Binary files "a/services/r\303\251seau/plan.dia" and "b/services/r\303\251seau/plan.dia" differ
diff --git "a/services/r\303\251seau/plan.svg" "b/services/r\303\251seau/plan.svg"
index 0cedaf50..779a1c15 100644
Binary files "a/services/r\303\251seau/plan.svg" and "b/services/r\303\251seau/plan.svg" differ

framework notes
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 4b389477..2fb03303 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -41,6 +41,24 @@ Framework
 
 <https://frame.work/>
 
+Specs:
+
+ * 1300$USD for i5 with 8GB RAM, 256GB storage, Windows 10
+ * DIY kit: 1250$USD for i5 with 16GB RAM, 500GB NVMe, no OS (+100$
+   for 1TB)
+ * 3.5mm combo headphone jack
+ * 60W USB-C
+ * 55 Wh battery, between 2h and 10h battery
+ * 1080p 60fps camera
+ * backlit keyboard
+ * 1.3kg, 15.85mm x 296.63mm x 228.98mm
+ * 1 year warranty
+ * 13.5” 3:2, 2256x1504, 100% sRGB color gamut, and >400 nit
+ * Intel® Wi-Fi 6E AX210
+ * fingerprint reader
+
+Pros:
+
  * easily repairable (qrcodes pointing to repair guides!), [10/10
    score from ifixit.com](https://www.ifixit.com/News/51614/framework-laptop-teardown-10-10-but-is-it-perfect), which they call "exceedingly rare"
  * four modular USB-C ports which can fit HDMI, USB-C (pass-through,
@@ -48,33 +66,39 @@ Framework
    external storage (250GB, 1TB), [active modding community](https://community.frame.work/c/expansion-cards/developer-program/90)
  * replaceable mainboard
  * first two batches shipped, third batch sold out, fourth batch ships
-   in October 2021
- * 1300$USD for i5 with 8GB RAM, 256GB storage, Windows 10
- * DIY kit: 1250$USD for i5 with 16GB RAM, 500GB NVMe, no OS (+100$
-   for 1TB)
+   in October 2021, generally keeps up with shipping
  * good reviews: [Ars Technica](https://arstechnica.com/gadgets/2021/07/frameworks-new-lightweight-modular-laptop-delivers-on-its-promises/), [Fedora developer](https://www.scrye.com/wordpress/nirik/2021/08/29/frame-work-laptop-the-hyperdetailed-fedora-review/), [iFixit
    teardown](https://www.ifixit.com/News/51614/framework-laptop-teardown-10-10-but-is-it-perfect), more critical review: [OpenBSD developer](https://jcs.org/2021/08/06/framework)
  * [test account on fwupd.org](https://fwupd.org/lvfs/vendors/#framework), "expressed interest to port to
    coreboot" (according to the Fedora developer) and [are testing
    firmware updates over fwupd](https://community.frame.work/t/framework-firmware-on-the-lvfs/4466), supposedly "plans to release
    firmware updates there" (according to the Fedora dev)
+ * [phoronix review](https://www.phoronix.com/scan.php?page=article&item=framework-laptop&num=1)
  * [excellent documentation of the (proprietary) BIOS](https://community.frame.work/t/bios-guide/4178/1)
- * 3.5mm combo headphone jack
- * 60W USB-C
- * 55 Wh battery, between 2h and 10h battery, described as "mediocre"
-   by Ars Technica, certainly not up to the Dell XPS 13 standard, but
-   may be better or equivalent than my current (2021-09-27) laptop
-   (Purism 13v4, currently says 7h)
- * 1080p 60fps camera
- * backlit keyboard, annoying LED around power button
- * 1.3kg, 15.85mm x 296.63mm x 228.98mm
- * 1 year warranty
- * 13.5” 3:2, 2256x1504, 100% sRGB color gamut, and >400 nit
- * Intel® Wi-Fi 6E AX210
+ * amazing keyboard and touchpad, according to [Linux After Dark][linux-after-dark-framework]
+
+Cons:
+
+ * annoying LED around power button
+
  * fan quiet when idle, but can be noisy when running
- * fingerprint reader
 
-https://www.phoronix.com/scan.php?page=article&item=framework-laptop&num=1
+ * battery described as "mediocre" by Ars Technica (above), certainly
+   not up to the Dell XPS 13 standard, but may be better than or
+   equivalent to my current (2021-09-27) laptop (Purism 13v4,
+   currently says 7h). Power problems confirmed by [this report from
+   Linux After Dark][linux-after-dark-framework] which also mentions that the USB adapters
+   draw power *even when not in use*, and quite a bit of it (400mW in
+   some cases!)
+
+[linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
+
+ * no RJ-45 port, and attempts at designing ones are failing because
+   the modular plugs are too thin to fit (according to [Linux After
+   Dark][linux-after-dark-framework]), so unlikely to have one in the future
+
+ * a bit pricey for the performance, especially when compared to the
+   competition (e.g. Dell XPS, Apple M1), but it may be worth waiting
+   for the second generation
 
 GPD pocket
 ----------

of course LWN did the article
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 37cf333c..a36b1315 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -140,4 +140,8 @@ through the inevitable debian-devel flamewar to figure it out. I
 already wreaked havoc on the `#debian-devel` IRC channel asking newbie
 questions so I won't stir that mud any further for now.
 
+(Update: LWN, of course, *did* make an [article about usrmerge in
+Debian](https://lwn.net/Articles/890219/). I will read it soon and can then let you know if it's
+brilliant, but they are typically spot on.)
+
 [[!tag debian-planet debian packaging]]

Revert "do not assume zigo's gender, hope i'm not screwing up this one"
I had a repsonse from zigo that he identifies as male.
This reverts commit 8832e41a9dce9ab831097cb1ec705555b4891be6.
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index b32d65ac..37cf333c 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -81,11 +81,11 @@ babies and resurrect your cat!
 It went well!
 
 The old maintainer was [actually fine with the change](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964718#10) because
-their team wasn't using the package anymore anyways. They asked to be
+his team wasn't using the package anymore anyways. He asked to be
 kept as an uploader, which I was glad to oblige. 
 
-(They replied a few months after the deadline, but I wasn't in a rush
-anyways, so that doesn't matter. It was polite for them to answer, even
+(He replied a few months after the deadline, but I wasn't in a rush
+anyways, so that doesn't matter. It was polite for him to answer, even
 if, technically, I was already allowed to take it over.)
 
 What happened next is less shiny for me though. I totally forgot about

do not assume zigo's gender, hope i'm not screwing up this one
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 1bd6d1a6..b32d65ac 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -81,11 +81,11 @@ babies and resurrect your cat!
 It went well!
 
 The old maintainer was [actually fine with the change](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964718#10) because
-their team wasn't using the package anymore anyways. He asked to be
+their team wasn't using the package anymore anyways. They asked to be
 kept as an uploader, which I was glad to oblige. 
 
-(He replied a few months after the deadline, but I wasn't in a rush
-anyways, so that doesn't matter. It was polite for him to answer, even
+(They replied a few months after the deadline, but I wasn't in a rush
+anyways, so that doesn't matter. It was polite for them to answer, even
 if, technically, I was already allowed to take it over.)
 
 What happened next is less shiny for me though. I totally forgot about

try to tone down the hyperbole, obviously too late
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 743b3e60..1bd6d1a6 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -112,12 +112,20 @@ But at least my kingdom is growing.
 
 # Appendix
 
-Just in case someone didn't get the metaphor, I'm not a monarchist
-promoting feodalism as a practice to manage a community. I do not
+Just in case someone didn't notice the hyperbole, I'm not a monarchist
+promoting feudalism as a practice to manage a community. I do not
 intend to really "grow my kingdom" and I think the culture around
 "property" of "packages" is kind of absurd in Debian. I kind of wish
 it would go away.
 
+(Update: It has also been pointed out that I might have made Debian
+seem more confrontational than it actually is. And it's kind of true:
+most work and interactions in Debian actually go fine, it's only a
+minority of issues that degenerate into conflicts. It's just that they
+tend to take up a lot of space in the community, and I find that
+particularly draining. And I think our "package ownership"
+culture is part of at least some of those problems.)
+
 [Team maintenance](https://wiki.debian.org/Teams), the [LowNMU](https://wiki.debian.org/LowThresholdNmu) process, and [low threshold
 adoption](https://wiki.debian.org/LowThresholdAdoption) processes are all steps in the good direction, but they
 are all *opt in*. At least the package salvaging process is somewhat a

clarify the issue: it's not with the CTTE
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index b9994989..743b3e60 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -124,12 +124,12 @@ are all *opt in*. At least the package salvaging process is someone a
 little more ... uh... coercive? Or at least it allows the community
 to step in and do the right thing, in a sense.
 
-We'll see what happens with the coming wars around the tech committee,
-which are bound to touch on that topic. (Hint: our next drama is
-called "usrmerge".) Hopefully, [LWN](https://lwn.net/) will make a brilliant article
-to sum it up for us so that I don't have to go through the inevitable
-debian-devel flamewar to figure it out. I already wrecked havoc on the
-`#debian-devel` IRC channel asking newbie questions so I won't stir
-that mud any further for now.
+We'll see what happens with the coming wars around the recent tech
+committee decision, which are bound to touch on that topic. (Hint: our
+next drama is called "usrmerge".) Hopefully, [LWN](https://lwn.net/) will make a
+brilliant article to sum it up for us so that I don't have to go
+through the inevitable debian-devel flamewar to figure it out. I
+already wreaked havoc on the `#debian-devel` IRC channel asking newbie
+questions so I won't stir that mud any further for now.
 
 [[!tag debian-planet debian packaging]]

clarify what "it" is
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 5fff2e6e..b9994989 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -89,9 +89,9 @@ anyways, so that doesn't matter. It was polite for him to answer, even
 if, technically, I was already allowed to take it over.)
 
 What happened next is less shiny for me though. I totally forgot about
-the ITS, even after the maintainer reminded me of it. See, the thing
-is the ITS doesn't show up on my [dashboard](https://qa.debian.org/developer.php?login=anarcat@debian.org) at all. So I totally
-forgot about it (yes, twice).
+the ITS, even after the maintainer reminded me of its existence. See,
+the thing is the ITS doesn't show up on my [dashboard](https://qa.debian.org/developer.php?login=anarcat@debian.org) at all. So I
+totally forgot about it (yes, twice).
 
 In fact, the only reason I remembered it was that I got into the process
 of formulating *another* ITS ([1008753](https://bugs.debian.org/1008753), [trocla](https://tracker.debian.org/pkg/trocla)) and I was trying

link to the trocla package
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 5a4c5cf9..5fff2e6e 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -94,7 +94,7 @@ is the ITS doesn't show up on my [dashboard](https://qa.debian.org/developer.php
 forgot about it (yes, twice).
 
 In fact, the only reason I remembered it was that I got into the process
-of formulating *another* ITS ([1008753](https://bugs.debian.org/1008753), trocla) and I was trying
+of formulating *another* ITS ([1008753](https://bugs.debian.org/1008753), [trocla](https://tracker.debian.org/pkg/trocla)) and I was trying
 to figure out how to write the email. Then I remembered: "*hey wait, I
 think I did this before!*" followed by "*oops, yes, I totally did this
 before and forgot for 9 months*".

gender neutrality matters
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 67064ab6..5a4c5cf9 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -17,7 +17,7 @@ know who you are) can skip to the next section.
 Traditionally, the [Debian project](https://debian.org/) (my Linux-based operating
 system of choice) has prided itself on the self-managed, anarchistic
 organisation of its packaging. Each package maintainer is the lord of
-his little kingdom. Some maintainers like to accumulate lots of
+their little kingdom. Some maintainers like to accumulate lots of
 kingdoms to rule over.
 
 (Yes, it really doesn't sound like anarchism when you put it like
@@ -29,8 +29,8 @@ they don't want. Typically, when things go south, someone makes a
 complaint to the [Debian Technical Committee](https://www.debian.org/devel/tech-ctte) (CTTE) which is
 established by the [Debian constitution](https://www.debian.org/devel/constitution) to resolve such
 conflicts. The committee is appointed by the Debian Project leader,
-himself elected each year (and there's an [election coming up](https://www.debian.org/vote/2022/vote_002) if
-you haven't heard).
+elected each year (and there's an [election coming up](https://www.debian.org/vote/2022/vote_002) if you
+haven't heard).
 
 Typically, the CTTE will then vote and formulate a decision. But
 here's the trick: maintainers are still free to do whatever they want

edits publish
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
index 474c9418..67064ab6 100644
--- a/blog/2022-03-31-first-package-salvaged.md
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -20,37 +20,51 @@ organisation of its packaging. Each package maintainer is the lord of
 his little kingdom. Some maintainers like to accumulate lots of
 kingdoms to rule over.
 
-It's *really* hard to make package maintainers do something they don't
-want. Typically, when things go south, someone makes a complaint to
-the [Debian Technical Committee](https://www.debian.org/devel/tech-ctte) (CTTE) which is established by the
-[Debian constitution](https://www.debian.org/devel/constitution) to resolve such conflicts. The committee is
-appointed by the Debian Project leader, himself elected each year (and
-there's an [election coming up](https://www.debian.org/vote/2022/vote_002) if you haven't heard).
-
-Typically, the CTTE will then vote on the bug and formulate a
-decision. But here's the trick: maintainers are still free to do
-whatever they want after that, in a sense. It's not like the CTTE can
-just break down doors and force maintainers to type code. (I won't go
-into the details of the *why* of that, but it involves legal issues
-and, I think, something about the Turing halting problem.)
-
-Anyways. Point is all that is super heavy and no one wants to go
-there. And sometimes, packages just get lost. Maintainers get
-distracted, or busy with something else. It's not that they *want* to
-abandon their packages. They love their little fiefdoms. It's just
-there was a famine or a war or something and everyone died, and they
-have better things to do than put up fences or whatever.
-
-So clever people in Debian found a better way (than the CTTE) of
-handling such problems. It's called the [Package Salvaging](https://wiki.debian.org/PackageSalvaging)
-process. Through that process, a maintainer can propose to take over
-an existing package from another maintainer, if a [certain set of
-conditions are met and a specific process is followed](https://www.debian.org/doc/manuals/developers-reference/pkgs.en.html#package-salvaging).
-
-Normally, taking over another's package is basically a war
+(Yes, it really doesn't sound like anarchism when you put it like
+that. Yes, it's complicated: there's a constitution and voting
+involved. And yes, we're old.)
+
+Therefore, it's *really* hard to make package maintainers do something
+they don't want. Typically, when things go south, someone makes a
+complaint to the [Debian Technical Committee](https://www.debian.org/devel/tech-ctte) (CTTE) which is
+established by the [Debian constitution](https://www.debian.org/devel/constitution) to resolve such
+conflicts. The committee is appointed by the Debian Project leader,
+himself elected each year (and there's an [election coming up](https://www.debian.org/vote/2022/vote_002) if
+you haven't heard).
+
+Typically, the CTTE will then vote and formulate a decision. But
+here's the trick: maintainers are still free to do whatever they want
+after that, in a sense. It's not like the CTTE can just break down
+doors and force maintainers to type code. 
+
+(I won't go into the details of the *why* of that, but it involves
+legal issues and, I think, something about the [Turing halting
+problem](https://en.wikipedia.org/wiki/Halting_problem). Or something like that.)
+
+Anyways. The point is all that is super heavy and no one wants to go
+there...
+
+(Know-it-all Debian developers, I know you are still reading this
+anyways and disagree with that statement, but please, please, make it
+true.)
+
+... but sometimes, packages just get lost. Maintainers get distracted,
+or busy with something else. It's not that they *want* to abandon
+their packages. They love their little fiefdoms. It's just there was a
+famine or a war or something and everyone died, and they have better
+things to do than put up fences or whatever.
+
+So clever people in Debian found a better way of handling such
+problems than waging war in the poor old CTTE's backyard. It's called
+the [Package Salvaging](https://wiki.debian.org/PackageSalvaging) process. Through that process, a maintainer
+can propose to take over an existing package from another maintainer,
+if a [certain set of conditions are met and a specific process is
+followed](https://www.debian.org/doc/manuals/developers-reference/pkgs.en.html#package-salvaging).
+
+Normally, taking over another maintainer's package is basically a war
 declaration, rarely seen in the history of Debian (yes, I do think it
-happened though), as rowdy as ours is. But through this process, it
-seems we have found a fair way of going forward.
+happened!), as rowdy as ours is. But through this process, it seems we
+have found a fair way of going forward.
 
 The process is basically like this:
 
@@ -59,6 +73,9 @@ The process is basically like this:
  3. upload a package making the change, with another week delay
  4. you now have one more package to worry about
 
+Easy right? It actually is! Process! It's magic! It will cure your
+babies and resurrect your cat!
+
 # So how did that go?
 
 It went well!
@@ -72,21 +89,21 @@ anyways, so that doesn't matter. It was polite for him to answer, even
 if, technically, I was already allowed to take it over.)
 
 What happened next is less shiny for me though. I totally forgot about
-the ITS. See, the thing is the ITS doesn't show up on my
-[dashboard](https://qa.debian.org/developer.php?login=anarcat@debian.org) at all. So I totally forgot about it until very
-recently.
-
-In fact, the only reason I found out about it was that I got into the
-process of formulating *another* ITS ([1008753](https://bugs.debian.org/1008753), trocla) and I was
-trying to figure out how to write the email. Then I remembered: "hey
-wait, I think I did this before!" followed by "oops, yes, I totally
-did this before and forgot for 9 months".
-
-So, not great. Also, the package is not in its best shape still. I was
-still able to upload the upstream version that was pending 1.5.0 to
-clear out the ITS, at least. And then there's already two new upstream
-releases to upload, so I pushed 1.7.0 to experimental as well, for
-good measure.
+the ITS, even after the maintainer reminded me of it. See, the thing
+is the ITS doesn't show up on my [dashboard](https://qa.debian.org/developer.php?login=anarcat@debian.org) at all. So I totally
+forgot about it (yes, twice).
+
+In fact, the only reason I remembered it was that I got into the process
+of formulating *another* ITS ([1008753](https://bugs.debian.org/1008753), trocla) and I was trying
+to figure out how to write the email. Then I remembered: "*hey wait, I
+think I did this before!*" followed by "*oops, yes, I totally did this
+before and forgot for 9 months*".
+
+So, not great. Also, the package is still not in perfect shape. I
+was able to upload the upstream version that was pending 1.5.0 to
+clear out the ITS, basically. And then there's already two new
+upstream releases to upload, so I pushed 1.7.0 to experimental as
+well, for good measure.
 
 Unfortunately, I still can't enable tests because [everything is on
 fire](https://bugs.debian.org/1008768), as usual.
@@ -98,20 +115,21 @@ But at least my kingdom is growing.
 Just in case someone didn't get the metaphor, I'm not a monarchist
 promoting feudalism as a practice to manage a community. I do not
 intend to really "grow my kingdom" and I think the culture around
-"property" of "packages" is kind of absurd in Debian, and I wish it
-would go away. 
+"property" of "packages" is kind of absurd in Debian. I kind of wish
+it would go away.
 
-Team maintenance, the [LowNMU](https://wiki.debian.org/LowThresholdNmu) process, and [low threshold
+[Team maintenance](https://wiki.debian.org/Teams), the [LowNMU](https://wiki.debian.org/LowThresholdNmu) process, and [low threshold
 adoption](https://wiki.debian.org/LowThresholdAdoption) processes are all steps in the right direction, but they
 are all *opt in*. At least the package salvaging process is somewhat a
-little more ... uh... coercitive? Or at least it allows the community
+little more ... uh... coercive? Or at least it allows the community
 to step in and do the right thing, in a sense.
 
-We'll see what happens with the wars around the tech committee
-next. (Hint: our next drama is called "usrmerge".) Hopefully, LWN will
-make a brilliant article to sum it up for us so that I don't have to
-go through the debian-devel flamewar to figure it out. I already
-wreaked havoc on the `#debian-devel` IRC channel asking newbie
-questions so I won't stir that mud any further for now.
+We'll see what happens with the coming wars around the tech committee,
+which are bound to touch on that topic. (Hint: our next drama is
+called "usrmerge".) Hopefully, [LWN](https://lwn.net/) will make a brilliant article
+to sum it up for us so that I don't have to go through the inevitable
+debian-devel flamewar to figure it out. I already wreaked havoc on the
+`#debian-devel` IRC channel asking newbie questions so I won't stir
+that mud any further for now.
 
-[[!tag draft]]
+[[!tag debian-planet debian packaging]]

salvaged my first package, woohoo
diff --git a/blog/2022-03-31-first-package-salvaged.md b/blog/2022-03-31-first-package-salvaged.md
new file mode 100644
index 00000000..474c9418
--- /dev/null
+++ b/blog/2022-03-31-first-package-salvaged.md
@@ -0,0 +1,117 @@
+[[!meta title="Salvaged my first Debian package"]]
+
+I finally salvaged my first Debian package, [python-invoke](https://tracker.debian.org/pkg/python-invoke). As
+part of [ITS 964718](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964718), I moved the package from the Openstack Team
+to the [Python team](https://wiki.debian.org/Teams/PythonTeam). The Python team might not be super happy with
+it, because it's breaking some of its rules, but at least someone
+(i.e. me) is actively working on (and using) the package.
+
+[[!toc]]
+
+# Wait what
+
+People not familiar with Debian will not understand *anything* in that
+first paragraph, so let me expand. Know-it-all Debian developers (you
+know who you are) can skip to the next section.
+
+Traditionally, the [Debian project](https://debian.org/) (my Linux-based operating
+system of choice) has prided itself on the self-managed, anarchistic
+organisation of its packaging. Each package maintainer is the lord of
+his little kingdom. Some maintainers like to accumulate lots of
+kingdoms to rule over.
+
+It's *really* hard to make package maintainers do something they don't
+want. Typically, when things go south, someone makes a complaint to
+the [Debian Technical Committee](https://www.debian.org/devel/tech-ctte) (CTTE) which is established by the
+[Debian constitution](https://www.debian.org/devel/constitution) to resolve such conflicts. The committee is
+appointed by the Debian Project leader, himself elected each year (and
+there's an [election coming up](https://www.debian.org/vote/2022/vote_002) if you haven't heard).
+
+Typically, the CTTE will then vote on the bug and formulate a
+decision. But here's the trick: maintainers are still free to do
+whatever they want after that, in a sense. It's not like the CTTE can
+just break down doors and force maintainers to type code. (I won't go
+into the details of the *why* of that, but it involves legal issues
+and, I think, something about the Turing halting problem.)
+
+Anyways. Point is all that is super heavy and no one wants to go
+there. And sometimes, packages just get lost. Maintainers get
+distracted, or busy with something else. It's not that they *want* to
+abandon their packages. They love their little fiefdoms. It's just
+there was a famine or a war or something and everyone died, and they
+have better things to do than put up fences or whatever.
+
+So clever people in Debian found a better way (than the CTTE) of
+handling such problems. It's called the [Package Salvaging](https://wiki.debian.org/PackageSalvaging)
+process. Through that process, a maintainer can propose to take over
+an existing package from another maintainer, if a [certain set of
+conditions are met and a specific process is followed](https://www.debian.org/doc/manuals/developers-reference/pkgs.en.html#package-salvaging).
+
+Normally, taking over another's package is basically a war
+declaration, rarely seen in the history of Debian (yes, I do think it
+happened though), as rowdy as ours is. But through this process, it
+seems we have found a fair way of going forward.
+
+The process is basically like this:
+
+ 1. file a bug proposing the change
+ 2. wait three weeks
+ 3. upload a package making the change, with another week delay
+ 4. you now have one more package to worry about
+
+# So how did that go?
+
+It went well!
+
+The old maintainer was [actually fine with the change](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964718#10) because
+their team wasn't using the package anymore anyways. He asked to be
+kept as an uploader, which I was glad to oblige. 
+
+(He replied a few months after the deadline, but I wasn't in a rush
+anyways, so that doesn't matter. It was polite for him to answer, even
+if, technically, I was already allowed to take it over.)
+
+What happened next is less shiny for me though. I totally forgot about
+the ITS. See, the thing is the ITS doesn't show up on my
+[dashboard](https://qa.debian.org/developer.php?login=anarcat@debian.org) at all. So I totally forgot about it until very
+recently.
+
+In fact, the only reason I found out about it was that I got into the
+process of formulating *another* ITS ([1008753](https://bugs.debian.org/1008753), trocla) and I was
+trying to figure out how to write the email. Then I remembered: "hey
+wait, I think I did this before!" followed by "oops, yes, I totally
+did this before and forgot for 9 months".
+
+So, not great. Also, the package is not in its best shape still. I was
+still able to upload the upstream version that was pending 1.5.0 to
+clear out the ITS, at least. And then there's already two new upstream
+releases to upload, so I pushed 1.7.0 to experimental as well, for
+good measure.
+
+Unfortunately, I still can't enable tests because [everything is on
+fire](https://bugs.debian.org/1008768), as usual.
+
+But at least my kingdom is growing.
+
+# Appendix
+
+Just in case someone didn't get the metaphor, I'm not a monarchist
+promoting feudalism as a practice to manage a community. I do not
+intend to really "grow my kingdom" and I think the culture around
+"property" of "packages" is kind of absurd in Debian, and I wish it
+would go away. 
+
+Team maintenance, the [LowNMU](https://wiki.debian.org/LowThresholdNmu) process, and [low threshold
+adoption](https://wiki.debian.org/LowThresholdAdoption) processes are all steps in the right direction, but they
+are all *opt in*. At least the package salvaging process is somewhat a
+little more ... uh... coercitive? Or at least it allows the community
+to step in and do the right thing, in a sense.
+
+We'll see what happens with the wars around the tech committee
+next. (Hint: our next drama is called "usrmerge".) Hopefully, LWN will
+make a brilliant article to sum it up for us so that I don't have to
+go through the debian-devel flamewar to figure it out. I already
+wreaked havoc on the `#debian-devel` IRC channel asking newbie
+questions so I won't stir that mud any further for now.
+
+[[!tag draft]]

approve comment
diff --git a/blog/2022-03-28-wtf-web-servers/comment_1_fce4ceace14d3c229e5780b7cf85a044._comment b/blog/2022-03-28-wtf-web-servers/comment_1_fce4ceace14d3c229e5780b7cf85a044._comment
new file mode 100644
index 00000000..81378def
--- /dev/null
+++ b/blog/2022-03-28-wtf-web-servers/comment_1_fce4ceace14d3c229e5780b7cf85a044._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ ip="89.160.156.189"
+ subject="comment 1"
+ date="2022-03-28T23:37:12Z"
+ content="""
+Cloudflare runs a modified nginx, so you might want to sum those numbers.
+"""]]

making stuff up with stats
diff --git a/blog/2022-03-28-wtf-web-servers.md b/blog/2022-03-28-wtf-web-servers.md
new file mode 100644
index 00000000..746eb8a1
--- /dev/null
+++ b/blog/2022-03-28-wtf-web-servers.md
@@ -0,0 +1,40 @@
+[[!meta title="What is going on with web servers"]]
+
+I stumbled upon [this graph recently](https://w3techs.com/technologies/history_overview/web_server/ms/y), which is [w3techs.com](https://w3techs.com/)'s
+graph of "Historical yearly trends in the usage statistics of web
+servers". It seems I hadn't looked at it in a long while because I was
+surprised at *many* levels:
+
+ 1. Apache is now second, behind Nginx, since ~2022 (so that's really
+    new at least)
+
+ 2. Cloudflare "server" is *third* ahead of the traditional third
+    (Microsoft IIS) - I somewhat knew that Cloudflare was hosting a
+    lot of stuff, but I somehow didn't expect to see it there at all
+    for some reason
+
+ 3. I had to look up what [LiteSpeed](https://en.wikipedia.org/wiki/LiteSpeed_Web_Server) was (and it's not a [bike
+    company](https://en.wikipedia.org/wiki/Litespeed)): it's a drop-in replacement (!?) of the Apache web
+    server (not a fork), a bizarre idea, but one which seems to be gaining
+    a lot of speed recently, possibly because of its support for QUIC
+    and HTTP/3.
+
+So there. I'm surprised. I guess the stats should be taken with a
+grain of salt because they only partially correlate with [Netcraft's
+numbers](https://news.netcraft.com/archives/2022/02/28/february-2022-web-server-survey.html) which barely mention LiteSpeed at all. (But they do point
+at a rising share as well.) 
+
+Netcraft also puts Nginx's first place earlier in time, around April
+2019, which is about when Microsoft's IIS took a massive plunge
+down. That is another thing that doesn't map with w3techs' graphs at
+all either.
+
+So it's all [lies and statistics](https://en.wikipedia.org/wiki/Lies%2C_damned_lies%2C_and_statistics), basically. Moving on.
+
+Oh and of course, the two most popular web servers, regardless
+of the source, are packaged in Debian. So while we're working on
+statistics and just making stuff up, I'm going to go ahead and claim
+all of this stuff runs on Linux and that [BSD is dead](https://en.wikipedia.org/wiki/Netcraft_confirms_it). Or
+something like that.
+
+[[!tag web history stats debian-planet python-planet]]

quotes
diff --git a/fortunes.txt b/fortunes.txt
index baa5aa7a..a1c48788 100644
--- a/fortunes.txt
+++ b/fortunes.txt
@@ -1186,3 +1186,8 @@ Only after disaster can we be resurrected.
 It's only after you've lost everything that you're free to do anything.
 Nothing is static, everything is evolving, everything is falling apart.
                         - Chuck Palahniuk, Fight Club
+%
+Do not use your energy to worry.
+Use your energy to believe, to create, to learn, to think, and to grow.
+                        - Richard Feynman
+

approve comment
diff --git a/blog/2022-03-20-20-years-emacs/comment_1_5659a3090176ecb986eaa3480bf53e0b._comment b/blog/2022-03-20-20-years-emacs/comment_1_5659a3090176ecb986eaa3480bf53e0b._comment
new file mode 100644
index 00000000..578d218b
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_1_5659a3090176ecb986eaa3480bf53e0b._comment
@@ -0,0 +1,34 @@
+[[!comment format=mdwn
+ ip="45.80.168.93"
+ claimedauthor="dac.override"
+ subject="2022-03-20-20-years-emacs"
+ date="2022-03-24T15:13:42Z"
+ content="""
+Thanks.
+
+There are always trade-offs involved but here are two examples of how I (try to) enhance integrity of Emacs with SELinux.
+
+Only gnupg clients (and I guess your login shell) should (ideally) ever access \"GNUPGHOME\" and the various sock files etc. Emacs has GPG support but it (AFAIK) always runs Gnupg. By allowing Emacs to run Gnupg with what is called a \"domain transition\" from Emacs' \"domain\" to the Gnupg \"domain you can effectively tell Linux that only Gnupg should ever be able to access the aforementioned. That enhances integrity because now the Emacs \"domain\" no longer needs permissions to access Gnupg related entities.
+
+Same with for example SSH (things like Tramp/Git). Only SSH clients (and I guess your login shell) should ever access ~/.ssh. You can enforce a \"domain transition\" from the Emacs domain to the SSH client domain and tell Linux that only the SSH client domain should ever be able to access ~/.ssh. Emacs can execute the SSH client but \".ssh\" can only be accessed from SSH client domain (and thus effectively only by the SSH client).
+
+This is generally also how I enforce integrity of my Git repositories. I simply have a domain associated with the Git clients and tell SELinux that only the Git client domain should ever be able to access \".git\" metadata. But that obviously only works if all access to \".git\" goes via the Git client (and Magit needs some direct access and since that is Emacs' domain that means I would have to associate those permissions with the Emacs domain.
+
+(it gets a bit more complicated though since Emacs eventually needs to  be able to execute a shell on your behalf (M-x shell etc), and you probably want to be able to access GNUPGHOME from your shell)
+
+The thing with other more \"generic\" content is that it usually needs to be \"shareable\". That for example your ~/Documents. You might want to edit a document with Emacs. You might want to be able to read or store attachments from ~/Documents. So effectively both the Emacs domain as well as for example any MUA domain effectively needs access to \"shareable\" content.
+
+From that perspective it does not really matter whether you run Gnus or a external mail client (they both probably need access to \"generic\" content)
+
+This is what my Emacs SELinux security policy looks like:
+
+ https://git.defensec.nl/?p=dssp5.git;a=blob;f=src/agent/useragent/e/emacs.cil;h=ab2c0af79f5f8bc0473ae843d86ee8e9b96d00d8;hb=refs/heads/dssp5-debian
+
+Keep in mind though that there are always trade-offs to be made, but the same goes for many other applications. Even something like Firefox needs fairly broad access. You want to be able to read and write ~/Downloads for example (but you dont want it accessing ~/.gnupg, ~/.ssh, ~/Workspace/mystuff/.git or any other stuff that it should not have access to in any case.
+
+The difference between using SELinux versus built-in security, amongst other things,  is that SElinux just relies on a safe kernel whereas built-in security relies on the implementation itself and eventually you still need a safe kernel anyway.
+
+I am not saying that Emacs should not be working to improve security though. Just saying that both Emacs and Mutt probably need access to ~/Downloads eventually and they both probably need to be able to execute scripts on your behalf. From that perspective (at least) might as well run it in the same process.
+
+
+"""]]

clarify delegation and its limits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index d6164853..df9c58c3 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -153,11 +153,22 @@ network locations easily.
 
 ## Delegations
 
-TODO: .well-known and SRV records:
-https://github.com/matrix-org/synapse/blob/develop/docs/delegate.md
-https://github.com/matrix-org/synapse/issues/8982
-https://spec.matrix.org/v1.2/client-server-api/#server-discovery
-https://spec.matrix.org/v1.2/server-server-api/#server-discovery
+If you do not want to run a Matrix server yourself, it's possible to
+delegate the job to another server. There's a [server discovery
+API](https://spec.matrix.org/v1.2/server-server-api/#server-discovery) which uses the `.well-known` pattern (or SRV records, but
+that's "not recommended" and [a bit confusing](https://github.com/matrix-org/synapse/issues/8982)) to delegate that
+service to another server. Be warned that the server still needs to be
+explicitly configured for your domain. You can't just put:
+
+    { "m.server": "matrix.org:443" }
+
+... on `https://example.com/.well-known/matrix/server` and start using
+`@you:example.com` as a Matrix ID. That's because Matrix doesn't
+support "virtual hosting" and you'd still be connecting to rooms and
+people with your `matrix.org` identity.
+
+TODO: what's the difference between server-server and client-server API
+specs? e.g. why is there also <https://spec.matrix.org/v1.2/client-server-api/#server-discovery>?
 
 # Performance
 

add runt, thanks drjones
diff --git a/blog/2021-11-21-mbsync-vs-offlineimap.md b/blog/2021-11-21-mbsync-vs-offlineimap.md
index 9efce95a..b9f66edd 100644
--- a/blog/2021-11-21-mbsync-vs-offlineimap.md
+++ b/blog/2021-11-21-mbsync-vs-offlineimap.md
@@ -953,6 +953,9 @@ Those are all the options I have considered, in alphabetical order
  * [offlineimap3](https://github.com/OfflineIMAP/offlineimap3): requires IMAP, used the py2 version in the past,
    might just still work, first sync painful (IIRC), ways to tunnel
    over SSH, review above, Python
+ * [runt](https://github.com/mordak/runt): IMAP-to-maildir, Rust, IDLE and [QRESYNC support](https://github.com/djc/tokio-imap/pull/101#issuecomment-725551532),
+   can run as a daemon to monitor the filesystem (and server) for
+   changes
 
 Most projects were not evaluated due to lack of time.
 

response
diff --git a/blog/2022-03-20-20-years-emacs/comment_6_b2dcae10ac53d6fd07c9ca7045b7e017._comment b/blog/2022-03-20-20-years-emacs/comment_6_b2dcae10ac53d6fd07c9ca7045b7e017._comment
new file mode 100644
index 00000000..5bcb783e
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_6_b2dcae10ac53d6fd07c9ca7045b7e017._comment
@@ -0,0 +1,23 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""comment 6"""
+ date="2022-03-24T14:05:02Z"
+ content="""
+Hi dac.override, thanks for your comment and welcome here!
+
+> I'd be interested to read what solution you envision to the "sandboxing untrusted input" challenge. I am not convinced that using a separate MUA is all that compelling.
+
+The challenge is that, typically, on a UNIX system, process A cannot read memory from process B. You can break that with a debugger, for example, but it's possible to disable that as well. Web browsers also sandbox their processes so that tab A can't read from tab B as well.
+
+In other words, normally, a mail client (say `mutt`) cannot start editing my files. I can even write apparmor files that keep it from doing so at the kernel level.
+
+I can't do that with Emacs, because it's a single process which does *everything*. This means that if there's a compromise anywhere in my email client stack (and that stack is deep: message-mode, gnus, notmuch-emacs, notmuch...), then I can easily get attacked.
+
+> I do see an issue with Emacs (Magit) requiring direct access to ".git" rather than just do all business in ".git" via git-client, This means that I can;t effectively block Emacs from manipulating the git metadata and thus the integrity of my source codes but other than that practically speaking (and besides that is not Gnus fault I believe that a functional MUA needs fairly broad access (attachments, post processing of e-mails)
+
+Yes, but that access can be scoped quite restrictively. For example, my mail client should be able to access the `notmuch` binary (or its shared libraries, if it's fancier) and only my `Maildir` folder, and nothing else.
+
+> I am a Emacs/Gnus user myself and I use SELinux to enforce at least some integrity when it comes to Emacs (and the user session in general)
+
+I am deeply curious how you can manage that at all. Here I need Emacs to access all my files and basically start any process, not sure how I would sandbox it in any way?
+"""]]

approve comment
diff --git a/blog/2022-03-20-20-years-emacs/comment_1_9f29ffb8749cea17c9eb0ba06caf113b._comment b/blog/2022-03-20-20-years-emacs/comment_1_9f29ffb8749cea17c9eb0ba06caf113b._comment
new file mode 100644
index 00000000..c94ba9d7
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_1_9f29ffb8749cea17c9eb0ba06caf113b._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="45.80.168.93"
+ claimedauthor="dac.override"
+ subject="2022-03-20-20-years-emacs"
+ date="2022-03-24T08:46:42Z"
+ content="""
+I'd be interested to read what solution you envision to the \"sandboxing untrusted input\" challenge. I am not convinced that using a separate MUA is all that compelling.
+
+I do see an issue with Emacs (Magit) requiring direct access to \".git\" rather than just do all business in \".git\" via git-client, This means that I can;t effectively block Emacs from manipulating the git metadata and thus the integrity of my source codes but other than that practically speaking (and besides that is not Gnus fault I believe that a functional MUA needs fairly broad access (attachments, post processing of e-mails)
+
+I am a Emacs/Gnus user myself and I use SELinux to enforce at least some integrity when it comes to Emacs (and the user session in general)
+
+"""]]

fix tags, again
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
index b400cdce..f7bb2d10 100644
--- a/blog/2022-03-20-20-years-emacs.md
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -76,7 +76,7 @@ up.
    
    I already had `lsp-mode` partially setup in Emacs so I only had to
    do [this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
-   (because <kbd>s-l</kdb> or <kbd>mod</kbd> is used by my window
+   (because <kbd>s-l</kbd> or <kbd>mod</kbd> is used by my window
    manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp) and
    [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp).
 

fix tags
I can't believe I still have to do shit like this in 2022.
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
index 379709f2..b400cdce 100644
--- a/blog/2022-03-20-20-years-emacs.md
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -76,7 +76,7 @@ up.
    
    I already had `lsp-mode` partially setup in Emacs so I only had to
    do [this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
-   (because <kbd>s-l</kbd> or <kbd>mod</kdb> is used by my window
+   (because <kbd>s-l</kdb> or <kbd>mod</kbd> is used by my window
    manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp) and
    [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp).
 

more notes on LSP
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
index 8b8c10e0..379709f2 100644
--- a/blog/2022-03-20-20-years-emacs.md
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -65,6 +65,20 @@ up.
    package](https://elpy.readthedocs.io/en/latest/), but I never got around to it. And now it seems
    lsp-mode is uncool and I should really do eglot instead, and that
    doesn't help.
+   
+   **UPDATE**: I finally got tired and switched to `lsp-mode`. The
+   main reason for choosing it over eglot is that it's in Debian (and
+   eglot is not). (Apparently, eglot has more chance of being
+   upstreamed, "when it's done", but I guess I'll cross that bridge
+   when I get there.) `lsp-mode` feels slower than `elpy` but I
+   haven't done *any* of the [performance tuning](https://emacs-lsp.github.io/lsp-mode/page/performance/) and this will
+   improve even more with native compilation (see below).
+   
+   I already had `lsp-mode` partially setup in Emacs so I only had to
+   do [this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
+   (because <kbd>s-l</kbd> or <kbd>mod</kdb> is used by my window
+   manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp) and
+   [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp).
 
  * I am not using [projectile](https://projectile.mx/). It's on some of my numerous todo
    lists somewhere, surely. I suspect it's important to getting my

awesome gwolf, thanks
diff --git a/blog/2022-03-20-20-years-emacs/comment_1_b1cb7a7cf4d9852df683da403e6a4e3b._comment b/blog/2022-03-20-20-years-emacs/comment_1_b1cb7a7cf4d9852df683da403e6a4e3b._comment
new file mode 100644
index 00000000..cfcfb31c
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_1_b1cb7a7cf4d9852df683da403e6a4e3b._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""followup from an older Debian Developer"""
+ date="2022-03-24T00:49:22Z"
+ content="""
+... at least I like to think he's older, otherwise I would be even more jealous. Check this shit out:
+
+<https://gwolf.org/2022/03/long-long-long-live-emacs_after_39_years.html>
+"""]]

approve comment
diff --git a/blog/2022-03-20-20-years-emacs/comment_1_6a630d6b4162740087961d716910e317._comment b/blog/2022-03-20-20-years-emacs/comment_1_6a630d6b4162740087961d716910e317._comment
new file mode 100644
index 00000000..f675b367
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_1_6a630d6b4162740087961d716910e317._comment
@@ -0,0 +1,37 @@
+[[!comment format=mdwn
+ ip="69.156.160.174"
+ claimedauthor="sten"
+ subject="comment 3"
+ date="2022-03-23T00:43:12Z"
+ content="""
+Hi,  I really enjoyed this article!
+
+Interestingly something the Debian Emacsen Team convinced me of is (to summarise my understanding): Don't worry, backwards compat is integral to Emacs culture, and deprecation happens slowly, so Emacs is a good investment of time.  I had already decided that it was probably the most future-proof solution, and it was only a question of assessing how much maintenance work would be required.
+
+Is lsp-mode uncool? ;-)
+
+lsp-mode=98th percentile vs eglot's 89th, 
+
+  https://stable.melpa.org/#/lsp-mode
+  https://stable.melpa.org/#/eglot
+
+
+plus allegedly still more interest, growth, and activity,
+
+  https://www.libhunt.com/compare-eglot-vs-lsp-mode
+
+
+In 2020 eglot didn't work with TRAMP (I'm not sure if it does now)
+
+https://www.reddit.com/r/emacs/comments/do2z6y/i_am_moving_from_lspmode_to_eglot/f5jyury/?utm_source=reddit&utm_medium=web2x&context=3
+
+
+It also sounds like lsp-mode is easier to get working, but that said, if your UI and workflow preferences are still the same as I remember they are, then you'll probably feel like disabling a bunch of stuff in lsp-mode.  If TRAMP compatibility isn't important, then maybe you'd enjoy testing eglot more?
+
+Re: projectile, I wonder if Emacs 27's project.el will be able to catch up?
+
+Re: buffer isolation and security, I think this is currently still blocked by lack of \"real/proper\" threading.  If Emacs 28 has these then I'll agree that a *big* change has occurred, otherwise I think we're still using an editor that is layers of abstraction away from an emulation of a single-threaded 1970s LISP machine :-p
+
+https://www.reddit.com/r/emacs/comments/oclycy/is_emacs_multi_threaded_presently/
+https://www.reddit.com/r/emacs/comments/n13v5l/what_is_the_next_big_feature_after_native_comp/
+"""]]

approve comment
diff --git a/blog/2022-03-20-20-years-emacs/comment_1_9ace528b8ddd96ba5843be32197572a2._comment b/blog/2022-03-20-20-years-emacs/comment_1_9ace528b8ddd96ba5843be32197572a2._comment
new file mode 100644
index 00000000..caae63c8
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_1_9ace528b8ddd96ba5843be32197572a2._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="213.219.154.49"
+ claimedauthor="Frans"
+ url="https://fransdejonge.com"
+ subject="comment 1"
+ date="2022-03-22T12:42:33Z"
+ content="""
+For what it's worth, I think Vivaldi has a good email client.
+"""]]
diff --git a/blog/2022-03-20-20-years-emacs/comment_1_be0aea52fc9708882551c3c591bee3e5._comment b/blog/2022-03-20-20-years-emacs/comment_1_be0aea52fc9708882551c3c591bee3e5._comment
new file mode 100644
index 00000000..5719d6b7
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs/comment_1_be0aea52fc9708882551c3c591bee3e5._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="76.127.148.229"
+ claimedauthor="Mark T. Kennedy"
+ subject="emacs since 1979(!)"
+ date="2022-03-22T01:36:40Z"
+ content="""
+move over, son :-).  and there are lots of us around and still kicking (coding).
+"""]]

defer to the wise
diff --git a/blog/2021-10-23-neo-colonial-internet.md b/blog/2021-10-23-neo-colonial-internet.md
index d01c05a9..15e8c8a2 100644
--- a/blog/2021-10-23-neo-colonial-internet.md
+++ b/blog/2021-10-23-neo-colonial-internet.md
@@ -568,5 +568,11 @@ emperors][].
 
 [christens emperors]: https://en.wikipedia.org/wiki/2016_United_States_presidential_election
 
+Update: a colleague pointed me at [this reading list][], which refers
+to a lot of interesting people far more qualified than me to speak
+about this topic. When in doubt, I defer to their wisdom.
+
+[this reading list]: https://beatricemartini.it/blog/decolonizing-technology-reading-list/
+
 [[!tag debian-planet reflexion politics tech diversity ethics internet history colonialism]]
 

mention (some) file managers
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 9e3b1991..a1edcf4b 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -245,6 +245,14 @@ have which involves an inventory physical books and directory of
 online articles. See also [[firefox]] (Zotero section) and
 [[services/bookmarks]] for a longer discussion of that problem.
 
+Finally, a file browser could simply act as a collection browser, as
+long as book covers were shown in parent folders. KDE's [Dolphin][]
+shows a preview of images inside a folder, but only one layer deep,
+so it doesn't really work for Calibre-style libraries, where each
+book has its own directory.
+
+[Dolphin]: https://apps.kde.org/dolphin/
+
 And this of course overlaps with another functionality that Calibre
 provides, which is that it's also... a web server!
 

move webserver section higher up, with the browser
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index e08fafeb..9e3b1991 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -10,10 +10,10 @@ TL;DR: I'm considering replacing those various [Calibre][] components with...
  * ebook editor: [Sigil][]
  * file converter: no good alternative
  * collection browser: [Liber][] or [trantor][]? see also [[services/bookmarks]]
+ * ebook web server: [Liber][]?
  * metadata editor: no good alternative
  * device synchronisation: [[Syncthing|hardware/tablet/kobo-clara-hd/#install-syncthing]]
  * RSS reader: [feed2exec][], [koreader][], [wallabako][]
- * ebook web server: [Liber][]?
 
 [git-annex]: https://git-annex.branchable.com/
 
@@ -245,78 +245,28 @@ have which involves an inventory physical books and directory of
 online articles. See also [[firefox]] (Zotero section) and
 [[services/bookmarks]] for a longer discussion of that problem.
 
-## Metadata editor
-
-The "collection browser" is based on a lot of metadata that Calibre
-indexes from the books. It can magically find a lot of stuff in the
-multitude of file formats it supports, something that is pretty
-awesome and impressive. For example, I just added a PDF file, and it
-found the book cover, author, publication date, publisher, language
-and the original mobi book id (!). It also added the book in the right
-directory and dumped that metadata and the cover in a file next to the
-book. And if that's not good enough, it can poll that data from
-various online sources like Amazon, and Google books.
-   
-Maybe the [work Peter Keel did][] could be useful in creating some
-tool which would do this automatically? Or maybe [Sigil][] could help?
-[Liber][] can also fetch metadata from Google books, but not
-interactively.
-   
-I still use Calibre mostly for this.
-
-[work Peter Keel did]: https://seegras.discordia.ch/Blog/life-with-calibre/
-
-## Device synchronization
-
-I used Calibre to synchronize books with an ebook reader (typically a
-Kobo). It can automatically update the database on the ebook with
-relevant metadata (e.g. collection or "shelves"), although I did not
-really use that feature. I did like to use Calibre to quickly search
-and prune books from my ebook reader, however.
-
-I considered using [git-annex][] for this, however, given that I
-already use it to synchronize and backup my ebook collection in the
-first place... But more recently I started just synchronising my
-entire book collection with [Syncthing][]. The collection is small
-enough to fit on an SD card if I judiciously ignore some parts with a
-local `.stignore` file. The details of that setup are in [[this blog
-post|hardware/tablet/kobo-clara-hd/#install-syncthing]].
-
-[Syncthing]: https://syncthing.net/
-
-## RSS reader
-
-I used this for a while to read RSS feeds on my ebook-reader, but it
-was pretty clunky. Calibre would be continuously generating new ebooks
-based on those feeds and I would never read them, because I would
-never find the time to transfer them to my ebook viewer in the first
-place. Instead, I use a regular RSS feed reader. I ended up writing my
-own, [feed2exec][], and when I find an article I like, I add it to
-[Wallabag][] which gets sync'd to my reader using [wallabako][],
-another tool I wrote.
-
-[Wallabag]: https://wallabag.org/en
-[wallabako]: https://gitlab.com/anarcat/wallabako
-[feed2exec]: https://feed2exec.rtfd.io/
+And this of course overlaps with another functionality that Calibre
+provides, which is that it's also... a web server!
 
 ## ebook web server
 
-Calibre can also act as a web server, presenting your entire ebook
+Calibre can indeed also act as a web server, presenting your entire ebook
 collection as a website. It also supports acting as an OPDS directory,
 which is kind of neat. There are, as far as I know, no alternatives for
 such a system although there *are* servers to share and store ebooks,
-like [Trantor][] or [Liber][]. Unfortunately, neither supports OPDS,
-which is too bad: that protocol is quite useful to browse books on the
-fly from hacked Kobo readers (running [Koreader][], but [not
-Plato][]) or Android devices (running [Document Viewer][] or
-Koreader)... There is an OPDS [test server][], see also my [2016
-analysis][].
-
-There is a web interface called [calibre-web](https://github.com/janeczku/calibre-web) that seems
-independent from the Calibre project and talks directly to the
-database using SQLAlchemy. It does use calibre components to convert
-books but it might be an interesting alternative to the web interface
-shipped with Calibre.
+like [Trantor][] or [Liber][].
+
+Unfortunately, neither of those supports OPDS, which is too bad: that
+protocol is quite useful to browse books on the fly from hacked Kobo
+readers (running [Koreader][], but [not Plato][]) or Android devices
+(running [Document Viewer][] or Koreader)... There is an OPDS [test
+server][], see also my [2016 analysis][].
+
+Other alternatives include a web interface called [calibre-web](https://github.com/janeczku/calibre-web)
+that seems independent from the Calibre project and talks directly to
+the database using SQLAlchemy. It does use calibre components to
+convert books but it might be an interesting alternative to the web
+interface shipped with Calibre.
 
 [readarr][] ("arr" stands for "aaargh C#/Windows!") and [Ubooquity][]
 (... Java) are things as well, neither of which is packaged in Debian.
@@ -396,6 +346,60 @@ owned by a shared group and writable:
 I also added that in `.git/hooks/post-checkout` for my future self,
 although `git-annex` might overwrite that eventually...
 
+## Metadata editor
+
+The "collection browser" is based on a lot of metadata that Calibre
+indexes from the books. It can magically find a lot of stuff in the
+multitude of file formats it supports, something that is pretty
+awesome and impressive. For example, I just added a PDF file, and it
+found the book cover, author, publication date, publisher, language
+and the original mobi book id (!). It also added the book in the right
+directory and dumped that metadata and the cover in a file next to the
+book. And if that's not good enough, it can poll that data from
+various online sources like Amazon, and Google books.
+   
+Maybe the [work Peter Keel did][] could be useful in creating some
+tool which would do this automatically? Or maybe [Sigil][] could help?
+[Liber][] can also fetch metadata from Google books, but not
+interactively.
+   
+I still use Calibre mostly for this.
+
+[work Peter Keel did]: https://seegras.discordia.ch/Blog/life-with-calibre/
+
+## Device synchronization
+
+I used Calibre to synchronize books with an ebook reader (typically a
+Kobo). It can automatically update the database on the ebook with
+relevant metadata (e.g. collection or "shelves"), although I did not
+really use that feature. I did like to use Calibre to quickly search
+and prune books from my ebook reader, however.
+
+I considered using [git-annex][] for this, however, given that I
+already use it to synchronize and backup my ebook collection in the
+first place... But more recently I started just synchronising my
+entire book collection with [Syncthing][]. The collection is small
+enough to fit on an SD card if I judiciously ignore some parts with a
+local `.stignore` file. The details of that setup are in [[this blog
+post|hardware/tablet/kobo-clara-hd/#install-syncthing]].
+
+[Syncthing]: https://syncthing.net/
+
+## RSS reader
+
+I used this for a while to read RSS feeds on my ebook-reader, but it
+was pretty clunky. Calibre would be continuously generating new ebooks
+based on those feeds and I would never read them, because I would
+never find the time to transfer them to my ebook viewer in the first
+place. Instead, I use a regular RSS feed reader. I ended up writing my
+own, [feed2exec][], and when I find an article I like, I add it to
+[Wallabag][] which gets sync'd to my reader using [wallabako][],
+another tool I wrote.
+
+[Wallabag]: https://wallabag.org/en
+[wallabako]: https://gitlab.com/anarcat/wallabako
+[feed2exec]: https://feed2exec.rtfd.io/
+
 ## Other functionality and future thoughts
 
 Note that I might have forgotten functionality in Calibre in the above
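The OPDS protocol discussed in the hunk above is just Atom XML, so a
minimal catalog client fits in a few lines. Here is a sketch in
Python using only the standard library; the sample feed, book title
and paths are invented for illustration, not real calibre-server
output:

```python
# Sketch of an OPDS catalog client. An OPDS feed is plain Atom XML,
# so xml.etree is enough to list books and their download links.
# The sample feed below is invented, not real server output.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

sample = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>My library</title>
  <entry>
    <title>Some Book</title>
    <link rel="http://opds-spec.org/acquisition"
          href="/get/epub/1" type="application/epub+zip"/>
  </entry>
</feed>"""

def list_books(xml_text):
    """Return (title, href) pairs for each acquisition link in the feed."""
    root = ET.fromstring(xml_text)
    books = []
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title")
        for link in entry.findall(ATOM + "link"):
            # acquisition links point at the actual ebook file
            if "acquisition" in link.get("rel", ""):
                books.append((title, link.get("href")))
    return books

books = list_books(sample)  # [("Some Book", "/get/epub/1")]
```

A real client would fetch the feed over HTTP from the server's OPDS
endpoint and resolve the `href` values against the catalog URL.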

mention readarr
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 4bdfe990..e08fafeb 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -312,14 +312,16 @@ Plato][]) or Android devices (running [Document Viewer][] or
 Koreader)... There is an OPDS [test server][], see also my [2016
 analysis][].
 
-Update 2: there is a web interface called [calibre-web](https://github.com/janeczku/calibre-web) that seems
+There is a web interface called [calibre-web](https://github.com/janeczku/calibre-web) that seems
 independent from the Calibre project and talks directly to the
 database using SQLAlchemy. It does use calibre components to convert
 books but it might be an interesting alternative to the web interface
 shipped with Calibre.
 
-[Ubooquity][] is a thing as well.
+[readarr][] ("arr" stands for "aaargh C#/Windows!") and [Ubooquity][]
+(... Java) are things as well, neither of which is packaged in Debian.
 
+[readarr]: https://readarr.com/
 [Ubooquity]: https://vaemendis.net/ubooquity/
 [2016 analysis]:  https://github.com/wallabag/wallabag/issues/1253#issuecomment-204996640
 [test server]: http://feedbooks.github.io/opds-test-catalog/

syncthing killed calibre sync, whoohoo
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 8175877a..4bdfe990 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -6,13 +6,13 @@
 
 TL;DR: I'm considering replacing those various [Calibre][] components with...
 
- * ebook-viewer: using a Kobo or other ebook reader, possibly
-   [Atril][] or [MuPDF][] on the desktop?
- * ebook-editor: [Sigil][].
+ * ebook viewer: [koreader][], [Atril][] on the desktop
+ * ebook editor: [Sigil][]
+ * file converter: no good alternative
  * collection browser: [Liber][] or [trantor][]? see also [[services/bookmarks]]
- * metadata editor: no good alternative.
- * device synchronisation: [git-annex][]?
- * RSS reader: [feed2exec][], [wallabako][]
+ * metadata editor: no good alternative
+ * device synchronisation: [[Syncthing|hardware/tablet/kobo-clara-hd/#install-syncthing]]
+ * RSS reader: [feed2exec][], [koreader][], [wallabako][]
  * ebook web server: [Liber][]?
 
 [git-annex]: https://git-annex.branchable.com/
@@ -21,7 +21,6 @@ My biggest blockers that don't really have good alternatives are:
 
  * collection browser
  * metadata editor
- * device sync
 
 See below why and a deeper discussion on all the features.
 
@@ -267,15 +266,23 @@ I still use Calibre mostly for this.
 
 [work Peter Keel did]: https://seegras.discordia.ch/Blog/life-with-calibre/
 
-## Device synchronization tool
+## Device synchronization
 
-I mostly use Calibre to synchronize books with an ebook-reader. It can
-also automatically update the database on the ebook with relevant
-metadata (e.g. collection or "shelves"), although I do not really use
-that feature. I do like to use Calibre to quickly search and prune
-books from my ebook reader, however. I might be able to use
-[git-annex][] for this, however, given that I already use it to
-synchronize and backup my ebook collection in the first place...
+I used Calibre to synchronize books with an ebook reader (typically a
+Kobo). It can automatically update the database on the ebook with
+relevant metadata (e.g. collection or "shelves"), although I did not
+really use that feature. I did like to use Calibre to quickly search
+and prune books from my ebook reader, however.
+
+I considered using [git-annex][] for this, however, given that I
+already use it to synchronize and backup my ebook collection in the
+first place... But more recently I started just synchronising my
+entire book collection with [Syncthing][]. The collection is small
+enough to fit on a SD card if I judiciously ignore some parts with a
+local `.stignore` file. The details of that setup are in [[this blog
+post|hardware/tablet/kobo-clara-hd/#install-syncthing]].
+
+[Syncthing]: https://syncthing.net/
 
 ## RSS reader
 
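The local `.stignore` file mentioned in this change is just a list of
glob patterns, one per line, that Syncthing skips when syncing. An
invented example (not the actual file, just the general shape):

```text
// Syncthing .stignore -- illustrative patterns only, not the real file.
// Exclude bulky formats so the library fits on the SD card:
*.pdf
*.cbz
// Exclude a directory by name, anywhere in the tree:
Magazines
```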

move calibre setup under web
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 49cf7bf3..8175877a 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -321,40 +321,14 @@ shipped with Calibre.
 [Liber]: https://git.autistici.org/ale/liber
 [Trantor]: https://gitlab.com/trantor/trantor
 
-## Other functionality and future thoughts
-
-Note that I might have forgotten functionality in Calibre in the above
-list: I'm only listing the things I have used or am using on a regular
-basis. For example, you can have a USB stick with Calibre on it to
-carry the actual software, along with the book library, around on
-different computers, but I never used that feature.
-
-So there you go. It's a colossal task! And while it's great that
-Calibre does all those things, I can't help but think that it would be
-better if Calibre was split up in multiple components, each maintained
-separately. I would love to use *only* the document converter, for
-example. It's possible to do that on the commandline, but it still
-means I have the entire Calibre package installed.
+### calibre webserver setup
 
-Maybe a simple solution, from Debian's point of view, would be to
-split the package into multiple components, with the GUI and web
-servers packaged separately from the commandline converter. This way I
-would be able to install only the parts of Calibre I need and have
-limited exposure to other security issues. It would also make it
-easier to run Calibre headless, in a virtual machine or remote server
-for extra isolation, for example.
-
-[[!tag blog debian-planet python ebook python-planet python archive wallabako git-annex wallabag]]
-
-Update: this post generated some activity on Mastodon, [follow the
-conversation here or on your favorite Mastodon instance](https://social.weho.st/@anarcat/102917682883043910).
-
-Update 3: I ended up setting up calibre on the server side of things
-to have an OPDS directory to more easily transfer books from my
-e-reader, now that I have an Android tablet (running "Document Viewer"
-or "Koreader", both of which support OPDS), or Koreader on my Kobo
-(which works much better than before, thanks to NickelMenu). I setup
-the service using this `.service` file:
+I ended up setting up calibre on the server side of things to have an
+OPDS directory to more easily transfer books from my e-reader, now
+that I have an Android tablet (running "Document Viewer" or
+"Koreader", both of which support OPDS), or Koreader on my Kobo (which
+works much better than before, thanks to NickelMenu). I setup the
+service using this `.service` file:
 
     [Service]
     Type=simple
@@ -412,3 +386,31 @@ owned by a shared group and writable:
 
 I also added that in `.git/hooks/post-checkout` for my future self,
 although `git-annex` might overwrite that eventually...
+
+## Other functionality and future thoughts
+
+Note that I might have forgotten functionality in Calibre in the above
+list: I'm only listing the things I have used or am using on a regular
+basis. For example, you can have a USB stick with Calibre on it to
+carry the actual software, along with the book library, around on
+different computers, but I never used that feature.
+
+So there you go. It's a colossal task! And while it's great that
+Calibre does all those things, I can't help but think that it would be
+better if Calibre was split up in multiple components, each maintained
+separately. I would love to use *only* the document converter, for
+example. It's possible to do that on the commandline, but it still
+means I have the entire Calibre package installed.
+
+Maybe a simple solution, from Debian's point of view, would be to
+split the package into multiple components, with the GUI and web
+servers packaged separately from the commandline converter. This way I
+would be able to install only the parts of Calibre I need and have
+limited exposure to other security issues. It would also make it
+easier to run Calibre headless, in a virtual machine or remote server
+for extra isolation, for example.
+
+[[!tag blog debian-planet python ebook python-planet python archive wallabako git-annex wallabag]]
+
+Update: this post generated some activity on Mastodon, [follow the
+conversation here or on your favorite Mastodon instance](https://social.weho.st/@anarcat/102917682883043910).
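The hunk above only shows the first lines of that `.service`
file. For illustration, a complete calibre-server unit along those
lines could look like this; the paths, port and user here are
invented, not the author's actual configuration:

```ini
# Hypothetical sketch of a calibre-server systemd unit; the real file
# is only partially visible in the diff above. Paths, port and user
# are made up.
[Unit]
Description=calibre content server
After=network.target

[Service]
Type=simple
User=calibre
ExecStart=/usr/bin/calibre-server --port 8080 /srv/books
Restart=on-failure

[Install]
WantedBy=multi-user.target
```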

merge calibre-web with the other web stuff
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 4401e9fb..49cf7bf3 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -300,17 +300,24 @@ which is kind of neat. There are, as far as I know, no alternatives for
 such a system although there *are* servers to share and store ebooks,
 like [Trantor][] or [Liber][]. Unfortunately, neither supports OPDS,
 which is too bad: that protocol is quite useful to browse books on the
-fly from hacked Kobo readers (running
-[Koreader](http://koreader.rocks/), but [not
-Plato](https://github.com/baskerville/plato/issues/69)) or Android
-devices (running [Document
-Viewer](https://f-droid.org/packages/org.sufficientlysecure.viewer/)
-or Koreader)... There is an OPDS [test
-server](http://feedbooks.github.io/opds-test-catalog/), see also my
-[2016
-analysis](https://github.com/wallabag/wallabag/issues/1253#issuecomment-204996640). Update:
-[Ubooquity](https://vaemendis.net/ubooquity/) is a thing as well.
+fly from hacked Kobo readers (running [Koreader][], but [not
+Plato][]) or Android devices (running [Document Viewer][] or
+Koreader)... There is an OPDS [test server][], see also my [2016
+analysis][].
 
+Update 2: there is a web interface called [calibre-web](https://github.com/janeczku/calibre-web) that seems
+independent from the Calibre project and talks directly to the
+database using SQLAlchemy. It does use calibre components to convert
+books but it might be an interesting alternative to the web interface
+shipped with Calibre.
+
+[Ubooquity][] is a thing as well.
+
+[Ubooquity]: https://vaemendis.net/ubooquity/
+[2016 analysis]:  https://github.com/wallabag/wallabag/issues/1253#issuecomment-204996640
+[test server]: http://feedbooks.github.io/opds-test-catalog/
+[Document Viewer]: https://f-droid.org/packages/org.sufficientlysecure.viewer/
+[not Plato]: https://github.com/baskerville/plato/issues/69
 [Liber]: https://git.autistici.org/ale/liber
 [Trantor]: https://gitlab.com/trantor/trantor
 
@@ -342,12 +349,6 @@ for extra isoluation, for example.
 Update: this post generated some activity on Mastodon, [follow the
 conversation here or on your favorite Mastodon instance](https://social.weho.st/@anarcat/102917682883043910).
 
-Update 2: there is a web interface called [calibre-web](https://github.com/janeczku/calibre-web) that seems
-independent from the Calibre project and talks directly to the
-database using SQLAlchemy. It does use calibre components to convert
-books but it might be an interesting alternative to the web interface
-shipped with Calibre.
-
 Update 3: I ended up setting up calibre on the server side of things
 to have an OPDS directory to more easily transfer books from my
 e-reader, now that I have an Android tablet (running "Document Viewer"

split bullet list into sections to make them breathe
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 2cee39be..4401e9fb 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -117,52 +117,51 @@ exactly? What do I actually use Calibre for anyways?
 
 Calibre is...
 
- * an **ebook viewer**: Calibre ships with the [ebook-viewer][]
-   command, which allows one to browse a vast variety of ebook
-   formats. I *rarely use* this feature, since I read my ebooks on an
-   e-reader, on purpose. There is, besides, a good variety of
-   ebook-readers, on different platforms, that can replace Calibre
-   here:
-
-    * [Atril][], MATE's version of [Evince][], supports ePUBs (Evince
-      doesn't seem to), but fails to load certain ebooks (book #1459
-      for example)
-    * [Bookworm][] looks very promising, not in Debian ([883867][]), but [Flathub][flathub-bookworm]. scans books on exit,
-      and can take a loong time to scan an entire library (took 24+
-      hours here, and had to kill `pdftohtml` a few times) without a
-      progress bar. but has a nice library browser, although it looks
-      like covers are sorted randomly. search works okay,
-      however. unclear what happens when you add a book, it doesn't
-      end up in the chosen on-disk library.
-    * [Buka][] is another "ebook" manager written in Javascript, but
-      only supports PDFs for now.
-    * [coolreader][] is another alternative, [not yet in Debian
-      (#715470)][]
-    * Emacs (of course) supports ebooks through [nov.el][]
-    * [fbreader][] also supports ePUBs, but is much slower than all
-      those others, and turned proprietary so is unmaintained
-    * [Foliate][] looks gorgeous and is built on top of the ePUB.js
-      library, not in Debian, but [Flathub][flathub-foliate].
-    * [GNOME Books][] is interesting, but relies on the GNOME search
-      engine and doesn't find my books (and instead lots of other
-      garbage). it's been described as "basic" and "the least mature"
-      in [this OMG Ubuntu review][]
-    * [koreader][] is a good alternative reader for the Kobo devices
-      and now also has builds for Debian, but no Debian package
-    * [lucidor][] is a Firefox extension that can read and organize
-      books, but is not packaged in Debian either (although upstream
-      provides a .deb). It depends on older Firefox releases (or
-      "[Pale moon][]", a Firefox fork), see also the [[firefox]]
-      XULocalypse for details
-    * [MuPDF][] also reads ePUBs and is really fast, but the user
-      interface is extremely minimal, and copy-paste doesn't work so
-      well (think "Xpdf"). it also failed to load certain books (e.g.
-      1359), fails to render some tables (e.g. book 1608) and warns
-      about 3.0 ePUBs (e.g. book 1162)
-    * [Okular][] supports ePUBs when `okular-extra-backends` is
-      installed
-    * [plato][] is another alternative reader for Kobo readers, not in
-      Debian
+## ebook viewer
+
+Calibre ships with the [ebook-viewer][] command, which allows one to
+browse a vast variety of ebook formats. I *rarely use* this feature,
+since I read my ebooks on an e-reader, on purpose. There is, besides, a
+good variety of ebook-readers, on different platforms, that can
+replace Calibre here:
+
+ * [Atril][], MATE's version of [Evince][], supports ePUBs (Evince
+   doesn't seem to), but fails to load certain ebooks (book #1459 for
+   example)
+ * [Bookworm][] looks very promising, not in Debian ([883867][]), but
+   [Flathub][flathub-bookworm]. scans books on exit, and can take a
+   loong time to scan an entire library (took 24+ hours here, and had
+   to kill `pdftohtml` a few times) without a progress bar. but has a
+   nice library browser, although it looks like covers are sorted
+   randomly. search works okay, however. unclear what happens when you
+   add a book, it doesn't end up in the chosen on-disk library.
+ * [Buka][] is another "ebook" manager written in Javascript, but only
+   supports PDFs for now.
+ * [coolreader][] is another alternative, [not yet in Debian
+   (#715470)][]
+ * Emacs (of course) supports ebooks through [nov.el][]
+ * [fbreader][] also supports ePUBs, but is much slower than all those
+   others, and turned proprietary so is unmaintained
+ * [Foliate][] looks gorgeous and is built on top of the ePUB.js
+   library, not in Debian, but [Flathub][flathub-foliate].
+ * [GNOME Books][] is interesting, but relies on the GNOME search
+   engine and doesn't find my books (and instead lots of other
+   garbage). it's been described as "basic" and "the least mature" in
+   [this OMG Ubuntu review][]
+ * [koreader][] is a good alternative reader for the Kobo devices and
+   now also has builds for Debian, but no Debian package
+ * [lucidor][] is a Firefox extension that can read and organize books,
+   but is not packaged in Debian either (although upstream provides a
+   .deb). It depends on older Firefox releases (or "[Pale moon][]", a
+   Firefox fork), see also the [[firefox]] XULocalypse for details
+ * [MuPDF][] also reads ePUBs and is really fast, but the user
+   interface is extremely minimal, and copy-paste doesn't work so well
+   (think "Xpdf"). it also failed to load certain books (e.g.  1359),
+   fails to render some tables (e.g. book 1608) and warns about 3.0
+   ePUBs (e.g. book 1162)
+ * [Okular][] supports ePUBs when `okular-extra-backends` is installed
+ * [plato][] is another alternative reader for Kobo readers, not in
+   Debian
 
 [883867]: https://bugs.debian.org/883867
 [Evince]: https://wiki.gnome.org/Apps/Evince
@@ -186,120 +185,137 @@ Calibre is...
 [Atril]: https://tracker.debian.org/pkg/atril
 [ebook-viewer]: https://manpages.debian.org/ebook-viewer
 
- * an **ebook editor**: Calibre also ships with an [ebook-edit][]
-   command, which allows you to do all sorts of nasty things to your
-   ebooks. I have rarely used this tool, having found it hard to use
-   and not giving me the results I needed, in my use case (which was
-   to reformat ePUBs before publication). For this purpose, [Sigil][]
-   is a much better option, now packaged in Debian. There are also
-   various tools that render to ePUB: I often use the [Sphinx
-   documentation system][] for that purpose, and have been able to
-   produce ePUBs from LaTeX for some projects.
+## ebook editor
+
+Calibre also ships with an [ebook-edit][] command, which allows you to
+do all sorts of nasty things to your ebooks. I have rarely used this
+tool, having found it hard to use and not giving me the results I
+needed, in my use case (which was to reformat ePUBs before
+publication). For this purpose, [Sigil][] is a much better option, now
+packaged in Debian. There are also various tools that render to ePUB:
+I often use the [Sphinx documentation system][] for that purpose, and
+have been able to produce ePUBs from LaTeX for some projects.
 
 [Sphinx documentation system]: http://www.sphinx-doc.org/
 [Sigil]: https://sigil-ebook.com/
 [ebook-edit]: https://manpages.debian.org/ebook-edit
 
- * a **file converter**: Calibre can convert between many ebook
-   formats, to accommodate the various readers. In my experience, this
-   doesn't work very well: the layout is often broken and I have found
-   it's much better to find pristine copies of ePUB books than fight
-   with the converter. There are, however, very few alternatives to
-   this functionality, unfortunately.
+## File converter
+
+Calibre can convert between many ebook formats, to accommodate the
+various readers. In my experience, this doesn't work very well: the
+layout is often broken and I have found it's much better to find
+pristine copies of ePUB books than fight with the converter. There
+are, however, very few alternatives to this functionality,
+unfortunately.
  
- * a **collection browser**: this is the main functionality I would
-   miss from Calibre. I am constantly adding books to my library, and
-   Calibre does have this incredibly nice functionality of just
-   hitting "add book" and Just Do The Right Thing™ after
-   that. Specifically, what I like is that it:
+## Collection browser
+
+This is the main functionality I would miss from Calibre. I am
+constantly adding books to my library, and Calibre does have this
+incredibly nice functionality of just hitting "add book" and Just Do
+The Right Thing™ after that. Specifically, what I like is that it:
    
-   * sort, view, and search books in folders, per author, date,
-     editor, etc
-   * quick search is especially powerful
-   * allows downloading and editing metadata (like covers) easily
-   * track read/unread status (although that's a custom field *I* had
-     to add)
-
-   Calibre is, as far as I know, the only tool that goes so deep in
-   solving that problem. The [Liber][] web server, however, does
-   provide similar search and metadata functionality. It also supports
-   migrating from an existing Calibre database as it can read the
-   Calibre metadata stores. When no metadata is found, it fetches some
-   from online sources (currently Google Books).
-
-   One major limitation of Liber in this context is that it's solely
-   search-driven: it will not allow you to see (for example) the
-   "latest books added" or "browse by author". It also doesn't support
-   "uploading" books although it will incrementally pick up new books
-   added by hand in the library. It somewhat assumes Calibre already
-   exists, in a way, to properly curate the library and is more
-   designed to be a search engine and book sharing system between
-   liber instances. This is something that [trantor][] might be better
-   at, although it doesn't use the Calibre database, so it might not
-   have as good metadata...
-
-   This also connects with the more general "book inventory" problem I
-   have which involves an inventory physical books and directory of
-   online articles. See also [[firefox]] (Zotero section) and
-   [[services/bookmarks]] for a longer discussion of that problem.
-
- * a **metadata editor**: the "collection browser" is based on a lot
-   of metadata that Calibre indexes from the books. It can magically
-   find a lot of stuff in the multitude of file formats it supports,
-   something that is pretty awesome and impressive. For example, I
-   just added a PDF file, and it found the book cover, author,
-   publication date, publisher, language and the original mobi book id
-   (!). It also added the book in the right directory and dumped that
-   metadata and the cover in a file next to the book. And if that's
-   not good enough, it can poll that data from various online sources
-   like Amazon, and Google books.
+ * sort, view, and search books in folders, per author, date, editor,
+   etc
+ * quick search is especially powerful

(truncated diff)
setext/atx
diff --git a/software/desktop/calibre.mdwn b/software/desktop/calibre.mdwn
index 6270bf94..2cee39be 100644
--- a/software/desktop/calibre.mdwn
+++ b/software/desktop/calibre.mdwn
@@ -2,8 +2,7 @@
 
 [[!toc levels=2]]
 
-Summary
-=======
+# Summary
 
 TL;DR: I'm considering replacing those various [Calibre][] components with...
 
@@ -26,8 +25,7 @@ My biggest blocker that don't really have good alternatives are:
 
 See below why and a deeper discussion on all the features.
 
-Problems with Calibre
-=====================
+# Problems with Calibre
 
 [Calibre][] is an amazing software: it allows users to manage ebooks
 on your desktop and a multitude of ebook readers. It's used by Linux
@@ -74,6 +72,7 @@ However, it has had many problems over the years:
 [584334]: https://bugs.debian.org/584334
 [640026]: https://bugs.debian.org/640026
 [873795]: https://bugs.debian.org/873795
+
 * **Incomplete Python 3 support**. Because of this, Calibre 4.0 was
   [removed from Debian in 2019][] ([936270][]). Now there is a
   [port in progress][] which is going well: only the plugins and
@@ -83,6 +82,7 @@ However, it has had many problems over the years:
    backtracked on that position since then.
 
 [936270]: https://bugs.debian.org/936270
+
 Update: a previous version of that post claimed that all of Calibre
 had been removed from Debian. This was inaccurate, as the [Debian
 Calibre maintainer pointed out][]. What happened was [Calibre 4.0 was
@@ -110,8 +110,7 @@ amazing on the surface, but when you look underneath, it's a monster
 that is impossible to maintain, a liability that is just bound to
 cause more problems in the future.
 
-What does Calibre do anyways
-============================
+# What does Calibre do anyways
 
 So let's say I wanted to get rid of Calibre, what would that mean
 exactly? What do I actually use Calibre for anyways?

link to other firefox configuration resources
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 84e8b1dc..16a3cc73 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -343,6 +343,15 @@ I add some search engines that are misconfigured from [Mycroft](http://mycroftpr
 import my set of [Debian bookmarks](https://salsa.debian.org/debian/debian-bookmarks-shortcuts) for quick access to Debian
 resources.
 
+More similar projects:
+
+ * [arkenfox/user.js](https://github.com/arkenfox/user.js): "Firefox privacy, security and
+   anti-tracking: a comprehensive user.js template for configuration
+   and hardening"
+
+ * [SebastianSimon/firefox-omni-tweaks](https://github.com/SebastianSimon/firefox-omni-tweaks): "A script that disables
+   the clickSelectsAll behavior of Firefox, and more."
+
 # Remaining work
 
 My Firefox configuration is not fully automated yet. The `user.js`

setext/atx
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 736bd854..84e8b1dc 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -14,8 +14,7 @@ that procedure in [the Debian wiki](https://wiki.debian.org/Firefox#Using_snap).
 buster or later, the quantum version is available as a Debian package
 (now ESR too!) so those hacks are not necessary.
 
-Extensions
-==========
+# Extensions
 
 This section documents the [Firefox add-ons](https://addons.mozilla.org/) I am using, testing,
 or have used in the past.
@@ -195,9 +194,7 @@ hard to use or simply irrelevant.
 
 [it's all text!]: https://addons.mozilla.org/en-US/firefox/addon/its-all-text/
 
-
-Surviving the XULocalypse
-=========================
+# Surviving the XULocalypse
 
 I wasn't very affected by the "XULocalypse", or the removal of older
 "XUL" extensions from Firefox 60. My biggest blocker was [it's all
@@ -291,8 +288,7 @@ the removal of XUL/XPCOM from Firefox 57:
 It is unclear, however, whether those browsers will be sustainable in
 the long term.
 
-Configuration
-==============
+# Configuration
 
 I have set the following configuration options, in a `user.js` file
 that I version-control into git:
@@ -347,8 +343,7 @@ I add some search engines that are misconfigured from [Mycroft](http://mycroftpr
 import my set of [Debian bookmarks](https://salsa.debian.org/debian/debian-bookmarks-shortcuts) for quick access to Debian
 resources.
 
-Remaining work
-==============
+# Remaining work
 
 My Firefox configuration is not fully automated yet. The `user.js`
 hacks above only go so far. For example, the search engine override
@@ -368,8 +363,7 @@ sideloading](https://blog.mozilla.org/addons/2020/03/10/support-for-extension-si
 I miss the times when bookmarks were just that HTML file sitting in
 the profile directory...
 
-History
-=======
+# History
 
 I have been a long time user of the "Mozilla" family of web
 browsers. My first web browser (apart from [[!wikipedia lynx]]) was
@@ -404,8 +398,7 @@ Debian, so not really an option for me right now.
 So long story short, I use firefox now. It's nice to root for the
 [[!wikipedia Browser_wars desc="underdog"]] anyways.
 
-Remaining issues
-================
+# Remaining issues
 
 My remaining concerns with Firefox, right now, are:
 

spell check, some editing
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
index 78d01f36..8b8c10e0 100644
--- a/blog/2022-03-20-20-years-emacs.md
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -6,10 +6,10 @@ know for how long I've been using Emacs. It's lost in the mists of
 [[history|blog/2012-11-01-my-short-computing-history]]. If I would
 have to venture a guess, it was back in the "early days", which in
 that history is mapped around 1996-1997, when I installed my very own
-"PC" with [FreeBSD](https://freebsd.org/) 2.2.x and painstakenly managed to make
+"PC" with [FreeBSD](https://freebsd.org/) 2.2.x and painstakingly managed to make
 [XFree86](https://en.wikipedia.org/wiki/XFree86) run on it.
 
-Modelines. Those were the days... But I disgress.
+[Modelines](https://en.wikipedia.org/wiki/XFree86_Modeline). Those were the days... But I digress.
 
 # I am old...
 
@@ -19,13 +19,13 @@ may be born after that time. This means I'm at least significantly
 older than those people, to put things gently.
 
 Clever history nerds will notice that the commit is obviously fake:
-Git itself was not written before 2005. But ah-ah! I was already
-managing my home directory with [CVS](https://en.wikipedia.org/wiki/Concurrent_Versions_System) in 2001. I eventually
-converted this into git, and therefore you can see all my embarrassing
-history.
+[Git](https://en.wikipedia.org/wiki/Git) itself [did not exist until 2005](https://www.linuxjournal.com/content/git-origin-story). But *ah-ah*! I was
+already managing my home directory with [CVS](https://en.wikipedia.org/wiki/Concurrent_Versions_System) in 2001! I converted
+that repository into git some time in 2009, and therefore you can see
+all my embarrassing history, including changes from two decades ago.
 
 That includes my [first known .emacs file](https://gitlab.com/anarcat/emacs-d/-/raw/05ccd451a6db0d9f7acb3618925f0505bac225f1/.emacs) which is just bizarre to
-read right now: 200 lines, most of which is "customize" stuff.
+read right now: 200 lines, most of which are "customize" stuff.
 Compare with the [current, 1000+ lines init.el](https://gitlab.com/anarcat/emacs-d/-/raw/75446e8860b41214520c260cdd254e6fdeaaed51/init.el) which is also still
 kind of a mess, but actually shares very little with the original,
 thankfully.
@@ -34,7 +34,7 @@ All this to say that in those years (*decades*, really) of using
 Emacs, I have had a very different experience than `credmp` who wrote
 packages, sent patches, and got name dropping from other
 developers. My experience is just struggling to keep up with
-everything, in general, but particularly in Emacs.
+everything, in general, but also in Emacs.
 
 # ... and Emacs is too fast for me
 
@@ -42,67 +42,87 @@ It might sound odd to say, but Emacs is actually moving pretty fast
 right now. A lot of new packages are coming out, and I can hardly keep
 up.
 
-I am not using [org mode](https://orgmode.org/), but did use it for time (and task)
-tracking for a while (and for [invoicing too](https://github.com/anarcat/ledger-timetracking), funky stuff).
-
-I am not using [mu4e](https://www.djcbsoftware.nl/code/mu/), but maybe I'm using something better
-([notmuch](https://notmuchmail.org/)) and yes, I am reading my mail in Emacs, which I find
-questionable from a security perspective. (Sandboxing untrusted
-inputs? Anyone?)
-
-I *am* using [magit](https://magit.vc/), but only when coding, so I do end up using
-git on the command line quite a bit anyways.
-
-I *do* have [which-key](https://github.com/justbur/emacs-which-key) enabled, and reading about it reminded me I
-wanted to turn it off because it's kind of noisy and I never remember
-I can actually use it for anything. Or, in other words, I don't even
-remember the prefix key or, when I do, there's too many possible
-commands after for it to be useful.
-
-I haven't setup [lsp-mode](https://emacs-lsp.github.io/lsp-mode/), let alone [Eglot](https://github.com/joaotavora/eglot), which I just
-learned about reading the article. I thought I would be super shiny
-and cool by setting up LSP instead of the (dying?) [elpy package](https://elpy.readthedocs.io/en/latest/),
-but I never got around to it. And now it seems lsp-mode is uncool and
-I should really do eglot instead, and that doesn't help.
-
-I am not using [projectile](https://projectile.mx/). It's on some queue somewhere. I
-suspect it's important to getting my projects organised, but I still
-live halfway between the terminal and Emacs, so it's not quite clear
-what I would gain.
-
-I had to ask what [native compilation](https://www.emacswiki.org/emacs/GccEmacs) was or why it mattered the
-first time I heard of it. And when I saw it again in the article, I
-had to click through to remember. 
+ * I am not using [org mode](https://orgmode.org/), but did use it for time (and task)
+   tracking for a while (and for [invoicing too](https://github.com/anarcat/ledger-timetracking), funky stuff).
+
+ * I am not using [mu4e](https://www.djcbsoftware.nl/code/mu/), but maybe I'm using something better
+   ([notmuch](https://notmuchmail.org/)) and yes, I am reading my mail in Emacs, which I find
+   questionable from a security perspective. (Sandboxing untrusted
+   inputs? Anyone?)
+
+ * I *am* using [magit](https://magit.vc/), but only when coding, so I do end up using
+   git on the command line quite a bit anyways.
+
+ * I *do* have [which-key](https://github.com/justbur/emacs-which-key) enabled, and reading about it reminded
+   me I wanted to turn it off because it's kind of noisy and I never
+   remember I can actually use it for anything. Or, in other words, I
+   don't even remember the prefix key or, when I do, there's too many
+   possible commands after for it to be useful.
+
+ * I haven't setup [lsp-mode](https://emacs-lsp.github.io/lsp-mode/), let alone [Eglot](https://github.com/joaotavora/eglot), which I just
+   learned about reading the article. I thought I would be super shiny
+   and cool by setting up LSP instead of the (dying?) [elpy
+   package](https://elpy.readthedocs.io/en/latest/), but I never got around to it. And now it seems
+   lsp-mode is uncool and I should really do eglot instead, and that
+   doesn't help.
+
+ * I am not using [projectile](https://projectile.mx/). It's on some of my numerous todo
+   lists somewhere, surely. I suspect it's important to getting my
+   projects organised, but I still live halfway between the terminal
+   and Emacs, so it's not quite clear what I would gain.
+
+ * I had to ask what [native compilation](https://www.emacswiki.org/emacs/GccEmacs) was or why it mattered
+   the first time I heard of it. And when I saw it again in the
+   article, I had to click through to remember.
+
+Overall, I feel there's a lot of cool stuff in Emacs out there. But I
+can't quite tell which is the best of them. I can barely remember which
+completion mechanism I use (company, maybe?) or what makes my
+mini-buffer completion work the way it does. Everything is lost in
+piles of customize and `.emacs` hacks that are constantly
+changing. Because a lot is in third-party packages, there are often
+many different options and it's hard to tell which one we should be
+using.
 
 # ... or at least fast enough
 
-Frankly, Emacs feels fast enough for me. Back in my days, I was
+And really, Emacs feels fast enough for me. When I started, I was
 running Emacs on a Pentium I, 166MHz, with 8MB of RAM (eventually
 upgraded to 32MB, whoohoo!). Back in those days, the joke was that
 EMACS was an acronym for "Eight Megs, Always Scratching" and now that
-I write this down, I realize it's actually "Eight Megs, Always
-Constantly Swapping", which doesn't sound as nice because you could
-actually *hear* Emacs running on those old clikety hard drives back in
-the days.
-
-So now Emacs is pretty far down the list of processes in `top(1)`
+I write this down, I realize it's actually "[Eight Megs, and
+Constantly Swapping](https://www.gnu.org/fun/jokes/gnuemacs.acro.exp.html)", which doesn't sound as nice because you
+could actually *hear* Emacs running on those old hard drives back in
+the days. It would make a "scratching" noise as the hard drive heads
+would scramble maniacally to swap pages in and out of swap to make
+room for the memory-hungry editor.
+
+Now Emacs is pretty far down the list of processes in `top(1)`
 regardless of how you look at it. It's using 97MB of resident memory
-and close to 400MB of virtual, which does sound like an awful lot
-compared to my first computer, but it's absolutely nothing compared to
-things like [Signal](https://signal.org/)-desktop, which somehow manages to map a
-whopping 20.5GB virtual memory. (That's twenty Gigabytes of memory for
-old timers or time travelers, and yes, that is now a thing.) I'm not
-exactly sure how much resident memory it uses, probably somewhere
-around 300MB of resident memory. Firefox also uses gigabytes of that
-good stuff, spread around the multiple processes that each tab takes.
-
-But all my old stuff still works in Emacs, amazingly. (Good luck with
-your old [Netscape](https://en.wikipedia.org/wiki/Netscape) or [ICQ](https://en.wikipedia.org/wiki/ICQ) configuration from 2000.) I feel
-like an oldie, using Emacs, but I'm really happy to see younger people
-using it, and learning it, and especially improving it. If anything,
-one direction I would like to see it go is closer to what web browsers
-are doing (yes, I know how bad that sounds) and get better isolation
-between tasks.
+and close to 400MB of virtual memory, which does sound like an awful
+lot compared to my first computer... But it's absolutely nothing
+compared to things like [Signal](https://signal.org/)-desktop, which somehow manages to
+map a whopping 20.5GB virtual memory. (That's twenty Gigabytes of
+memory for old timers or time travelers from the past, and yes, that
+is now a thing.) I'm not exactly sure how much resident memory it uses
+(because it forks multiple processes), probably somewhere around 300MB
+of resident memory. Firefox also uses gigabytes of that good stuff,
+spread across its multiple processes, per tab.
+
+Emacs "feels" super fast. Typing latency is noticeably better in Emacs
+than my web browser, and even [[beats most terminal
+emulators|blog/2018-05-04-terminal-emulators-2]]. It gets a little
+worse when font-locking is enabled, unfortunately, but it still
+feels much better.
+
+And all my old stuff still works in Emacs, amazingly. (Good luck with
+your old [Netscape](https://en.wikipedia.org/wiki/Netscape) or [ICQ](https://en.wikipedia.org/wiki/ICQ) configuration from 2000.)
+
+I feel like an oldie, using Emacs, but I'm really happy to see younger
+people using it, and learning it, and especially improving it. If
+anything, one direction I would like to see it go is closer to what
+web browsers are doing (yes, I know how bad that sounds) and get
+better isolation between tasks.
 
 An attack on my email client shouldn't be able to edit my Puppet code,
 and/or all files on my system, for example. And I know, fundamentally,
@@ -114,5 +134,4 @@ where we are now that there's an [Emacs Window Manager](https://github.com/ch11n
 Otherwise I'll have to find a new mail client, and that's really
 something I try to limit to once a decade or so.
 
-
 [[!tag debian-planet history emacs python-planet]]
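The resident/virtual memory comparison above comes from `top(1)`, but plain `ps(1)` shows the same numbers; a quick sketch (using the current shell's PID as a stand-in for the Emacs process):

```shell
# RSS is resident memory, VSZ is virtual memory, both in KiB.
# Substitute the Emacs PID for $$ to reproduce the figures above.
ps -o comm,rss,vsz -p $$
```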

emacs and i are old
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
new file mode 100644
index 00000000..78d01f36
--- /dev/null
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -0,0 +1,118 @@
+[[!meta title="20+ years of Emacs"]]
+
+I enjoyed reading [this article named "22 years of Emacs"](https://arjenwiersma.nl/writeups/emacs/22-years-of-emacs/)
+recently. It's kind of fascinating, because I realised I don't exactly
+know for how long I've been using Emacs. It's lost in the mists of
+[[history|blog/2012-11-01-my-short-computing-history]]. If I would
+have to venture a guess, it was back in the "early days", which in
+that history is mapped around 1996-1997, when I installed my very own
+"PC" with [FreeBSD](https://freebsd.org/) 2.2.x and painstakenly managed to make
+[XFree86](https://en.wikipedia.org/wiki/XFree86) run on it.
+
+Modelines. Those were the days... But I disgress.
+
+# I am old...
+
+The only formal timestamp I can put is that my rebuilt [.emacs.d git
+repository](https://gitlab.com/anarcat/emacs-d/) has its first commit in 2002. Some people reading this
+may be born after that time. This means I'm at least significantly
+older than those people, to put things gently.
+
+Clever history nerds will notice that the commit is obviously fake:
+Git itself was not written before 2005. But ah-ah! I was already
+managing my home directory with [CVS](https://en.wikipedia.org/wiki/Concurrent_Versions_System) in 2001. I eventually
+converted this into git, and therefore you can see all my embarrassing
+history.
+
+That includes my [first known .emacs file](https://gitlab.com/anarcat/emacs-d/-/raw/05ccd451a6db0d9f7acb3618925f0505bac225f1/.emacs) which is just bizarre to
+read right now: 200 lines, most of which is "customize" stuff.
+Compare with the [current, 1000+ lines init.el](https://gitlab.com/anarcat/emacs-d/-/raw/75446e8860b41214520c260cdd254e6fdeaaed51/init.el) which is also still
+kind of a mess, but actually shares very little with the original,
+thankfully.
+
+All this to say that in those years (*decades*, really) of using
+Emacs, I have had a very different experience than `credmp` who wrote
+packages, sent patches, and got name dropping from other
+developers. My experience is just struggling to keep up with
+everything, in general, but particularly in Emacs.
+
+# ... and Emacs is too fast for me
+
+It might sound odd to say, but Emacs is actually moving pretty fast
+right now. A lot of new packages are coming out, and I can hardly keep
+up.
+
+I am not using [org mode](https://orgmode.org/), but did use it for time (and task)
+tracking for a while (and for [invoicing too](https://github.com/anarcat/ledger-timetracking), funky stuff).
+
+I am not using [mu4e](https://www.djcbsoftware.nl/code/mu/), but maybe I'm using something better
+([notmuch](https://notmuchmail.org/)) and yes, I am reading my mail in Emacs, which I find
+questionable from a security perspective. (Sandboxing untrusted
+inputs? Anyone?)
+
+I *am* using [magit](https://magit.vc/), but only when coding, so I do end up using
+git on the command line quite a bit anyways.
+
+I *do* have [which-key](https://github.com/justbur/emacs-which-key) enabled, and reading about it reminded me I
+wanted to turn it off because it's kind of noisy and I never remember
+I can actually use it for anything. Or, in other words, I don't even
+remember the prefix key or, when I do, there's too many possible
+commands after for it to be useful.
+
+I haven't setup [lsp-mode](https://emacs-lsp.github.io/lsp-mode/), let alone [Eglot](https://github.com/joaotavora/eglot), which I just
+learned about reading the article. I thought I would be super shiny
+and cool by setting up LSP instead of the (dying?) [elpy package](https://elpy.readthedocs.io/en/latest/),
+but I never got around to it. And now it seems lsp-mode is uncool and
+I should really do eglot instead, and that doesn't help.
+
+I am not using [projectile](https://projectile.mx/). It's on some queue somewhere. I
+suspect it's important to getting my projects organised, but I still
+live halfway between the terminal and Emacs, so it's not quite clear
+what I would gain.
+
+I had to ask what [native compilation](https://www.emacswiki.org/emacs/GccEmacs) was or why it mattered the
+first time I heard of it. And when I saw it again in the article, I
+had to click through to remember. 
+
+# ... or at least fast enough
+
+Frankly, Emacs feels fast enough for me. Back in my days, I was
+running Emacs on a Pentium I, 166MHz, with 8MB of RAM (eventually
+upgraded to 32MB, whoohoo!). Back in those days, the joke was that
+EMACS was an acronym for "Eight Megs, Always Scratching" and now that
+I write this down, I realize it's actually "Eight Megs, Always
+Constantly Swapping", which doesn't sound as nice because you could
+actually *hear* Emacs running on those old clikety hard drives back in
+the days.
+
+So now Emacs is pretty far down the list of processes in `top(1)`
+regardless of how you look at it. It's using 97MB of resident memory
+and close to 400MB of virtual, which does sound like an awful lot
+compared to my first computer, but it's absolutely nothing compared to
+things like [Signal](https://signal.org/)-desktop, which somehow manages to map a
+whopping 20.5GB virtual memory. (That's twenty Gigabytes of memory for
+old timers or time travelers, and yes, that is now a thing.) I'm not
+exactly sure how much resident memory it uses, probably somewhere
+around 300MB of resident memory. Firefox also uses gigabytes of that
+good stuff, spread around the multiple processes that each tab takes.
+
+But all my old stuff still works in Emacs, amazingly. (Good luck with
+your old [Netscape](https://en.wikipedia.org/wiki/Netscape) or [ICQ](https://en.wikipedia.org/wiki/ICQ) configuration from 2000.) I feel
+like an oldie, using Emacs, but I'm really happy to see younger people
+using it, and learning it, and especially improving it. If anything,
+one direction I would like to see it go is closer to what web browsers
+are doing (yes, I know how bad that sounds) and get better isolation
+between tasks.
+
+An attack on my email client shouldn't be able to edit my Puppet code,
+and/or all files on my system, for example. And I know, fundamentally,
+that's a really hard challenge in Emacs. But if you're going to treat
+your editor as your operating system (or vice versa, I lost track of
+where we are now that there's an [Emacs Window Manager](https://github.com/ch11ng/exwm), which I do
+*not* use), at least we should get that kind of security.
+
+Otherwise I'll have to find a new mail client, and that's really
+something I try to limit to once a decade or so.
+
+
+[[!tag debian-planet history emacs python-planet]]

more matrix notes
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index cc0d5da7..d6164853 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -19,6 +19,34 @@ TODO
 TODO: vector.im privacy policy, GDPR compliance, metadata leaks,
 message expiry, discoverability of leakage.
 
+# Moderation
+
+In Matrix, like elsewhere, moderation is a hard problem. There is a
+[detailed moderation guide](https://matrix.org/docs/guides/moderation), but much of this problem space is
+actively worked on in Matrix right now. A fundamental problem with
+moderating a federated space is that a user banned from a room can
+rejoin the room from another server. This is why spam is such a
+problem in email, and why IRC networks stopped federating ages
+ago (see [the IRC history](https://en.wikipedia.org/wiki/Internet_Relay_Chat#History) for that fascinating story).
+
+The [mjolnir moderation bot](https://github.com/matrix-org/mjolnir) is designed to help with some of those
+things. It can kick and ban users, redact all of a user's messages (as
+opposed to one by one), all of this across multiple rooms. It can also
+subscribe to the federated block list, which is published by
+matrix.org to block known abusers (users or servers). It's a good idea
+to make the bot admin of your channels, because you can't take back
+admin from a user once given.
+
+Matrix doesn't have Tor- or VPN-specific moderation mechanisms. It has
+the concept of guest accounts, which is not much used, and virtually no
+client supports it. Matrix is like +R by default.
+
+TODO: rate limiting https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L833
+
+TODO: irc vs email vs mastodon federation / forks
+
+TODO: you can use the admin API to impersonate a room admin?
+
 # Availability
 
 While Matrix has a strong advantage over Signal in that it's
@@ -77,6 +105,28 @@ such an alias (in Element), you need to go in the room settings'
 name (e.g. `foo`), and then that room will be available on your
 `example.com` homeserver as `#foo:example.com`.)
 
+TODO: new users on certain can be added to a space automatically in
+synapse. existing users can be told about the space with a server
+notice. https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1387
+
+TODO: by default the room belongs to the public federation, anyone
+invited (or joins if public) can join from any server. spaces as
+directory. the room doesn't belong to a server. you can create a room
+on serverA, admin from serverB join and belong as admin. the room will
+be replicated on the two servers. if serverA falls, serverB will
+pick up. a room doesn't have an FQDN, it has a Matrix ID (basically
+a random number). it has a server name, but that's just to avoid
+collision. you can have server-specific aliases. each server needs to
+have admin.
+
+TODO: room namespaces eg #fractal:gnome.org (alias) room id is
+HASH:matrix.org room was created on matrix.org, but admins are on
+gnome.org... room is primarily a gnome room.
+
+TODO: [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) (no GUI for it), part of
+[MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") allows a room admin to close a
+room, with a message and a pointer to another room.
+
 ## Home server
 
 So while you can workaround a home server going down at the room
@@ -154,6 +204,29 @@ basically 2 round-trips.)
 Possible improvements to this include [support for websocket](https://github.com/matrix-org/matrix-doc/issues/1148) (to
 reduce latency and overhead) and the [CoAP proxy](https://matrix.org/docs/projects/iot/coap-proxy) work [from
 2019](https://matrix.org/blog/2019/03/12/breaking-the-100-bps-barrier-with-matrix-meshsim-coap-proxy) (which allows communication over 100bps links), both of which
-are stalled at the time of writing.
+seem stalled at the time of writing. The Matrix people have also
+[announced](https://matrix.org/blog/2021/05/06/introducing-the-pinecone-overlay-network) the [pinecone p2p overlay network](https://github.com/matrix-org/pinecone) which aims at
+solving the large, internet-scale routing problems Matrix is heading
+towards. See also [this talk at FOSDEM 2022](https://www.youtube.com/watch?v=diwzQtGgxU8&list=PLl5dnxRMP1hW7HxlJiHSox02MK9_KluLH&index=19).
+
+# Usability
+
+The workflow for joining a room, when you use Element web, is not
+great:
+
+ 1. click on a link in a web browser
+ 2. land on (say) <https://matrix.to/#/#matrix-dev:matrix.org>
+ 3. offers "Element", yeah that sounds great, let's click "Continue"
+ 4. land on
+    <https://app.element.io/#/room%2F%23matrix-dev%3Amatrix.org> and
+    then you need to register, aaargh
+
+As you might have guessed by now, there is a [proposed
+specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to solve this, but web browsers need to adopt it
+as well, so that's far from actually being solved.
+
+TODO: registration and discovery workflow compared with signal
+
+TODO: admin API https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html
 
 [[!tag draft]]
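As an aside on step 4 of the join workflow quoted above: the scary-looking Element URL is just the room alias percent-encoded, which the standard library can undo; a minimal sketch in Python:

```python
from urllib.parse import unquote

# The path fragment from the Element web URL in the join workflow:
# %2F is "/", %23 is "#", %3A is ":".
path = "room%2F%23matrix-dev%3Amatrix.org"
print(unquote(path))  # room/#matrix-dev:matrix.org
```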

mention the security issue is fixed in rxvt
diff --git a/blog/2018-04-12-terminal-emulators-1.mdwn b/blog/2018-04-12-terminal-emulators-1.mdwn
index 08abca35..7a2376a3 100644
--- a/blog/2018-04-12-terminal-emulators-1.mdwn
+++ b/blog/2018-04-12-terminal-emulators-1.mdwn
@@ -177,6 +177,11 @@ urxvt terminal, which simply prompts before allowing any paste with a
 newline character. I haven't found another terminal with such definitive
 protection against the attack described by Horn.
 
+Update: `confirm-paste` also has issues, because newline characters
+are not the only thing that can cause a command line to execute. For
+example, <kbd>control-o</kbd> also causes execution in bash. This was
+fixed in newer versions ([9.25](http://cvs.schmorp.de/rxvt-unicode/Changes?revision=1.1279&view=markup#l139)), which are available in bookworm.
+
 Tabs and profiles
 =================
 
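For reference, the prompt-before-paste behavior discussed in that update comes from urxvt's `confirm-paste` perl extension, which can be enabled with something like this in `~/.Xresources` (assuming a stock urxvt install):

```
! ~/.Xresources: enable urxvt's confirm-paste extension
URxvt.perl-ext-common: default,confirm-paste
```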

Archival link:

The above link creates a machine-readable RSS feed that can be used to easily archive new changes to the site. It is used by internal scripts to do sanity checks on new entries in the wiki.
