Recent changes to this wiki. Not to be confused with my history.

Complete source to the wiki is available on gitweb or by cloning this site.

mnt reform launched a new product
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index e024a55d..51a9053b 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -224,6 +224,18 @@ with a quad-core ARM CPU.
 There was a possibility for an e-ink screen and hot-swappable
 keyboard, but that was scrapped during production.
 
+Update: I haven't bought an MNT Reform, on two grounds:
+
+ * it's not very powerful, for the price
+ * it's bulky, so not ideal for a travel laptop (which is why I own a
+   laptop in the first place)
+
+That said, MNT has now launched a new product called the [MNT Pocket
+Reform](https://mntre.com/media/reform_md/2022-06-20-introducing-mnt-pocket-reform.html). Now it all makes sense: the MNT Reform parts can be
+reused in the Pocket, and it seems the original Reform was a good
+prototype for the end goal, a pocketable computer. That, then, becomes
+really interesting as a travel laptop. Maybe. :)
+
 Wootbook
 --------
 

zutty crash
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index e6769956..68a6b8b3 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -98,7 +98,7 @@ Those are the projects I am considering.
    port support, Sixel, Kitty, iTerm graphics, built-in SSH client (!?)
  * xterm
  * [zutty](https://github.com/tomszilagyi/zutty): OpenGL rendering, true color, clipboard support, small
-   codebase, no wayland support
+   codebase, no wayland support, [crashes on bremner's](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014080)
 
 Stay tuned for more happy days.
 

vagrant likes sakura, worth investigating
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index 3eab1cbb..e6769956 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -90,6 +90,8 @@ Those are the projects I am considering.
    search, true color, font resize, URLs not clickable, but
    keyboard-driven selection, proper clipboard support
  * [kitty](https://github.com/kovidgoyal/kitty)
+ * [sakura](https://www.pleyades.net/david/projects/sakura) - libvte, wayland support, tabs, no menu bar, original
+   libvte gangster, dynamic font size
  * [termonad](https://github.com/cdepillabout/termonad) - Haskell?
  * [wez](https://wezfurlong.org/wezterm/) - Rust, Wayland, multiplexer, ligatures, scrollback
    search, clipboard support, bracketed paste, panes, tabs, serial

link to an example of bridge hell
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index f588635c..2e082dcc 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -384,7 +384,7 @@ IRC works: by default, anyone can join an IRC network even without
 authentication. Some channels require registration, but in general you
 are free to join and look around (until you get blocked, of course).
 
-I have heard anecdotal evidence that "moderating bridges is hell", and
+I have [seen anecdotal evidence](https://twitter.com/matrixdotorg/status/1542065092048654337) (CW: Twitter, [nitter link](https://nitter.it/matrixdotorg/status/1542065092048654337)) that "moderating bridges is hell", and
 I can imagine why. Moderation is already hard enough on one
 federation, when you bridge a room with another network, you inherit
 all the problems from *that* network but without the entire abuse

fix typo, thanks jvoisin
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 9a40edf0..f588635c 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -375,7 +375,7 @@ register their own homeserver, which makes this limited.
 
 Server admins can block IP addresses and home servers, but those tools
 are not easily available to room admins. There is an API
-(`m.room.server_acl` in `/devtools`) but the it is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928)
+(`m.room.server_acl` in `/devtools`) but it is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928)
 (thanks Austin Huang for the clarification).
 
 Matrix has the concept of guest accounts, but it is not used very

add TODOs
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index 769bf2f6..3eab1cbb 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -100,4 +100,7 @@ Those are the projects I am considering.
 
 Stay tuned for more happy days.
 
+TODO: update https://gitlab.com/anarcat/terms-benchmarks
+TODO: cross-ref from previous articles
+
 [[!tag draft]]

more details
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index a5e16fe6..769bf2f6 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -18,18 +18,18 @@ https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)#Adoption
 
 In the previous article, I touched on those projects:
 
-| Terminal           | Changes since review                                                                                                                                                                                          |
-|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Alacritty][]      | releases! scrollback, better latency, URL launcher, clipboard support, still [not in Debian](http://bugs.debian.org/851639), but close                                                                        |
-| [GNOME Terminal][] | not much? couldn't find a changelog                                                                                                                                                                           |
-| [Konsole][]        | [not much](https://konsole.kde.org/changelog.html)                                                                                                                                                            |
-| [mlterm][]         | [long changelog](https://raw.githubusercontent.com/arakiken/mlterm/3.9.2/doc/en/ReleaseNote) but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!) |
-| [pterm][]          | [changes](https://www.chiark.greenend.org.uk/~sgtatham/putty/changes.html): Wayland support                                                                                                                   |
-| [st][]             | [unparseable changelog](https://git.suckless.org/st/), might include scrollback support through a third-party `scroll(1)` command I couldn't find                                                             |
-| [Terminator][]     | moved to GitHub, Python 3 support, not being dead                                                                                                                                                             |
-| [urxvt][]          | main rxvt fork, also known as rxvt-unicode                                                                                                                                                                    |
-| [Xfce Terminal][]  | uses GTK3, VTE                                                                                                                                                                                                |
-| [xterm][]          | the original X terminal                                                                                                                                                                                       |
+| Terminal           | Changes since review                                                                                                                                                                                                                           |
+|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Alacritty][]      | releases! scrollback, better latency, URL launcher, clipboard support, still [not in Debian](http://bugs.debian.org/851639), but close                                                                                                         |
+| [GNOME Terminal][] | not much? couldn't find a changelog                                                                                                                                                                                                            |
+| [Konsole][]        | [not much](https://konsole.kde.org/changelog.html)                                                                                                                                                                                             |
+| [mlterm][]         | [long changelog](https://raw.githubusercontent.com/arakiken/mlterm/3.9.2/doc/en/ReleaseNote) but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!)                                  |
+| [pterm][]          | [changes](https://www.chiark.greenend.org.uk/~sgtatham/putty/changes.html): Wayland support                                                                                                                                                    |
+| [st][]             | [unparseable changelog](https://git.suckless.org/st/), might include scrollback support through a third-party `scroll(1)` command I couldn't find                                                                                              |
+| [Terminator][]     | moved to GitHub, Python 3 support, not being dead                                                                                                                                                                                              |
+| [urxvt][]          | no significant changes, a single release, still in CVS!                                                                                                                                                                                        |
+| [Xfce Terminal][]  | [hard to parse changelog](https://gitlab.xfce.org/apps/xfce4-terminal/-/blob/master/NEWS), presumably some improvements to paste safety?                                                                                                       |
+| [xterm][]          | notoriously [hard to parse changelog](https://invisible-island.net/xterm/xterm.log.html), improvements to paste safety (`disallowedPasteControls`), fonts, [clipboard improvements](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=901249)? |
 
 [xterm]: http://invisible-island.net/xterm/
 [Xfce Terminal]: https://docs.xfce.org/apps/terminal/start
@@ -44,4 +44,60 @@ In the previous article, I touched on those projects:
 [GNOME Terminal]: https://wiki.gnome.org/Apps/Terminal
 [Alacritty]: https://github.com/jwilm/alacritty
 
+At this point I'm still using urxvt, bizarrely. I briefly played
+around with Konsole and xterm, but somehow reverted back to it.
+
+I would really, really like to like Alacritty, but it's still not
+packaged in Debian, and they [haven't fully addressed the latency
+issues](https://github.com/alacritty/alacritty/issues/673#issuecomment-658784144) although, to be fair, maybe it's just an impossible
+task. Once it's in Debian, maybe I'll reconsider.
+
+# Requirements
+
+Figuring out my requirements is actually a pretty hard thing to do. In
+my last reviews, I just tried a bunch of stuff and collected
+*everything*, but a lot of things (like tab support) I don't actually
+care about. So here's a set of things I actually do care about:
+
+ * latency
+ * resource usage
+ * proper clipboard support, that is:
+   * mouse selection and middle button uses PRIMARY
+   * <kbd>control-shift-c</kbd> and <kbd>control-shift-v</kbd> for
+     CLIPBOARD
+ * true color support
+ * no known security issues
+ * active project
+ * paste protection
+ * clickable URLs
+ * scrollback
+ * font resize
+ * non-destructive text-wrapping (ie. resizing a window doesn't drop
+   scrollback history)
+ * proper unicode support (at least latin-1, ideally "everything")
+ * good emoji support (at least showing them, ideally "nicely"), which
+   involves font fallbacks
+
+# Candidates
+
+Those are the projects I am considering.
+
+ * [alacritty][]
+ * [darktile](https://github.com/liamg/darktile) - GPU rendering, Unicode support, themable, ligatures
+   (optional), Sixel, window transparency, clickable URLs, true color
+   support
+ * [foot](https://codeberg.org/dnkl/foot) - Wayland only, daemon-mode, sixel images, scrollback
+   search, true color, font resize, URLs not clickable, but
+   keyboard-driven selection, proper clipboard support
+ * [kitty](https://github.com/kovidgoyal/kitty)
+ * [termonad](https://github.com/cdepillabout/termonad) - Haskell?
+ * [wez](https://wezfurlong.org/wezterm/) - Rust, Wayland, multiplexer, ligatures, scrollback
+   search, clipboard support, bracketed paste, panes, tabs, serial
+   port support, Sixel, Kitty, iTerm graphics, built-in SSH client (!?)
+ * xterm
+ * [zutty](https://github.com/tomszilagyi/zutty): OpenGL rendering, true color, clipboard support, small
+   codebase, no wayland support
+
+Stay tuned for more happy days.
+
 [[!tag draft]]

start documenting current state of terminal emulators
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
new file mode 100644
index 00000000..a5e16fe6
--- /dev/null
+++ b/blog/wayland-terminal-emulators.md
@@ -0,0 +1,47 @@
+Back in 2018, I made a [two part series](https://anarc.at/blog/2018-04-12-terminal-emulators-1/) about terminal emulators
+that was actually pretty painful to write. So I'm not going to retry
+this here, at all, especially since I'm not submitting this to the
+excellent [LWN editors](https://lwn.net/) so I can get away with not being very good
+at writing. Phew.
+
+Still, it seems my future self will thank me for collecting my
+thoughts on the terminal emulators I have found out about since I
+wrote that article. Back then, Wayland was not quite at the level
+where it is now, being the default in Fedora (2016), Debian (2019),
+Red Hat (2019), and Ubuntu (2021). Also, a bunch of folks thought they
+would solve everything by using OpenGL for rendering. Let's see how
+things stack up.
+
+https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)#Adoption
+
+# Recap
+
+In the previous article, I touched on those projects:
+
+| Terminal           | Changes since review                                                                                                                                                                                          |
+|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Alacritty][]      | releases! scrollback, better latency, URL launcher, clipboard support, still [not in Debian](http://bugs.debian.org/851639), but close                                                                        |
+| [GNOME Terminal][] | not much? couldn't find a changelog                                                                                                                                                                           |
+| [Konsole][]        | [not much](https://konsole.kde.org/changelog.html)                                                                                                                                                            |
+| [mlterm][]         | [long changelog](https://raw.githubusercontent.com/arakiken/mlterm/3.9.2/doc/en/ReleaseNote) but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!) |
+| [pterm][]          | [changes](https://www.chiark.greenend.org.uk/~sgtatham/putty/changes.html): Wayland support                                                                                                                   |
+| [st][]             | [unparseable changelog](https://git.suckless.org/st/), might include scrollback support through a third-party `scroll(1)` command I couldn't find                                                             |
+| [Terminator][]     | moved to GitHub, Python 3 support, not being dead                                                                                                                                                             |
+| [urxvt][]          | main rxvt fork, also known as rxvt-unicode                                                                                                                                                                    |
+| [Xfce Terminal][]  | uses GTK3, VTE                                                                                                                                                                                                |
+| [xterm][]          | the original X terminal                                                                                                                                                                                       |
+
+[xterm]: http://invisible-island.net/xterm/
+[Xfce Terminal]: https://docs.xfce.org/apps/terminal/start
+[urxvt]: http://software.schmorp.de/pkg/rxvt-unicode.html
+[Terminator]: https://github.com/gnome-terminator/terminator/
+[st]: https://st.suckless.org/
+[PuTTY]: https://www.chiark.greenend.org.uk/%7Esgtatham/putty/
+[pterm]: https://manpages.debian.org/pterm
+[mlterm]: http://mlterm.sourceforge.net/
+[Konsole]: https://konsole.kde.org/
+[VTE]: https://github.com/GNOME/vte
+[GNOME Terminal]: https://wiki.gnome.org/Apps/Terminal
+[Alacritty]: https://github.com/jwilm/alacritty
+
+[[!tag draft]]

fix some typos, thanks marvil!
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index b40b3fba..9a40edf0 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -443,7 +443,7 @@ On IRC, it's quite easy to setup redundant nodes. All you need is:
 That's it: the node will join the network and people can connect to it
 as usual and share the same user/namespace as the rest of the
 network. The servers take care of synchronizing state: you do not need
-about replicating a database server.
+to worry about replicating a database server.
 
 (Now, experienced IRC people will know there's a catch here: IRC
 doesn't have authentication built in, and relies on "services" which
@@ -591,7 +591,7 @@ Matrix itself, but let's now dig into that.
 There were serious scalability issues of the main Matrix server,
 [Synapse](https://github.com/matrix-org/synapse/), in the past. So the Matrix team has been working hard to
 improve its design. Since Synapse 1.22 the home server can
-horizontally to multiple workers (see [this blog post for details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability))
+horizontally scale to multiple workers (see [this blog post for details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability))
 which can make it easier to scale large servers.
 
 ## Other implementations

note some GDPR research
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 9b372ab1..b40b3fba 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -112,6 +112,10 @@ Also keep in mind that, in the brave new [peer-to-peer](https://github.com/matri
 Matrix is heading towards, the boundary between server and client is
 likely to be fuzzier, which would make applying the GDPR even more difficult.
 
+Update: [this comment](https://lobste.rs/s/ixa4vr/matrix_notes#c_nzqaqb) links to [this post (in German)](https://www.cr-online.de/blog/2022/06/02/ein-fehler-in-der-matrix/) which
+apparently studied the question and concluded that Matrix is not
+GDPR-compliant.
+
 In fact, maybe Synapse should be designed so that there's no
 configurable flag to turn off data retention. A bit like how most
 system loggers in UNIX (e.g. syslog) come with a log retention system
@@ -970,6 +974,8 @@ dead, just like IRC is "dead" now.
 I wonder which path Matrix will take. Could it liberate us from these
 vicious cycles?
 
+Update: this generated some discussions on [lobste.rs](https://lobste.rs/s/ixa4vr/matrix_notes).
+
 [[!tag matrix irc history debian-planet python-planet review internet]]
 
 [^1]: [According to Wikipedia](https://en.wikipedia.org/wiki/Internet_Relay_Chat#Modern_IRC), there are currently about 500

another interesting build
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index b0f57245..2b0901e5 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -437,4 +437,8 @@ slots but strangely it's somewhat rare in user-level
 hardware. Most SATA controllers and disks support hot-swapping, but it
 needs to be double-checked.
 
+## Other builds
+
+See also <https://mtlynch.io/budget-nas/>.
+
 [[!tag node]]

document some shops
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index dd8eff36..693611fe 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -969,6 +969,15 @@ Conclusion:
 Looks like Fuji is targeting a more high-end market, Sony is all over
 the place, and Olympus is aiming at a lower-range.
 
+# Shops
+
+ * <https://royalphoto.com/>
+ * <https://photoservice.ca/>
+ * <https://www.camtecphoto.com/>
+ * <https://lozeau.com/>, bought by Henry's
+ * <https://www.henrys.com/>
+ * <https://www.bhphotovideo.com/>
+
 Autres pages
 ============
 

mention the surface
diff --git a/hardware/tablet.mdwn b/hardware/tablet.mdwn
index 40bac923..70dd2f58 100644
--- a/hardware/tablet.mdwn
+++ b/hardware/tablet.mdwn
@@ -507,6 +507,18 @@ magnetic keyboard), it looks real promising.
 
 https://en.jingos.com/jingpad-a1/
 
+## Microsoft
+
+I feel really odd suggesting people buy *anything* from Microsoft, but
+there you have it: a fellow Debian Developer did, so I can't help
+but add it to the pile:
+
+https://changelog.complete.org/archives/10396-i-finally-found-a-solid-debian-tablet-the-surface-go-2
+
+Pretty bad iFixit score (3/10):
+
+https://www.ifixit.com/Device/Surface_Go_2
+
 Phones
 ======
 

note that matrix seem aware they cannot expire messages
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 0abd60bd..9b372ab1 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -212,7 +212,8 @@ and from where is not well protected. Compared to a tool like Signal,
 which goes through great lengths to anonymize that data with features
 like [private contact discovery][], [disappearing messages](https://signal.org/blog/disappearing-messages/),
 [sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
-behind.
+behind. (Note: there is an [issue open about message lifetimes in
+Element](https://github.com/vector-im/element-meta/issues/364) since 2020, but it's not even at the MSC stage yet.)
 
 [private contact discovery]: https://signal.org/blog/private-contact-discovery/
 

obviously rust has its own filesystem monitoring thing
diff --git a/blog/2019-11-20-file-monitoring-tools.mdwn b/blog/2019-11-20-file-monitoring-tools.mdwn
index d8807abb..f4c924fe 100644
--- a/blog/2019-11-20-file-monitoring-tools.mdwn
+++ b/blog/2019-11-20-file-monitoring-tools.mdwn
@@ -135,6 +135,18 @@ https://github.com/tinkershack/fluffy
  * somewhat [difficult commandline interface](https://manpages.debian.org/buster/inotify-tools/inotifywait.1.en.html)
  * no event deduplication
 
+## notify-rs
+
+<https://github.com/notify-rs/notify>
+
+ * 2016-2022
+ * Rust
+ * CC0 / Artistic
+ * [Debian package](https://tracker.debian.org/pkg/rust-notify) since 2022
+ * cross-platform library, not a commandline tool
+ * used by `cargo watch`, [watchexec](https://github.com/watchexec/watchexec) ([RFP](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=946546)), and Python's
+   [watchfiles](https://watchfiles.helpmanual.io/) which features a [CLI tool](https://watchfiles.helpmanual.io/cli/)
+
 ## systemd .path units
 
 <https://www.freedesktop.org/software/systemd/man/systemd.path.html>
@@ -402,4 +414,15 @@ old inotify wrappers because I don't find them as interesting as the
 newer ones - they're really hard to use! - but I guess it's worth
 mentioning them even if just to criticise them. ;)
 
+## timetrack
+
+<https://github.com/joshmcguigan/timetrack>
+
+ * 2018-2019
+ * Rust
+ * Apache-2.0, MIT
+ * No Debian package
+ * tracks filesystem changes to report time spent on different things,
+   see also [this discussion on selfspy for other alternatives](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=873955#53)
+
 [[!tag debian debian-planet software review programming]]
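
(An aside, not part of the diff above: since that notify-rs entry mentions Python's [watchfiles](https://watchfiles.helpmanual.io/) wrapper and its CLI, here is a minimal sketch of what using it from a script looks like. This is my illustration, not something from the article, and the watched path is arbitrary.)

```python
# Minimal sketch (not from the original post): print filesystem events for a
# directory using Python's watchfiles, which wraps the Rust notify crate.
from watchfiles import Change, watch

for changes in watch("/var/log"):      # blocks, yielding batches of events
    for change, path in changes:       # each event is a (Change, path) pair
        if change == Change.modified:
            print(f"modified: {path}")
        else:
            print(f"{change.name}: {path}")
```

The bundled CLI tool does roughly the same thing from the shell, something like `watchfiles 'echo changed' /var/log`.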

will not use selfspy
diff --git a/blog/2017-10-02-free-software-activities-september-2017.mdwn b/blog/2017-10-02-free-software-activities-september-2017.mdwn
index ee64491c..e4769250 100644
--- a/blog/2017-10-02-free-software-activities-september-2017.mdwn
+++ b/blog/2017-10-02-free-software-activities-september-2017.mdwn
@@ -171,6 +171,9 @@ for people that encrypt their database.
 Next step is to [package selfspy in Debian](https://bugs.debian.org/873955) which should hopefully
 be simple enough...
 
+Update, 2022: I decided not to use Selfspy, too much of a security
+liability. See instead [this discussion on alternatives](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=873955#53).
+
 Restic documentation security
 -----------------------------
 

respond to Austin Huang
diff --git a/blog/2022-06-17-matrix-notes/comment_2_3e1ab50e7ef7fd9e4e451bd51f8f1cc0._comment b/blog/2022-06-17-matrix-notes/comment_2_3e1ab50e7ef7fd9e4e451bd51f8f1cc0._comment
new file mode 100644
index 00000000..926e4b2d
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_2_3e1ab50e7ef7fd9e4e451bd51f8f1cc0._comment
@@ -0,0 +1,65 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""some responses"""
+ date="2022-06-18T16:23:06Z"
+ content="""
+Allo!
+
+So first off, I should note that I actually put some poor Matrix.org person through the pain of reading a draft of this article before publishing it, so I consider it somewhat accurate, or at least as much as reasonably possible considering the sheer size of the thing.
+
+Now, as to your specific comments...
+
+> Writing all the third parties together is quite misleading and you should definitely separate them.
+
+My reviewer also had that objection, but I will point out that the distinction is *not* made in the privacy policy. So I think it would actually be unfair to split it out here: it's not like you can actually pick and choose which part of the privacy policy you accept when you start using Matrix services.
+
+In fact I'd even say it's a problem that all of those are indiscriminately present in the policy.
+
+> > As an aside, I also appreciate that Matrix.org has a fairly decent code of conduct
+
+> [response summary: they ban people too easily]
+
+I err on the side of banning more people than less, to be honest, so I'm perfectly fine with this. Maybe it would be better to have two rooms, one for explicit code violations and one, more "liberal" room where anything goes?
+
+You also made comments regarding Mastodon and moderation policies which I don't quite grasp the point of.
+
+[...]
+
+Thanks for the clarifications on Mjolnir, room blocking, and guest accounts, I updated the article accordingly.
+
+> > tombstone event
+>
+> It does have a GUI in Element:
+>
+> * `/upgraderoom`
+
+That's literally not a GUI: it's a text command. 
+
+> > but Matrix.org people are trying to deprecate it in favor of \"Spaces\"
+>
+> Citation required. Also Spaces are rooms and so they can also be included in room directories.
+
+The reviewer didn't wish to be cited, so I can't actually provide a quote here. Happy to stand corrected, but it does feel like a large feature overlap, no?
+
+> > New users can be added to a space or room automatically in Synapse
+>
+> In public homeservers, this may leak account age.
+
+Considering the entire history of everything is available forever on home servers, that seems like a minor compromise to get higher room availability in a distributed cluster (which is one of my use cases, that, granted, I did not make very clear).
+
+> Only public aliases (local aliases are unrestricted afaik). Also they're not required for listing on room directories.
+
+Now I'm even more confused than I was before. Local addresses, aliases, public aliases, wtf?
+
+> Given that [this](https://arewep2pyet.com/) is a thing, it is likely to be the goal.
+
+Added a link.
+
+> > register (with a password! aargh!)
+> 
+> Don't you also have a password on Signal?
+
+Not really. They really want you to set a PIN so that you can do account recovery when you lose your phone, but I am not sure it's mandatory. Even if it was, it's not something you get prompted for all the time, only when you lose a device. In Element, I frequently had to login again, including in the Android app. I never had to use my Signal PIN so far (although for a while they used to prompt for it so that you wouldn't forget it, but they thankfully stopped that annoying practice).
+
+Thanks for the review!
+"""]]

add some corrections from Austin Huang, thanks!
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index bd645ed0..0abd60bd 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -369,12 +369,12 @@ registered users can join") by default, except that anyone can
 register their own homeserver, which makes this limited.
 
 Server admins can block IP addresses and home servers, but those tools
-are not currently available to room admins. So it would be nice to
-have room admins have that capability, just like IRC channel admins
-can block users based on their IP address.
+are not easily available to room admins. There is an API
+(`m.room.server_acl` in `/devtools`) but the it is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928)
+(thanks Austin Huang for the clarification).
 
 Matrix has the concept of guest accounts, but it is not used very
-much, and virtually no client supports it. This contrasts with the way
+much, and virtually no client or homeserver supports it. This contrasts with the way
 IRC works: by default, anyone can join an IRC network even without
 authentication. Some channels require registration, but in general you
 are free to join and look around (until you get blocked, of course).
@@ -597,7 +597,7 @@ performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), Gola
 are feature-complete so there's a trade-off to be made there.  Synapse
 is also adding a lot of features fast, so it's an open question whether
 the others will ever catch up. (I have heard that Dendrite might
-actually surpass Synapse in features within a few years, which would
+actually [surpass Synapse in features within a few years](https://arewep2pyet.com/), which would
 put Synapse in a more "LTS" situation.)
 
 ## Latency
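
(An aside, not part of the diff above: for readers wondering what that `m.room.server_acl` API looks like outside of Element's `/devtools`, here is a hypothetical sketch against the Matrix client-server API. The homeserver URL, access token, room ID and denied server are all placeholders, and none of this addresses the reliability problem linked above.)

```python
# Hypothetical sketch (not from the article): set a room's m.room.server_acl
# state event over the Matrix client-server API instead of Element's /devtools.
# Homeserver URL, access token, room ID and denied server are placeholders;
# the sender needs a high enough power level in the room for this to succeed.
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.com"
TOKEN = "syt_placeholder_access_token"
ROOM_ID = "!abcdef:example.com"

acl = {
    "allow": ["*"],                  # allow every server...
    "deny": ["spam.example.org"],    # ...except this one
    "allow_ip_literals": False,      # refuse servers named by bare IP literals
}

url = (
    f"{HOMESERVER}/_matrix/client/v3/rooms/{quote(ROOM_ID, safe='')}"
    "/state/m.room.server_acl"
)
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=acl,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # contains the event_id of the new state event
```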

approve comment
diff --git a/blog/2022-06-17-matrix-notes/comment_1_38498fa7053818f0dd619d58aed2b92c._comment b/blog/2022-06-17-matrix-notes/comment_1_38498fa7053818f0dd619d58aed2b92c._comment
new file mode 100644
index 00000000..c655669c
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_1_38498fa7053818f0dd619d58aed2b92c._comment
@@ -0,0 +1,122 @@
+[[!comment format=mdwn
+ ip="206.180.245.232"
+ claimedauthor="Austin Huang"
+ url="https://austinhuang.me"
+ subject="Some comments and whatnot from another Montréaler"
+ date="2022-06-18T04:29:02Z"
+ content="""
+Hello there. Some comments from an active Matrix user (also a Montréaler, but I don't work for matrix.org or New Vector), from top to bottom. (For other readers: they're AFAIK so please correct me if necessary.)
+
+> Element 2.2.1: mentions many more third parties
+
+Writing all the third parties together is quite misleading and you should definitely separate them. Specifically, according to my understanding of the text:
+
+* Twilio, if you register with a phone number on an EMS-operated homeserver
+* Stripe and Quaderno if you're a paying customer
+* LinkedIn, Twitter, Google, Outplay, Salesforce, and Pipedrive if you click on an ad of Element from a third-party platform (presumably they're not used if you land on Element directly, so maybe they shouldn't be included)
+* Hubspot, Matomo (selfhosted) and Posthog for website analytics
+
+So I don't think they're applicable to the *client*.
+
+Whether privacy policies are actually followed is a different thing, but assumptions need to be clearly indicated, in my opinion.
+
+> As an aside, I also appreciate that Matrix.org has a fairly decent code of conduct
+
+The enforcement of it (bans in #matrix-org-coc-bl:matrix.org whose reason is `code of conduct violations`) is commonly believed to be dubious at times. Specifically, they ban people (including those not exhibiting bad faith) too easily (often without warning) and there is no real appeal process (AFAIK abuse@matrix.org doesn't manually respond to most emails). Since #matrix-org-coc-bl:matrix.org is also used outside of official Matrix rooms (it applies to public communities directly related to EMS-operated homeservers as well, e.g. FOSDEM, Arch Linux, etc.), there definitely should be some constraints w/r/t the use of bans.
+
+Third-party homeservers seem to be much better moderated.
+
+> The mjolnir bot
+
+Unfortunately, *in practice*, Mjolnir requires a homeserver to be run (given that the bot expects to be free of ratelimit, which requires an exception in the homeserver, which is unlikely to be granted to someone who's not related to the homeserver's administration). This makes it inaccessible to people who cannot run a homeserver (and cannot trust one who can to handle moderation). See also [here](https://www.aminda.eu/blog/english/2021/12/05/matrix-community-abuse-security-by-obscurity.html).
+
+> Server admins can block IP addresses and home servers, but those tools are not currently available to room admins.
+
+Room admins can block homeservers through ACL, it's not intuitive (`/devtools` => Explore Room State => `m.room.server_acl`) but it is indeed *available*. But ACL itself is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928).
+
+Also, Mjolnir user bans support wildcard (`@*:example.com`).
+
+> Matrix has the concept of guest accounts, but it is not used very much, and virtually no client supports it. 
+
+Element does, but virtually no *homeserver* supports it (since it is easily abusable).
+
+> as servers could refuse parts of the updates
+
+pretty sure you can't do that without affecting all future updates
+
+> I have heard anecdotal evidence that \"moderating bridges is hell\"
+
+Since bridge puppets are still users, you could ban them or redact their messages. If the platform can be accessed through multiple bridges running the same bridge software, you could use Mjolnir to ban `@bridge_username:*` with the server name as wildcard. Of course, that doesn't prevent the situation where multiple public bridges exist that run different software and do not require explicit approval (eg. Bifrost for XMPP), but that's *rare*.
+
+> and then that room will be available on your `example.com` homeserver as `#foo:example.com`
+
+But the room is already available as, say, `#foo:matrix.org`, and if that is the selected main alias then that's what's gonna be shown even when you add the room to `example.com`'s room directory. So local aliases are only for someone on *your* homeserver to link this room, and non-main public aliases are only for someone on *any* homeserver to link this room, which means neither have any actual uses other than 1. vanity, and 2. to act as a redundancy in case all other public aliases fail (due to homeserver outage).
+
+> tombstone event
+
+It does have a GUI in Element:
+
+* `/upgraderoom`
+* For rooms before version 9, room settings => Security & Privacy => \"Space members\" has an \"upgrade required\" which is clickable if you have the permission to upgrade it (IIRC)
+
+> but Matrix.org people are trying to deprecate it in favor of \"Spaces\"
+
+Citation required. Also Spaces are rooms and so they can also be included in room directories.
+
+> New users can be added to a space or room automatically in Synapse
+
+In public homeservers, this may leak account age.
+
+> It's possible to restrict who can add aliases
+
+Only public aliases (local aliases are unrestricted afaik). Also they're not required for listing on room directories.
+
+> I have heard that Dendrite might actually surpass Synapse in features within a few years
+
+Given that [this](https://arewep2pyet.com/) is a thing, it is likely to be the goal.
+
+> In Matrix, you need to learn about home servers, pick one,
+
+The way Element markets itself effectively forces everyone onto matrix.org (or EMS), and that's a problem
+
+> register (with a password! aargh!)
+
+Don't you also have a password on Signal?
+
+> but I don't feel confident sharing my phone number there
+
+You could share email addresses, but yes, I get your point.
+
+Use of identity servers became opt-in in late? 2020 amid concerns that it makes Riot.im (then) a spyware by forcing a call-home. In fact a selling point of Matrix is the non-requirement of email or phone number (if applicable; but it is obvious that this will lead to abuse).
+
+> It does not support large multimedia rooms
+
+Rooms with more than 2 users don't have native VoIP [yet](https://element.io/blog/introducing-native-matrix-voip-with-element-call/).
+
+> Working on Matrix
+
+That's what happens when the same people effectively own both the protocol and the first client... But then you said of IRC, that
+
+> If I were to venture a guess, I'd say that infighting, lack of a standardization body, and a somewhat annoying protocol meant the network could not grow.
+
+So it could also be an advantage that Matrix has a standardization body from the get-go (whether the body itself is good is a different question).
+
+> I just want secure, simple messaging. Possibly with good file transfers, and video calls.
+
+I don't think that has been Matrix's goal from the get-go, though you could say they're *now* working towards that.
+
+For me, Matrix is still only for two purposes:
+
+1. For individuals, to run a community (replacing Telegram and Discord), and
+2. For organizational communication (replacing Slack and MS Teams).
+
+Sure, interoperability (which Matrix has probably lobbied for), but the cost of bridging is still quite high.
+
+> Mastodon has started working on a global block list of fascist servers
+
+Some (certainly not mainstream) consider \"not blocking abusive servers\" as grounds for blocking so it's already happening on the fediverse.
+
+> but matrix.org publishes a (federated) block list of hostile servers
+
+[Element effectively encourages the use of blocklists](https://element.io/blog/moderation-needs-a-radical-change/), so abuse of them is bound to happen. Look, of course I support having people choose their own moderation policy, but that only works *in theory*: in practice greater power does not lead to greater responsibility.
+"""]]

creating tag page tag/matrix
diff --git a/tag/matrix.mdwn b/tag/matrix.mdwn
new file mode 100644
index 00000000..5e0e8bbd
--- /dev/null
+++ b/tag/matrix.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged matrix"]]
+
+[[!inline pages="tagged(matrix)" actions="no" archive="yes"
+feedshow=10]]

publish
diff --git a/blog/matrix-notes.md b/blog/2022-06-17-matrix-notes.md
similarity index 100%
rename from blog/matrix-notes.md
rename to blog/2022-06-17-matrix-notes.md

one last review
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 8b3faca6..bd645ed0 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -1,22 +1,25 @@
+[[!meta title="Matrix notes"]]
+
 I have some concerns about Matrix (the protocol, not the movie that
 came out recently, although I do have concerns about that as
 well). I've been watching the project for a long time, and it seems
 more a promising alternative to many protocols like IRC, XMPP, and
 Signal.
 
-This review will sound a bit negative, because it focuses on the
-concerns I have. I am the operator of an IRC network and people keep
-asking me to bridge it with Matrix. I have myself considered a few
-times the idea of just giving up and converting to Matrix. This space
-is a living document exploring my research of that problem space.
+This review may sound a bit negative, because it focuses on those
+concerns. I am the operator of an IRC network and people keep asking
+me to bridge it with Matrix. I have myself considered just giving up
+on IRC and converting to Matrix. This space is a living document
+exploring my research of that problem space. The TL;DR is that no,
+I'm not setting up a bridge just yet, and I'm still on IRC.
 
 This article was written over the course of the last three months, but
 I have been watching the Matrix project for years (my logs seem to say
-2016 at least), and is rather long. It will likely take you half an
-hour to read, so [copy this over to your ebook reader](https://gitlab.com/anarcat/wallabako), your
-tablet, dead trees, and sit back comfortably. Or, alternatively, just
-jump to a section that interest you or, more likely, the
-[conclusion](#conclusion).
+2016 at least). The article is rather long. It will likely take you
+half an hour to read, so [copy this over to your ebook reader](https://gitlab.com/anarcat/wallabako),
+your tablet, or dead trees, and lean back and relax as I show you
+around the Matrix. Or, alternatively, just jump to a section that
+interests you, most likely the [conclusion](#conclusion).
 
 [[!toc levels=2]]
 
@@ -60,7 +63,8 @@ normally tightly controlled. So, if you trust your IRC operators, you
 should be fairly safe. Obviously, clients *can* (and often do, even if
 [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all messages, but this is generally not
 the default. [Irssi](https://irssi.org/), for example, does [not log by
-default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412). Some IRC bouncers *do* log do disk however...
+default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412). IRC bouncers are more likely to log to disk, of course,
+to be able to do what they do.
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
@@ -68,8 +72,8 @@ database. Then it will transmit that message to all clients connected
 to that server and room, and to all other servers that have clients
 connected to that room. Those remote servers, in turn, will keep a
 copy of that message and all its metadata in their own database, by
-default forever. On encrypted rooms, thankfully, those messages are
-encrypted, but not their metadata.
+default forever. On encrypted rooms those messages are encrypted, but
+not their metadata.
 
 There is a mechanism to expire entries in Synapse, but it is [not
 enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one should generally assume that a message
@@ -84,7 +88,7 @@ will log all content and metadata from that room. That includes
 private, one-on-one conversations, since those are essentially rooms
 as well.
 
-In the context of the GDPR, this is really tricky: who's the
+In the context of the GDPR, this is really tricky: who is the
 responsible party (known as the "data controller") here? It's
 basically any yahoo who fires up a home server and joins a room.
 
@@ -100,7 +104,9 @@ enforce your right to be forgotten in a given room, you would have to:
 I recognize this is a hard problem to solve while still keeping an
 open ecosystem. But I believe that Matrix should have much stricter
 defaults towards data retention than right now. Message expiry should
-be enforced *by default*, for example.
+be enforced *by default*, for example. (Note that there are also
+redaction policies that could be used to implement part of the GDPR
+automatically, see the privacy policy discussion below on that.)
 
 Also keep in mind that, in the brave new [peer-to-peer](https://github.com/matrix-org/pinecone) world that
 Matrix is heading towards, the boundary between server and client is
@@ -134,7 +140,7 @@ When I first looked at Matrix, five years ago, Element.io was called
 
 When I asked Matrix people about why they were using Google Analytics,
 they explained this was for development purposes and they were aiming
-for velocity at the time, not privacy.
+for velocity at the time, not privacy (paraphrasing here).
 
 They also included a "free to snitch" clause:
 
@@ -143,16 +149,17 @@ They also included a "free to snitch" clause:
 > obligation, the instructions or requests of a governmental authority
 > or regulator, including those outside of the UK.
 
-Those are really *broad* terms.
+Those are really *broad* terms, above and beyond what is typically
+expected legally.
 
 Like the current retention policies, such user tracking and
-... "liberal" collaboration practices with the state sets a bad
+... "liberal" collaboration practices with the state set a bad
 precedent for other home servers.
 
 Thankfully, since the above policy was published (2017), the GDPR was
 "implemented" ([2018](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation)) and it seems like both the [Element.io
 privacy policy](https://element.io/privacy) and the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been
-somewhat improved since then.
+somewhat improved since.
 
 Notable points of the new privacy policies:
 
@@ -209,22 +216,22 @@ behind.
 
 [private contact discovery]: https://signal.org/blog/private-contact-discovery/
 
-This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (open in 2019) in Synapse, but this is not
-just an implementation issue, it's a flaw in the protocol itself. Home
-servers keep join/leave of all rooms, which gives clear information
-about who is talking to. Synapse logs are also quite verbose and may
+This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (opened in 2019) in Synapse, but this is
+not just an implementation issue, it's a flaw in the protocol
+itself. Home servers keep join/leave of all rooms, which gives clear
+text information about who is talking to whom. Synapse logs may also
 contain privately identifiable information that home server admins
-might not be aware of in the first place. Those logs rotation are also
-separate from the server-level retention policy, which may be
+might not be aware of in the first place. Those log rotation policies
+are separate from the server-level retention policy, which may be
 confusing for a novice sysadmin.
 
 Combine this with the federation: even if you trust your home server
 to do the right thing, the second you join a public room with
 third-party home servers, those ideas kind of get thrown out because
 those servers can do whatever they want with that information. Again,
-a problem that is hard to solve in a federation.
+a problem that is hard to solve in any federation.
 
-To be fair, IRC doesn't have a great story here either: any client
+To be fair, IRC doesn't have a great story here either: any *client*
 knows not only who's talking to who in a room, but also typically
 their client IP address. Servers *can* (and often do) *obfuscate*
 this, but often that obfuscation is trivial to reverse. Some servers
@@ -246,10 +253,10 @@ servers because some people connect to IRC using Matrix. This, in
 turn, means that Matrix will connect to that URL to generate a link
 preview.
 
-I feel this is a security issue, especially because those sockets
-would be kept open seemingly *forever*. I tried to warn the Matrix
-security team but somehow, I don't think this issue was taken very
-seriously. Here's the disclosure timeline:
+I feel this outlines a security issue, especially because those
+sockets would be kept open seemingly *forever*. I tried to warn the
+Matrix security team but somehow, I don't think this issue was taken
+very seriously. Here's the disclosure timeline:
 
  * January 18: contacted Matrix security
  * January 19: response: already [reported as a bug](https://github.com/matrix-org/synapse/issues/8302)
@@ -269,14 +276,18 @@ There are a couple of problems here:
  1. the bug was publicly disclosed in September 2020, and not
     considered a security issue until I notified them, and even then,
     I had to insist
+
  2. no clear disclosure policy timeline was proposed or seems
     established in the project (there is a [security disclosure
     policy](https://matrix.org/security-disclosure-policy/) but it doesn't include any predefined timeline)
+
  3. I wasn't informed of the disclosure
+
  4. the actual solution is a size limit (10MB, already implemented), a
     time limit (30 seconds, implemented in [PR 11784][]), and a
     content type allow list (HTML, "media" or JSON, implemented in [PR
     11936][]), and I'm not sure it's adequate
+
  5. (pure vanity:) I did not make it to their [Hall of fame](https://matrix.org/security-disclosure-policy/)
 
 [PR 11784]: https://github.com/matrix-org/synapse/pull/11784
@@ -285,12 +296,12 @@ There are a couple of problems here:
 I'm not sure those solutions are adequate because they all seem to
 assume a single home server will pull that one URL for a little while
 then stop. But in a federated network, *many* (possibly thousands)
-home servers may connected to a single room at once. If an attacker
-would drop a link into such a room, *all* those servers would connect
-to that link *all at once*. This is basically an amplification attack:
-a small packet will generate a lot of traffic to a single target. It
-doesn't matter there are size or time limits: the amplification is
-what matters here.
+home servers may be connected in a single room at once. If an attacker
+drops a link into such a room, *all* those servers would connect to
+that link *all at once*. This is an amplification attack: a small
+amount of traffic will generate a lot more traffic to a single
+target. It doesn't matter that there are size or time limits: the
+amplification is what matters here.
 
 It should also be noted that *clients* that generate link previews
 have more amplification because they are more numerous than
@@ -300,18 +311,16 @@ generate link previews as well.
 That said, this is possibly not a problem specific to Matrix: any
 federated service that generates link previews may suffer from this.
 
-I'm honestly not sure what the solution is here.

(Diff truncated)
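
(A back-of-the-envelope illustration of the amplification argument above, mine and not part of the diff: with the 10MB preview size limit and, say, a thousand homeservers joined to a room, one pasted link could trigger a thousand near-simultaneous fetches, on the order of 10GB of traffic aimed at a single target, so the per-request size and time limits don't really change the amplification factor.)
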
details about message signing
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 856ed39a..8b3faca6 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -386,12 +386,22 @@ themselves.
 
 That said, if server B's administrator hijacks user `joe` on server B,
 they will hijack that room *on that specific server*. This will not
-(necessarily) affect users on the other servers. It does seem like a
-major flaw that room credentials are bound to Matrix identifiers, as
-opposed to the E2E encryption credentials. This means that even in an
-encrypted room with fully verified members, a compromised or hostile
-home server could take over the room, inject an hostile party and
-start injecting hostile content or listen in on the conversations.
+(necessarily) affect users on the other servers, as servers could
+refuse parts of the updates. In practice, it's not clear to me at the
+moment how to block such an attack.
+
+It does seem like a major flaw that room credentials are bound to
+Matrix identifiers, as opposed to the E2E encryption credentials. So
+in an encrypted room, even with fully verified members, a compromised
+or hostile home server can still take over the room by impersonating
+an admin and injecting a hostile user. That user can then send events
+or listen in on the conversations.
+
+This is even more frustrating when you consider that Matrix events are
+actually [signed](https://spec.matrix.org/latest/#architecture) and therefore have *some* authentication attached to
+them. That signature, however, is made from the homeserver PKI keys,
+*not* the client's E2E keys, which makes E2E feel like it has been
+"bolted on" later.
 
 # Availability
 

spell check, one last TODO left (architecture)
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index b5c5f1ad..856ed39a 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -132,7 +132,7 @@ When I first looked at Matrix, five years ago, Element.io was called
 > browse our Website and use our Service and also allows us to improve
 > our Website and our Service. 
 
-When I asked Matrix people about why they were using Google analytics,
+When I asked Matrix people about why they were using Google Analytics,
 they explained this was for development purposes and they were aiming
 for velocity at the time, not privacy.
 
@@ -202,7 +202,7 @@ As an aside, I also appreciate that Matrix.org has a fairly decent
 Overall, privacy protections in Matrix mostly concern message
 contents, not metadata. In other words, who's talking with who, when
 and from where is not well protected. Compared to a tool like Signal,
-which goes through great lengths to anonymise that data with features
+which goes through great lengths to anonymize that data with features
 like [private contact discovery][], [disappearing messages](https://signal.org/blog/disappearing-messages/),
 [sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
 behind.
@@ -559,9 +559,9 @@ blog post for details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-
 servers.
 
 There are other promising home servers implementations from a
-performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), golang, [entered beta in late
+performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), Golang, [entered beta in late
 2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit), Rust, beta; [others](https://matrix.org/faq/#can-i-write-a-matrix-homeserver%3F)), but none of those
-are feature-complete so there's a tradeoff to be made there.  Synapse
+are feature-complete so there's a trade-off to be made there.  Synapse
 is also adding a lot of features fast, so it's an open question whether
 the others will ever catch up.
 
@@ -590,7 +590,7 @@ time each message has to take.
 
 (I assume, here, that each Matrix message is delivered through at
 least two new HTTP sessions, which therefore require up to 8 packet
-roundtrips whereas in IRC, the existing socket is reused so it's
+round-trips whereas in IRC, the existing socket is reused so it's
 basically 2 round-trips.)
 
 Some [courageous person](https://blog.lewman.com/) actually made some [tests of various
@@ -649,7 +649,7 @@ pretty cool that it works, and they actually did it [pretty well][private contac
 
 Registration is also less obvious: in Signal, the app just needs to
 confirm your phone number and it's generally automated. It's
-frictionless and quick. In Matrix, you need to learn about home
+friction-less and quick. In Matrix, you need to learn about home
 servers, pick one, register (with a password! aargh!), and then setup
 encryption keys (not default), etc. It's really a lot more friction.
 
@@ -823,7 +823,7 @@ something that *anyone* working on a federated system should study in
 detail, because they are *bound* to make the same mistakes if they are
 not familiar with it. The short version is:
 
- * 1988: Finish researcher publishes first IRC codebase publicly
+ * 1988: Finnish researcher publishes first IRC source code
  * 1989: 40 servers worldwide, mostly universities
  * 1990: EFnet ("eris-free network") fork which blocks the "open
    relay", named [Eris][] - followers of Eris form the A-net, which
@@ -832,7 +832,7 @@ not familiar with it. The short version is:
    routing improvements and timestamp-based channel synchronisation
  * 1994: DALnet fork, from Undernet, again on a technical disagreement
  * 1995: Freenode founded
- * 1996: IRCnet forks from EFnet, following a flamewar of historical
+ * 1996: IRCnet forks from EFnet, following a flame war of historical
    proportion, splitting the network between Europe and the Americas
  * 1997: Quakenet founded
  * 1999: (XMPP founded)
@@ -892,7 +892,7 @@ more machine-learning tools to sort through email and those systems
 are, fundamentally, unknowable.
 
 HTTP has somehow managed to live in a parallel universe, as it's
-technically still completely federated: anyone can start a webserver
+technically still completely federated: anyone can start a web server
 if they have a public IP address and anyone can connect to it. The
 catch, of course, is how you find the darn thing. Which is how Google
 became one of the most powerful corporations on earth, and how they
@@ -928,8 +928,6 @@ dead. Just like IRC is dead now.
 I wonder which path Matrix will take. Could it liberate us from those
 vicious cycles?
 
-TODO: spellcheck
-
 [[!tag draft]]
 
 [^1]: [According to Wikipedia](https://en.wikipedia.org/wiki/Internet_Relay_Chat#Modern_IRC), there are currently about 500

fix more TODOs, minor tweaks
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 2a6eedf1..b5c5f1ad 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -32,9 +32,9 @@ It's also (when [compared with XMPP](https://matrix.org/faq/#what-is-the-differe
 global JSON database with an HTTP API and pubsub semantics - whilst
 XMPP can be thought of as a message passing protocol."
 
-TODO: expand with
-https://matrix.org/faq/#what-is-the-current-project-status
-TODO: watch out for dupes with the numbers in conclusion
+[According to their FAQ](https://matrix.org/faq/#what-is-the-current-project-status), the project started in 2014, has about
+20,000 servers, and millions of users. Matrix works over HTTPS, but
+on a [special port](https://matrix.org/faq/#what-ports-do-i-have-to-open-up-to-join-the-global-matrix-federation%3F): 8448.
 
 # Security and privacy
 
@@ -386,10 +386,12 @@ themselves.
 
 That said, if server B's administrator hijacks user `joe` on server B,
 they will hijack that room *on that specific server*. This will not
-(necessarily) affect users on the other servers.
-
-TODO: so what happens here, a fork? how does Matrix resolve this?
-TODO: why isn't this bound to E2E credentials?
+(necessarily) affect users on the other servers. It does seem like a
+major flaw that room credentials are bound to Matrix identifiers, as
+opposed to the E2E encryption credentials. This means that even in an
+encrypted room with fully verified members, a compromised or hostile
+home server could take over the room, add a hostile party, and
+start injecting hostile content or listening in on the conversations.
 
 # Availability
 
@@ -544,7 +546,7 @@ on the other hand, use the [client-server discovery API](https://spec.matrix.org
 what allows a given client to find your home server when you type your
 Matrix ID on login.
 
-TODO: review FAQ
+TODO: review architecture: https://spec.matrix.org/latest/#architecture
 
 # Performance
 
@@ -948,7 +950,8 @@ TODO: spellcheck
        * Pinterest: 480M
        * Twitter: 397M
 
-      Notable omission: Youtube, with 2.6B users...
+      Notable omission from that list: Youtube, with its mind-boggling
+      2.6 billion users...
 
       Those are not the kind of numbers you just "need to convince a
       brother or sister" to grow the network...
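
(As an aside on that special port and the discovery APIs mentioned above: roughly, servers and clients find a home server through `.well-known` lookups. A minimal sketch, with `example.com` as a placeholder; the exact answers depend on how the server is configured:)

    # where does example.com delegate its federation traffic?
    curl https://example.com/.well-known/matrix/server
    # which base URL should a client use for @you:example.com?
    curl https://example.com/.well-known/matrix/client
    # absent delegation, the federation API itself listens on port 8448:
    curl https://example.com:8448/_matrix/federation/v1/version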

forgot about bots and voip
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f6015277..2a6eedf1 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -716,6 +716,39 @@ actually fit well in there. Going with `gomuks`, on the other hand,
 would mean running it in parallel with Irssi or ... ditching IRC,
 which is a leap I'm not quite ready to take just yet.
 
+Oh, and basically none of those clients (except Nheko and Element)
+support VoIP, which is still kind of a second-class citizen in
+Matrix. It does not support large rooms, for example: [Jitsi was used
+for FOSDEM](https://matrix.org/blog/2022/02/07/hosting-fosdem-2022-on-matrix/) instead of the native videoconferencing system.
+
+## Bots
+
+This falls a little outside the "usability" section, but I didn't know
+where to put this... There's a few Matrix bots out there, and you are
+likely going to be able to replace your existing bots with Matrix
+bots. It's true that IRC has a long and impressive history with lots
+of various bots doing various things, but given how young Matrix is,
+there's still a good variety:
+
+ * [maubot](https://github.com/maubot/maubot): generic bot with tons of usual plugins like sed, dice,
+   karma, xkcd, echo, rss, reminder, translate, react, exec,
+   gitlab/github webhook receivers, weather, etc
+ * [opsdroid](https://github.com/opsdroid/opsdroid): framework to implement "chat ops" in Matrix,
+   connects with Matrix, GitHub, GitLab, Shell commands, Slack, etc
+ * [matrix-nio](https://github.com/poljar/matrix-nio): another framework, used to build [lots more
+   bots](https://matrix-nio.readthedocs.io/en/latest/examples.html) like:
+   * [hemppa](https://github.com/vranki/hemppa): generic bot with various functionality like weather,
+     RSS feeds, calendars, cron jobs, OpenStreetmaps lookups, URL
+     title snarfing, wolfram alpha, astronomy pic of the day, Mastodon
+     bridge, room bridging, oh dear
+   * [devops](https://github.com/rdagnelie/devops-bot): ping, curl, etc
+   * [podbot](https://github.com/interfect/podbot): play podcast episodes from AntennaPod
+   * [cody](https://gitlab.com/carlbordum/matrix-cody): Python, Ruby, Javascript REPL
+   * [eno](https://github.com/8go/matrix-eno-bot): generic bot, "personal assistant"
+ * [mjolnir](https://github.com/matrix-org/mjolnir): moderation bot
+ * [hookshot](https://github.com/Half-Shot/matrix-hookshot): bridge with GitLab/GitHub
+ * [matrix-monitor-bot](https://github.com/turt2live/matrix-monitor-bot): latency monitor
+
 ## Working on Matrix
 
 As a developer, I find Matrix kind of intimidating. The specification

forgot youtube
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 82f30502..f6015277 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -915,5 +915,7 @@ TODO: spellcheck
        * Pinterest: 480M
        * Twitter: 397M
 
+      Notable omission: Youtube, with 2.6B users...
+
       Those are not the kind of numbers you just "need to convince a
       brother or sister" to grow the network...

note about sbuild-qemu-boot: not in bullseye, and typo in argument
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index 89fc0ccf..4b6ebebf 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -94,6 +94,10 @@ account with an empty password.
 
 ## Other useful tasks
 
+Note that some of the commands below (namely the ones depending on
+`sbuild-qemu-boot`) assume you are running Debian 12 (bookworm) or
+later.
+
  * enter the VM to make test, changes will be discarded  (thanks Nick
    Brown for the `sbuild-qemu-boot` tip!):
  
@@ -109,7 +113,7 @@ account with an empty password.
  * enter the VM to make *permanent* changes, which will *not* be
    discarded:
 
-        sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
+        sudo sbuild-qemu-boot --read-write /srv/sbuild/qemu/unstable-amd64.img
 
    Equivalent command:
 

lvm copy
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 3a518f8a..82627eac 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -871,6 +871,22 @@ Once that's done, export the pools to disconnect the drive:
     zpool export bpool-tubman
     zpool export rpool-tubman
 
+## LVM benchmark
+
+Copied the 512GB SSD/M.2 device to *another* 1024GB NVMe/M.2 device:
+
+    anarcat@curie:~$ sudo dd if=/dev/sdb of=/dev/sdc bs=4M status=progress conv=fdatasync
+    499944259584 octets (500 GB, 466 GiB) copiés, 1713 s, 292 MB/s
+    119235+1 enregistrements lus
+    119235+1 enregistrements écrits
+    500107862016 octets (500 GB, 466 GiB) copiés, 1719,93 s, 291 MB/s
+
+... while both were over USB, whoohoo 300MB/s!
+
+TODO: Next step is to make a benchmark of LVM vs ZFS, since I have (in
+theory) the same hardware on both now (although the LVM copy is lagging
+behind the ZFS one, naturally).
+
 # Remaining issues
 
 TODO: move send/receive backups to offsite host
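
(For that LVM vs ZFS benchmark, the rough plan is a simple `fio` run repeated on both filesystems, something like the sketch below; file name, size, and runtime are placeholders, nothing has actually been run yet:)

    # random read/write workload; repeat with the test file on LVM/ext4, then on ZFS
    fio --name=randrw --filename=/path/to/testfile --size=4G \
        --rw=randrw --bs=4k --ioengine=libaio \
        --runtime=60 --time_based --group_reporting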

i3.conf: switch to fira, bumblebee, fix tray output
diff --git a/software/desktop/i3.conf b/software/desktop/i3.conf
index 71d66751..3dbd5d08 100644
--- a/software/desktop/i3.conf
+++ b/software/desktop/i3.conf
@@ -52,7 +52,7 @@ set $mod Mod4
 # is used in the bar {} block below.
 # This font is widely installed, provides lots of unicode glyphs, right-to-left
 # text rendering and scalability on retina/hidpi displays (thanks to pango).
-font pango:DejaVu Sans Mono 10
+font pango:Fira mono 10
 # Before i3 v4.8, we used to recommend this one as the default:
 # font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1
 # The font above is very space-efficient, that is, it looks good, sharp and
@@ -374,13 +374,20 @@ client.background       $black
 # Start i3bar to display a workspace bar (plus the system information i3status
 # finds out, if available)
 bar {
-        status_command py3status
+        # pango-list can help finding fonts here
+        font pango:FontAwesome, Fira mono 10
+        # window sound cpu memory load date
+        status_command bumblebee-status --iconset awesome-fonts
+        #status_command py3status
         position top
         # obey Fitt's law, ie. reduce the empty space
         tray_padding 0
+        # show tray, and on primary workspace
+        tray_output primary
         # colors are documented here:
         # https://i3wm.org/docs/userguide.html#_colors
-        # there's also some colors in ~/.config/i3status/config
+        # that reuses the colors set above, and might not actually
+        # affect the status bar for anything other than i3status
         colors {
               background $black
               statusline $white

another note
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index be2a5a2e..89fc0ccf 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -182,11 +182,14 @@ autopkgtest:
       sudo qemu-img create -f qcow2 -o backing_file=/srv/sbuild/qemu/unstable-autopkgtest-amd64.img,backing_fmt=qcow2  /var/lib/libvirt/images/unstable-autopkgtest-amd64.img 10G
       sudo chown qemu-libvirt '/var/lib/libvirt/images/unstable-autopkgtest-amd64.img'
 
-Then this VM can be adopted fairly normally in virt-manager. One twist
-I found is that the "normal" networking doesn't seem to work anymore,
-possibly because I messed it up with vagrant. Using the bridge doesn't
-work either out of the box, but that can be fixed with the following
-`sysctl` changes:
+Then this VM can be adopted fairly normally in virt-manager. Note that
+it's possible that you can set that up through the libvirt XML as
+well, but I haven't quite figured it out.
+
+One twist I found is that the "normal" networking doesn't seem to work
+anymore, possibly because I messed it up with vagrant. Using the
+bridge doesn't work either out of the box, but that can be fixed with
+the following `sysctl` changes:
 
     net.bridge.bridge-nf-call-ip6tables=0
     net.bridge.bridge-nf-call-iptables=0

figured out unification
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index b5cd6290..be2a5a2e 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -169,6 +169,56 @@ The `nc` socket interface is ... not great, but it works well
 enough. And you can probably fire up an SSHd to get a better shell if
 you feel like it.
 
+## Unification with libvirt
+
+Those images created by autopkgtest can actually be used by libvirt to
+boot real, [fully operational battle stations](https://www.youtube.com/watch?v=v5lDKjA_7I0), sorry, virtual
+machines. But it needs some tweaking.
+
+First, we need a snapshot image to work with, because we don't want
+libvirt to work directly on the pristine images created by
+autopkgtest:
+
+      sudo qemu-img create -f qcow2 -o backing_file=/srv/sbuild/qemu/unstable-autopkgtest-amd64.img,backing_fmt=qcow2  /var/lib/libvirt/images/unstable-autopkgtest-amd64.img 10G
+      sudo chown qemu-libvirt '/var/lib/libvirt/images/unstable-autopkgtest-amd64.img'
+
+Then this VM can be adopted fairly normally in virt-manager. One twist
+I found is that the "normal" networking doesn't seem to work anymore,
+possibly because I messed it up with vagrant. Using the bridge doesn't
+work either out of the box, but that can be fixed with the following
+`sysctl` changes:
+
+    net.bridge.bridge-nf-call-ip6tables=0
+    net.bridge.bridge-nf-call-iptables=0
+    net.bridge.bridge-nf-call-arptables=0
+
+That trick was found in [this good libvirt networking guide](https://jamielinux.com/docs/libvirt-networking-handbook/bridged-network.html#initial-steps).
+
+Finally, networking should work transparently inside the VM now. To
+share files, autopkgtest expects a 9p filesystem called
+`sbuild-qemu`. It might be difficult to get it just right in
+virt-manager, so here's the XML:
+
+    <filesystem type="mount" accessmode="passthrough">
+      <source dir="/home/anarcat/dist"/>
+      <target dir="sbuild-qemu"/>
+      <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
+    </filesystem>
+
+The above shares the `/home/anarcat/dist` folder with the VM. Inside
+the VM, it will be mounted because there's this `/etc/fstab` line:
+
+    sbuild-qemu /shared 9p trans=virtio,version=9p2000.L,auto,nofail 0 0
+
+By hand, that would be:
+
+    mount -t 9p -o trans=virtio,version=9p2000.L sbuild-qemu /shared
+
+I probably forgot something else important here, but surely I will
+remember to put it back here when I do.
+
+Note that this at least partially overlaps with [[services/hosting]].
+
 # Nitty-gritty details no one cares about
 
 ## Fixing hang in sbuild cleanup
diff --git a/services/hosting.mdwn b/services/hosting.mdwn
index 9aa93349..cd0b367c 100644
--- a/services/hosting.mdwn
+++ b/services/hosting.mdwn
@@ -220,6 +220,15 @@ will show the right MAC:
 And obviously, connecting to the console and running `ip a` will show
 the right IP address, see below for console usage.
 
+Note that netfilter might be firewalling the bridge. To disable, use:
+
+    sysctl net.bridge.bridge-nf-call-ip6tables=0
+    sysctl net.bridge.bridge-nf-call-iptables=0
+    sysctl net.bridge.bridge-nf-call-arptables=0
+
+See also [[the sbuild / qemu blog post|blog/2022-04-27-sbuild-qemu]]
+for details on how to integrate sbuild images with libvirt.
+
 Maintenance
 -----------
 
@@ -268,6 +277,8 @@ References
 
  * [libvirt handbook bridge configuration](https://jamielinux.com/docs/libvirt-networking-handbook/bridged-network.html)
  * [libvirt wiki networking configuration](https://wiki.libvirt.org/page/Networking#Creating_network_initscripts)
+ * a good [libvirt networking handbook](https://jamielinux.com/docs/libvirt-networking-handbook/)
+ * [Arch Linux wiki page](https://wiki.archlinux.org/title/Libvirt)
  * [Debian wiki KVM reference](https://wiki.debian.org/KVM) - also includes tuning options for
    disks, CPU, I/O
  * [nixCraft guide](https://www.cyberciti.biz/faq/install-kvm-server-debian-linux-9-headless-server/) - which gave me the `virt-builder` shortcut
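
(Side note on those `bridge-nf-call` settings: to make them survive a reboot, the usual Debian approach would be a `sysctl.d` snippet, roughly like this; the file name is arbitrary, and the `br_netfilter` module needs to be loaded for those keys to exist:)

    # /etc/sysctl.d/bridge-nf.conf
    net.bridge.bridge-nf-call-ip6tables=0
    net.bridge.bridge-nf-call-iptables=0
    net.bridge.bridge-nf-call-arptables=0

Then reload with `sysctl --system`.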

more matrix edits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 58bf5154..82f30502 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -20,6 +20,22 @@ jump to a section that interest you or, more likely, the
 
 [[!toc levels=2]]
 
+# Introduction to Matrix
+
+Matrix is an "open standard for interoperable, decentralised,
+real-time communication over IP. It can be used to power Instant
+Messaging, VoIP/WebRTC signalling, Internet of Things communication -
+or anywhere you need a standard HTTP API for publishing and
+subscribing to data whilst tracking the conversation history".
+
+It's also (when [compared with XMPP](https://matrix.org/faq/#what-is-the-difference-between-matrix-and-xmpp%3F)) "an eventually consistent
+global JSON database with an HTTP API and pubsub semantics - whilst
+XMPP can be thought of as a message passing protocol."
+
+TODO: expand with
+https://matrix.org/faq/#what-is-the-current-project-status
+TODO: watch out for dupes with the numbers in conclusion
+
 # Security and privacy
 
 I have some concerns about the security promises of Matrix. It's
@@ -302,13 +318,19 @@ rejoin the room from another server. This is why spam is such a
 problem in Email, and why IRC networks have stopped federating ages
 ago (see [the IRC history](https://en.wikipedia.org/wiki/Internet_Relay_Chat#History) for that fascinating story).
 
+## The mjolnir bot
+
 The [mjolnir moderation bot](https://github.com/matrix-org/mjolnir) is designed to help with some of those
 things. It can kick and ban users, redact all of a user's message (as
 opposed to one by one), all of this across multiple rooms. It can also
 subscribe to a federated block list published by `matrix.org` to block
-known abusers (users or servers). It's suggested by Matrix people to
-make the bot admin of your channels, because you can't take back admin
-from a user once given.
+known abusers (users or servers). Bans are [pretty flexible](https://github.com/matrix-org/mjolnir/blob/main/docs/moderators.md#bans) and
+can operate at the user, room, or server level.
+
+It's suggested by Matrix people to make the bot admin of your
+channels, because you can't take back admin from a user once given.
+
+## The command-line tool
 
 This is based on an [admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) built into Synapse. There's also a
 [new command line tool](https://git.fout.re/pi/matrixadminhelpers) designed to do things like:
@@ -320,6 +342,15 @@ This is based on an [admin API](https://matrix-org.github.io/synapse/latest/usag
 > * purge history of theses rooms
 > * shutdown rooms
 
+## Rate limiting
+
+Synapse has pretty good [built-in rate-limiting](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L879-L996) which blocks
+repeated login, registration, joining, or messaging attempts. It may
+also end up throttling servers on the federation based on those
+settings.
+
+## Fundamental federation problems
+
 Because users joining a room may come from another server, room
 moderators are at the mercy of the registration and moderation
 policies of those servers. Matrix is like IRC's `+R` mode ("only
@@ -343,13 +374,22 @@ federation, when you bridge a room with another network, you inherit
 all the problems from *that* network and the bridge is unlikely to
 have as many tools as the original network's API to control abuse...
 
-Synapse has pretty good [built-in rate-limiting](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L879-L996) which blocks
-repeated login, registration, joining, or messaging attempts. It may
-also end up throttling servers on the federation based on those
-settings.
+## Room admins
+
+Matrix, in particular, has the problem that room administrators (who
+have the power to redact messages, ban users, and promote other users)
+are bound to their Matrix ID which is, in turn, bound to their home
+servers. This implies that a home server administrator could (1)
+impersonate a given user and (2) use that to hijack the room. So in
+practice, the home server is the trust anchor for rooms, not the user
+themselves.
+
+That said, if server B's administrator hijacks user `joe` on server B,
+they will hijack that room *on that specific server*. This will not
+(necessarily) affect users on the other servers.
 
-TODO: you can use the admin API to impersonate a room admin? YES! see also
-the other TODO below
+TODO: so what happens here, a fork? how does Matrix resolve this?
+TODO: why isn't this bound to E2E credentials?
 
 # Availability
 
@@ -423,13 +463,6 @@ user from server B joins, the room will be replicated on server B as
 well. If server A fails, server B will keep relaying traffic to
 connected users and servers.
 
-TODO: how does admin work again? can server B hijack a room on server
-A? I had noted "admin from serverB join and belong as admin. each
-server needs to have admin." answer: no. server B would need to
-impersonate a room admin on server B to hijack the room. they can
-basically fork the room to modify it, but that only affects users on
-that server.
-
 A room is therefore not fundamentally addressed with the above alias;
 instead, it has an internal Matrix ID, which is basically a random
 string. It has a server name attached to it, but that was made just to
@@ -458,10 +491,12 @@ notice.) The point here is to have a way to pre-populate a list of
 rooms on the server, even if they are not necessarily present on that
 server directly, in case another server that has connected users hosts it.
 
-Rooms, by default, live forever, even after the last user quits. There
-is a [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) but it doesn't have a GUI for it yet. That is
-part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows a room
-admin to close a room, with a message and a pointer to another room.
+Rooms, by default, live forever, even after the last user
+quits. There's an [admin API to delete rooms](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version) and a [tombstone
+event](https://spec.matrix.org/v1.2/client-server-api/#events-17) to redirect to another one, but neither have a GUI yet. The
+latter is part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows
+a room admin to close a room, with a message and a pointer to another
+room.
 
 ## Home server
 
@@ -501,15 +536,14 @@ explicitly configured for your domain. You can't just put:
 `@you:example.com` as a Matrix ID. That's because Matrix doesn't
 support "virtual hosting" and you'd still be connecting to rooms and
 people with your `matrix.org` identity, not `example.com` as you would
-normally expect.
+normally expect. This is also why you cannot [rename your home
+server](https://matrix.org/faq/#why-can't-i-rename-my-homeserver%3F) after the fact.
 
 That specification is what allows servers to find each other. Clients,
 on the other hand, use the [client-server discovery API](https://spec.matrix.org/v1.2/client-server-api/#server-discovery): this is
 what allows a given client to find your home server when you type your
 Matrix ID on login.
 
-TODO: https://matrix.org/faq/#why-can't-i-rename-my-homeserver%3F
-
 TODO: review FAQ
 
 # Performance
@@ -524,10 +558,10 @@ servers.
 
 There are other promising home servers implementations from a
 performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), golang, [entered beta in late
-2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit), Rust, beta), but none of those are
-feature-complete so there's a tradeoff to be made there.  Synapse is
-also adding a lot of feature fast, so it's unlikely those other
-servers will ever catch up.
+2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit), Rust, beta; [others](https://matrix.org/faq/#can-i-write-a-matrix-homeserver%3F)), but none of those
+are feature-complete so there's a tradeoff to be made there.  Synapse
+is also adding a lot of features fast, so it's an open question whether
+the others will ever catch up.
 
 Matrix can feel slow sometimes. For example, joining the "Matrix HQ"
 room in Element (from matrix.debian.social) takes a few *minutes* and
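
(For reference, the rate-limiting knobs mentioned above live in Synapse's `homeserver.yaml`; a rough sketch of the shape they take, with illustrative values, see the sample config linked above for the real defaults:)

    rc_message:
      per_second: 0.2
      burst_count: 10
    rc_joins:
      local:
        per_second: 0.1
        burst_count: 10
      remote:
        per_second: 0.01
        burst_count: 10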

edits...
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 291228bd..3a518f8a 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -760,32 +760,16 @@ Main pool creation is:
 
 ## first sync
 
-sanoid... syncoid... probably better to do it by hand, but this is
-easier. slooow everything feels like it has ~30-50ms latency extra:
+I used syncoid to copy all pools over to the external device. syncoid
+is a thing that's part of the [sanoid project](https://github.com/jimsalterjrs/sanoid) which is
+specifically designed to sync snapshots between pools, typically over
+SSH links, but it can also operate locally.
 
-    anarcat@curie:sanoid$ LANG=C top -b  -n 1 | head -20
-    top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
-    Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
-    %Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
-    MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
-    MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
+The `sanoid` command had a `--readonly` argument to simulate changes,
+but `syncoid` didn't, so I [tried to fix that with an upstream PR](https://github.com/jimsalterjrs/sanoid/pull/748).
 
-        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
-         70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
-    4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
-    3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
-    3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
-    3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
-        350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
-        351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
-    3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
-    4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
-    4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
-        352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
-        353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
-        354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
-
-The full first sync was:
+It seems it would be better to do this by hand, but this was much 
+easier. The full first sync was:
 
     root@curie:/home/anarcat# ./bin/syncoid -r  bpool bpool-tubman
 
@@ -850,8 +834,37 @@ The full first sync was:
 
 Funny how the `CRITICAL ERROR` doesn't actually stop `syncoid` and it
 just carries on merrily doing when it's telling you it's "cowardly
-refusing to destroy your existing target"... Maybe that's [my pull
-request that broke something though](https://github.com/jimsalterjrs/sanoid/pull/748).
+refusing to destroy your existing target"... Maybe that's because my pull
+request broke something though...
+
+During the transfer, the computer was very sluggish: everything felt
+like it had ~30-50ms of extra latency:
+
+    anarcat@curie:sanoid$ LANG=C top -b  -n 1 | head -20
+    top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
+    Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
+    %Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
+    MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
+    MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
+
+        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
+         70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
+    4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
+    3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
+    3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
+    3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
+        350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
+        351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
+    3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
+    4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
+    4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
+        352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
+
+I wonder how much of that is due to syncoid, particularly because I
+often saw `mbuffer` and `pv` in there which are not strictly necessary
+to do those kinds of operations, as far as I understand.
 
 Once that's done, export the pools to disconnect the drive:
 

fix some typos, seems like i need a spellcheck
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 5789fbb1..58bf5154 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -459,7 +459,7 @@ rooms on the server, even if they are not necessarily present on that
 server directly, in case another server that has connected users hosts it.
 
 Rooms, by default, live forever, even after the last user quits. There
-is a[tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) but it doesn't have a GUI for it yet. That is
+is a [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) but it doesn't have a GUI for it yet. That is
 part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows a room
 admin to close a room, with a message and a pointer to another room.
 
@@ -548,7 +548,7 @@ that's a first problem.
 But even in conversations, I "feel" people don't immediately respond
 as fast. In fact, an interesting double-blind experiment that could be
 made would be to have people guess whether the person they are talking
-to is on IRC or Matrix. My theory would be thatq people could notice
+to is on IRC or Matrix. My theory would be that people could notice
 that Matrix users are slower, if only because of the TCP round-trip
 time each message has to take.
 
@@ -703,7 +703,7 @@ Just taking the [latest weekly Matrix report](https://matrix.org/blog/2022/05/27
 *three* new MSCs proposed, just last week! There's even a graph that
 shows the number of MSCs is progressing steadily, at 600+ proposals
 total, with the majority (300+) "new". I would guess the "merged" ones
-are at about 150. That's a lot of of text. 
+are at about 150. That's a lot of text.
 
 That includes kind of useless stuff like [3D worlds](https://github.com/matrix-org/matrix-spec-proposals/pull/3815) which,
 frankly, I don't think you should be working on when you have such
@@ -859,6 +859,8 @@ dead. Just like IRC is dead now.
 I wonder which path Matrix will take. Could it liberate us from those
 vicious cycles?
 
+TODO: spellcheck
+
 [[!tag draft]]
 
 [^1]: [According to Wikipedia](https://en.wikipedia.org/wiki/Internet_Relay_Chat#Modern_IRC), there are currently about 500

prendre des commentaires de Thib, merci!
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 81d46acf..5789fbb1 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -43,7 +43,8 @@ is a separate, if valid, concern.)  Obviously, an hostile server
 normally tightly controlled. So, if you trust your IRC operators, you
 should be fairly safe. Obviously, clients *can* (and often do, even if
 [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all messages, but this is generally not
-the default. [Irssi](https://irssi.org/), for example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
+the default. [Irssi](https://irssi.org/), for example, does [not log by
+default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412). Some IRC bouncers *do* log to disk however...
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
@@ -51,7 +52,8 @@ database. Then it will transmit that message to all clients connected
 to that server and room, and to all other servers that have clients
 connected to that room. Those remote servers, in turn, will keep a
 copy of that message and all its metadata in their own database, by
-default forever.
+default forever. On encrypted rooms, thankfully, those messages are
+encrypted, but not their metadata.
 
 There is a mechanism to expire entries in Synapse, but it is [not
 enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one should generally assume that a message
@@ -101,7 +103,7 @@ log retention policies well defined for installed packages, and those
 
 ## Matrix.org privacy policy
 
-When I first looked at Matrix, a long time ago, Element.io was called
+When I first looked at Matrix, five years ago, Element.io was called
 [Riot.im](https://riot.im/) and had a [rather dubious privacy policy](https://web.archive.org/web/20170317115535/https://riot.im/privacy):
 
 > We currently use cookies to support our use of Google Analytics on
@@ -116,7 +118,7 @@ When I first looked at Matrix, a long time ago, Element.io was called
 
 When I asked Matrix people about why they were using Google analytics,
 they explained this was for development purposes and they were aiming
-for velocity at this point, not privacy.
+for velocity at the time, not privacy.
 
 They also included a "free to snitch" clause:
 
@@ -153,7 +155,7 @@ Notable points of the new privacy policies:
  * [Element 2.2.1](https://element.io/privacy): mentions many more third parties (Twilio,
    Stripe, [Quaderno](https://www.quaderno.io/), LinkedIn, Twitter, Google, [Outplay](https://www.outplayhq.com/),
    [PipeDrive](https://www.pipedrive.com/), [HubSpot](https://www.hubspot.com/), [Posthog](https://posthog.com/), Sentry, and [Matomo](https://matomo.org/)
-   (phew!)
+   (phew!) used when you are paying Matrix.org for hosting
 
 I'm not super happy with all the trackers they have on the Element
 platform, but then again you don't have to use that service. Your
@@ -318,23 +320,16 @@ This is based on an [admin API](https://matrix-org.github.io/synapse/latest/usag
 > * purge history of theses rooms
 > * shutdown rooms
 
-Matrix doesn't have IP-specific moderation mechanisms to block users
-from (say) Tor or known VPNs to limit abuse. Furthermore, because
-users joining a room may come from another server, room moderators are
-at the mercy of the registration and moderation policies of those
-servers. Matrix is like IRC's `+R` mode ("only registered users can
-join") by default, except that anyone can register their own
-homeserver, which makes this limited. There's no API to block a
-specific homeserver, so this must be done at the system
-(e.g. netfilter / firewall) level, which, again, might not be obvious
-to a novice.
-
-Furthermore, it can be tricky to block a hostile homeserver if they
-are ready to move around the IP space. It would be better to be *also*
-able to block server *names* altogether, and have those tools
-available to room admins, just like IRC channel admins can block users
-based on their "netmask" (basically their reverse IP address lookup)
-or IP address.
+Because users joining a room may come from another server, room
+moderators are at the mercy of the registration and moderation
+policies of those servers. Matrix is like IRC's `+R` mode ("only
+registered users can join") by default, except that anyone can
+register their own homeserver, which makes this limited.
+
+Server admins can block IP addresses and home servers, but those tools
+are not currently available to room admins. So it would be nice to
+give room admins that capability, just like IRC channel admins
+can block users based on their IP address.
 
 Matrix has the concept of guest accounts, but it is not used very
 much, and virtually no client supports it. This contrasts with the way
@@ -353,7 +348,7 @@ repeated login, registration, joining, or messaging attempts. It may
 also end up throttling servers on the federation based on those
 settings.
 
-TODO: you can use the admin API to impersonate a room admin? see also
+TODO: you can use the admin API to impersonate a room admin? YES! see also
 the other TODO below
 
 # Availability
@@ -430,7 +425,10 @@ connected users and servers.
 
 TODO: how does admin work again? can server B hijack a room on server
 A? I had noted "admin from serverB join and belong as admin. each
-server needs to have admin."
+server needs to have admin." answer: no. server B would need to
+impersonate a room admin on server B to hijack the room. they can
+basically fork the room to modify it, but that only affects users on
+that server.
 
 A room is therefore not fundamentally addressed with the above alias;
 instead, it has an internal Matrix ID, which is basically a random
@@ -440,17 +438,19 @@ avoid collisions.
 This can get a little confusing. For example, the `#fractal:gnome.org`
 room is an alias on the `gnome.org` server, but the room ID is
 `!hwiGbsdSTZIwSRfybq:matrix.org`. That's because the room was created
-on `matrix.org`, but admins are on `gnome.org` now.
+on `matrix.org`, but the preferred branding is `gnome.org` now.
 
 Discovering rooms can therefore be tricky: there *is* a per-server room
 directory, but Matrix.org people are trying to deprecate it in favor
 of "Spaces". Room directories were ripe for abuse: anyone can create a
-room, so anyone can show up in there. In contrast, a "Space" is
-basically a room that's an index of other rooms (including other
-spaces), so existing moderation and administration mechanism that work
-in rooms can (somewhat) work in spaces as well. This also allows rooms
-to work across federation, regardless on which server they were
-originally created.
+room, so anyone can show up in there. It's possible to restrict who
+can add aliases, but directories were still seen as too limited.
+
+In contrast, a "Space" is basically a room that's an index of other
+rooms (including other spaces), so existing moderation and
+administration mechanism that work in rooms can (somewhat) work in
+spaces as well. This also allows rooms to work across federation,
+regardless of which server they were originally created on.
 
 New users can be added to a space or room [automatically](https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1378-L1388) in
 Synapse. (Existing users can be told about the space with a server
@@ -580,6 +580,10 @@ seem stalled at the time of writing. The Matrix people have also
 solving large, internet-scale routing problems Matrix is coming
 to. See also [this talk at FOSDEM 2022](https://www.youtube.com/watch?v=diwzQtGgxU8&list=PLl5dnxRMP1hW7HxlJiHSox02MK9_KluLH&index=19).
 
+Room join performance improvements are also coming down the pipeline,
+with [sliding sync](https://github.com/matrix-org/matrix-spec-proposals/blob/kegan/sync-v3/proposals/3575-sync.md), [lazy loading over federation](https://github.com/matrix-org/matrix-spec-proposals/pull/2775), and [fast
+room joins](https://github.com/matrix-org/synapse/milestone/6). So there's hope there as well.
+
 # Usability
 
 ## Onboarding and workflow
@@ -594,9 +598,11 @@ great:
     <https://app.element.io/#/room%2F%23matrix-dev%3Amatrix.org> and
     then you need to register, aaargh
 
-As you might have guessed by now, there is a [proposed
-specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to solve this, but web browsers need to adopt it
-as well, so that's far from actually being solved.
+As you might have guessed by now, there is a [specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to
+solve this, but web browsers need to adopt it as well, so that's far
+from actually being solved. At least browsers generally know about the
+`matrix:` scheme, it's just not exactly clear what they should do with
+it.
 
 In general, when compared with tools like Signal or Whatsapp, Matrix
 doesn't fare as well in terms of user discovery. I probably have some
@@ -688,6 +694,11 @@ literally *hundreds* of MSCs that are flying around. It's hard to tell
 what's been adopted and what hasn't, and even harder to figure out if
 *your* specific client has implemented it.
 
+As for a lot of people, one answer is "rewrite it in Rust": Matrix is
+working to implement a lot of those specifications in a
+[matrix-rust-sdk](https://github.com/matrix-org/matrix-rust-sdk) library that's designed to take the
+implementation details away from users. But it's a lot of work!
+
 Just taking the [latest weekly Matrix report](https://matrix.org/blog/2022/05/27/this-week-in-matrix-2022-05-27#dept-of-spec-), you find that
 *three* new MSCs proposed, just last week! There's even a graph that
 shows the number of MSCs is progressing steadily, at 600+ proposals

more todos
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index fa35eb3c..291228bd 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -858,14 +858,14 @@ Once that's done, export the pools to disconnect the drive:
     zpool export bpool-tubman
     zpool export rpool-tubman
 
-TODO: move to offsite host
-TODO: setup cron job (or timer?)
+# Remaining issues
+
+TODO: move send/receive backups to offsite host
+TODO: setup backup cron job (or timer?)
 
 TODO: consider alternatives to syncoid, considering the code issues
 (large functions, lots of `system` calls without arrays...)
 
-# Remaining issues
-
 TODO: swap. how do we do it?
 
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
@@ -873,11 +873,18 @@ TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 TODO: ship my on .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
 here.
 
-TODO: send/recv, automated snapshots
-
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+TODO: debugging tools:
+
+    tail -f /proc/spl/kstat/zfs/dbgmsg
+    zpool iostat 1 -l
+    -q queues
+    -r size histogram per vdev
+    -w latency histogram
+    -v verbose include vdevq
+
 TODO: review this blog post
 https://github.com/djacu/nixos-on-zfs/blob/main/blog/2022-03-24.md
 which seems to explain a bit the layout behind the installer
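
(For the backup cron job / timer TODO above, the sketch I have in mind is a plain systemd service and timer pair, roughly as below; names and paths are placeholders, since I currently run `syncoid` from a local checkout rather than a packaged `/usr/sbin/syncoid`:)

    # /etc/systemd/system/syncoid-tubman.service
    [Unit]
    Description=sync ZFS snapshots to the tubman pools

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/syncoid -r bpool bpool-tubman
    ExecStart=/usr/sbin/syncoid -r rpool rpool-tubman

    # /etc/systemd/system/syncoid-tubman.timer
    [Unit]
    Description=daily syncoid run

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable with `systemctl enable --now syncoid-tubman.timer`.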

incorporate lots of edits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 32d555fd..81d46acf 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -1,35 +1,49 @@
-I have some concerns about Matrix (the protocol, not the apparently
-horrible movie that came out recently, although I do have concerns
-about that as well). I've been watching the project for a long time,
-and it seems more and more like a truly promising alternative to many
-protocols like IRC, XMPP, and Signal.
+I have some concerns about Matrix (the protocol, not the movie that
+came out recently, although I do have concerns about that as
+well). I've been watching the project for a long time, and it seems
+more and more like a promising alternative to many protocols like
+IRC, XMPP, and Signal.
 
 This review will sound a bit negative, because it focuses on the
 concerns I have. I am the operator of an IRC network and people keep
 asking me to bridge it with Matrix. I have myself considered a few
-times the idea of just giving up and converting it. This space
-documents why neither of those have happened yet.
+times the idea of just giving up and converting to Matrix. This space
+is a living document exploring my research of that problem space.
+
+This article was written over the course of the last three months, but
+I have been watching the Matrix project for years (my logs seem to say
+2016 at least), and it is rather long. It will likely take you half an
+hour to read, so [copy this over to your ebook reader](https://gitlab.com/anarcat/wallabako), your
+tablet, dead trees, and sit back comfortably. Or, alternatively, just
+jump to a section that interests you or, more likely, the
+[conclusion](#conclusion).
 
 [[!toc levels=2]]
 
 # Security and privacy
 
+I have some concerns about the security promises of Matrix. It's
+advertised as "secure" with "E2E [end-to-end] encryption", but how
+does it actually work?
+
 ## Data retention defaults
 
-One of my main concerns with Matrix is data retention.
-
-In IRC, servers don't actually keep messages all that long: they pass
-them along to other servers and clients basically as fast as they can,
-only keep them from memory, and move on to the next message. There are
-no concerns about data retention on messages (and their metadata)
-other than the network layer. (I'm ignoring the issues with user
-registration here, which is a separate, if valid, concern.)
-Obviously, an hostile server *could* start everything it gets of
-course, but typically IRC federations are tightly controlled and, if
-you trust your IRC server, you should be fairly safe. Obviously,
-clients *can* (and often do, even if [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all
-messages, but this is typically not the default. [Irssi](https://irssi.org/), for
-example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
+One of my main concerns with Matrix is data retention, which is a key
+part of security in a threat model where (for example) a hostile
+state actor wants to surveil your communications and can seize your
+devices.
+
+On IRC, servers don't actually keep messages all that long: they pass
+them along to other servers and clients as fast as they can, only keep
+them in memory, and move on to the next message. There are no concerns
+about data retention on messages (and their metadata) other than the
+network layer. (I'm ignoring the issues with user registration, which
+is a separate, if valid, concern.)  Obviously, a hostile server
+*could* log everything passing through it, but IRC federations are
+normally tightly controlled. So, if you trust your IRC operators, you
+should be fairly safe. Obviously, clients *can* (and often do, even if
+[OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all messages, but this is generally not
+the default. [Irssi](https://irssi.org/), for example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
@@ -37,30 +51,28 @@ database. Then it will transmit that message to all clients connected
 to that server and room, and to all other servers that have clients
 connected to that room. Those remote servers, in turn, will keep a
 copy of that message and all its metadata in their own database, by
-default basically forever.
+default forever.
 
-Indeed, there is a mechanism to expire entries in Synapse, but it is
-[not enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one can safely assume that a message
+There is a mechanism to expire entries in Synapse, but it is [not
+enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one should generally assume that a message
 sent on Matrix is never expired.
 
 ## GDPR in the federation
 
 But even if that setting was enabled by default, how do you control
-it? This is a fundamental problem of the federation: if anyone is
-allowed to join a given room (which is basically the default
-configuration of any room), anyone will log (deliberately or
-inadvertently) all content and metadata in that room.
+it? This is a fundamental problem of the federation: if any user is
+allowed to join a room (which is the default), those users' servers
+will log all content and metadata from that room. That includes
+private, one-on-one conversations, since those are essentially rooms
+as well.
 
 In the context of the GDPR, this is really tricky: who's the
-responsible party (know as the "data controller") here? It's basically
-any yahoo who fires up a home server and joins a room. Good luck
-enforcing the GDPR on those folks. In the brave new "peer-to-peer"
-world that Matrix is heading towards, it's, also, basically any client
-whatsoever, which also brings its own set of problems. 
+responsible party (known as the "data controller") here? It's
+basically any yahoo who fires up a home server and joins a room.
 
 In a federated network, one has to wonder whether GDPR enforcement is
-even possible at all. Assuming you want to enforce your right to be
-forgotten in a given room, you would have to:
+even possible at all. But in Matrix in particular, if you want to
+enforce your right to be forgotten in a given room, you would have to:
 
  1. enumerate all the users that ever joined the room while you were
     there
@@ -70,7 +82,11 @@ forgotten in a given room, you would have to:
 I recognize this is a hard problem to solve while still keeping an
 open ecosystem. But I believe that Matrix should have much stricter
 defaults towards data retention than right now. Message expiry should
-be enforce *by default*. 
+be enforced *by default*, for example.
+
+Also keep in mind that, in the brave new [peer-to-peer](https://github.com/matrix-org/pinecone) world that
+Matrix is heading towards, the boundary between server and client is
+likely to be fuzzier, which would make applying the GDPR even more difficult.
 
 In fact, maybe Synapse should be designed so that there's no
 configurable flag to turn off data retention. A bit like how most
@@ -80,27 +96,47 @@ this was designed to keep hard drives from filling up, but it also has
 the added benefit of limiting the amount of personal information kept
 on disk in this modern day. (Arguably, syslog doesn't rotate logs on
 its own, but, say, Debian GNU/Linux, as an installed system, does have
-log retention policies well defined for installed packages. And "no
-expiry" is basically a bug.)
+log retention policies well defined for installed packages, and those
+[can be discussed](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=759382). And "no expiry" is definitely a bug.
 
 ## Matrix.org privacy policy
 
 When I first looked at Matrix, a long time ago, Element.io was called
-[Vector.im](https://vector.im/) and had a rather dubious privacy policy. I
-unfortunately cannot find a copy of it now on the internet archive,
-but it openly announced it was collecting (Google!) analytics on its
-users. When I asked Matrix people about this, they explained this was
-for development purposes and they were aiming for velocity at this
-point, not privacy. I am paraphrasing: I am sorry I lost track of that
-conversation that happened so long ago, you will have just to trust me
-on this.
-
-I think that, like the current retention policies, this set a bad
-precedent. Thankfully, since that policy was drafted, the GDPR
-happened and it seems like both the [Element.io privacy policy](https://element.io/privacy) and
-the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been somewhat improved.
-
-Notable points of the privacy policies:
+[Riot.im](https://riot.im/) and had a [rather dubious privacy policy](https://web.archive.org/web/20170317115535/https://riot.im/privacy):
+
+> We currently use cookies to support our use of Google Analytics on
+> the Website and Service. Google Analytics collects information about
+> how you use the Website and Service.
+>
+> [...]
+>
+> This helps us to provide you with a good experience when you
+> browse our Website and use our Service and also allows us to improve
+> our Website and our Service. 
+
+When I asked Matrix people about why they were using Google analytics,
+they explained this was for development purposes and they were aiming
+for velocity at this point, not privacy.
+
+They also included a "free to snitch" clause:
+
+> If we are or believe that we are under a duty to disclose or share
+> your personal data, we will do so in order to comply with any legal
+> obligation, the instructions or requests of a governmental authority
+> or regulator, including those outside of the UK.
+
+Those are really *broad* terms.
+
+Like the current retention policies, such user tracking and
+... "liberal" collaboration practices with the state sets a bad
+precedent for other home servers.
+
+Thankfully, since the above policy was published (2017), the GDPR was
+"implemented" ([2018](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation)) and it seems like both the [Element.io
+privacy policy](https://element.io/privacy) and the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been
+somewhat improved since then.
+
+Notable points of the new privacy policies:
 
  * [2.3.1.1](https://matrix.org/legal/privacy-notice#2311-federation): the "federation" section actually outlines that
    "*Federated homeservers and Matrix clients which respect the Matrix
@@ -112,17 +148,17 @@ Notable points of the privacy policies:
    `matrix.org` service
  * [2.10](https://matrix.org/legal/privacy-notice#210-who-else-has-access-to-my-data): Upcloud, Mythic Beast, Amazon, and CloudFlare possibly
    have access to your data (it's nice to at least mention this in the

(Diff truncated)
monitoring
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 6f3e2b19..fa35eb3c 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -886,6 +886,11 @@ TODO: bpool and rpool are both pools and datasets. that's pretty
 confusing, but also very useful because it allows for pool-wide
 recursive snapshots, which are used for the backup system
 
+TODO: ZFS monitoring?
+https://pieterbakker.com/monitoring-zfs-with-zed/ mentions
+email... something deployed on tubman, probably needs deploy or at
+least testing on curie as well.
+
 ## fio improvements
 
 I really want to improve my experience with `fio`. Right now, I'm just

some more TODOs
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 4185290e..6f3e2b19 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -878,6 +878,14 @@ TODO: send/recv, automated snapshots
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+TODO: review this blog post
+https://github.com/djacu/nixos-on-zfs/blob/main/blog/2022-03-24.md
+which seems to explain a bit the layout behind the installer
+
+TODO: bpool and rpool are both pools and datasets. that's pretty
+confusing, but also very useful because it allows for pool-wide
+recursive snapshots, which are used for the backup system
+
 ## fio improvements
 
 I really want to improve my experience with `fio`. Right now, I'm just

exported the pools, technically ready to move offsite
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 13c005a8..4185290e 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -853,7 +853,13 @@ just carries on merrily doing when it's telling you it's "cowardly
 refusing to destroy your existing target"... Maybe that's [my pull
 request that broke something though](https://github.com/jimsalterjrs/sanoid/pull/748).
 
-TODO: move to offsite host, setup cron job / timer?
+Once that's done, export the pools to disconnect the drive:
+
+    zpool export bpool-tubman
+    zpool export rpool-tubman
+
+TODO: move to offsite host
+TODO: setup cron job (or timer?)
 
 TODO: consider alternatives to syncoid, considering the code issues
 (large functions, lots of `system` calls without arrays...)

some more "benchmarks"
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 1d0427c8..13c005a8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -596,6 +596,87 @@ Another test was performed while in "rescue" mode but was ultimately
 lost. It's actually still in the old M.2 drive, but I cannot mount
 that device with the external USB controller I have right now.
 
+## Real world experience
+
+This section documents not synthetic benchmarks, but actual real-world
+workloads, comparing before and after I switched my workstation to
+ZFS.
+
+### Docker performance
+
+I had the feeling that running some git hook (which was firing a
+Docker container) was "slower" somehow. It seems that, at runtime, ZFS
+backends are significantly slower than their overlayfs/ext4 equivalents:
+
+    May 16 14:42:52 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87\x2dinit-merged.mount: Succeeded.
+    May 16 14:42:52 curie systemd[5161]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87\x2dinit-merged.mount: Succeeded.
+    May 16 14:42:52 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+    May 16 14:42:53 curie dockerd[1723]: time="2022-05-16T14:42:53.087219426-04:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10 pid=151170
+    May 16 14:42:53 curie systemd[1]: Started libcontainer container af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10.
+    May 16 14:42:54 curie systemd[1]: docker-af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10.scope: Succeeded.
+    May 16 14:42:54 curie dockerd[1723]: time="2022-05-16T14:42:54.047297800-04:00" level=info msg="shim disconnected" id=af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10
+    May 16 14:42:54 curie dockerd[998]: time="2022-05-16T14:42:54.051365015-04:00" level=info msg="ignoring event" container=af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
+    May 16 14:42:54 curie systemd[2444]: run-docker-netns-f5453c87c879.mount: Succeeded.
+    May 16 14:42:54 curie systemd[5161]: run-docker-netns-f5453c87c879.mount: Succeeded.
+    May 16 14:42:54 curie systemd[2444]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+    May 16 14:42:54 curie systemd[5161]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+    May 16 14:42:54 curie systemd[1]: run-docker-netns-f5453c87c879.mount: Succeeded.
+    May 16 14:42:54 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+
+Translating this:
+
+ * container setup: ~1 second
+ * container runtime: ~1 second
+ * container teardown: ~1 second
+ * total runtime: 2-3 seconds
+
+Obviously, those timestamps are not quite accurate enough to make
+precise measurements...
+
+After I switched to ZFS:
+
+    mai 30 15:31:39 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf\x2dinit.mount: Succeeded. 
+    mai 30 15:31:39 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf\x2dinit.mount: Succeeded. 
+    mai 30 15:31:40 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
+    mai 30 15:31:40 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
+    mai 30 15:31:41 curie dockerd[3199]: time="2022-05-30T15:31:41.551403693-04:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 pid=141080 
+    mai 30 15:31:41 curie systemd[1]: run-docker-runtime\x2drunc-moby-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142-runc.ZVcjvl.mount: Succeeded. 
+    mai 30 15:31:41 curie systemd[5287]: run-docker-runtime\x2drunc-moby-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142-runc.ZVcjvl.mount: Succeeded. 
+    mai 30 15:31:41 curie systemd[1]: Started libcontainer container 42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142. 
+    mai 30 15:31:45 curie systemd[1]: docker-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142.scope: Succeeded. 
+    mai 30 15:31:45 curie dockerd[3199]: time="2022-05-30T15:31:45.883019128-04:00" level=info msg="shim disconnected" id=42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 
+    mai 30 15:31:45 curie dockerd[1726]: time="2022-05-30T15:31:45.883064491-04:00" level=info msg="ignoring event" container=42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 
+    mai 30 15:31:45 curie systemd[1]: run-docker-netns-e45f5cf5f465.mount: Succeeded. 
+    mai 30 15:31:45 curie systemd[5287]: run-docker-netns-e45f5cf5f465.mount: Succeeded. 
+    mai 30 15:31:45 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
+    mai 30 15:31:45 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded.
+
+That's double or triple the run time, from 2 seconds to 6
+seconds. Most of the time is spent in run time, inside the
+container. Here's the breakdown:
+
+ * container setup: ~2 seconds
+ * container run: ~4 seconds
+ * container teardown: ~1 second
+ * total run time: about ~6-7 seconds
+
+That's a two- to three-fold increase! Clearly something is going on
+here that I should tweak. It's possible that this code path is less
+optimized in Docker. I also worry about podman, but apparently [it
+also supports ZFS backends](https://www.jwillikers.com/podman-with-btrfs-and-zfs). Possibly it would perform better, but
+at this stage I wouldn't have a good comparison: maybe it would have
+performed better on non-ZFS as well...
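+
+For reference, here is a sketch of how one could check and pin the
+storage driver; the `daemon.json` part assumes `/var/lib/docker`
+already lives on a ZFS dataset and that the file holds no other
+settings:
+
+    # check which storage driver Docker picked
+    docker info --format '{{.Driver}}'
+    # force it explicitly (this overwrites /etc/docker/daemon.json!)
+    echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
+    systemctl restart docker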
+
+### Interactivity
+
+While doing the offsite backups (below), the system became somewhat
+"sluggish". I felt everything was slow, and I estimate it introduced
+~50ms latency in any input device.
+
+Arguably, those input devices are all USB and the external drive was
+also connected through USB, but I suspect the ZFS drivers are not as
+well tuned to the scheduler as the regular filesystem drivers...
+
 # Recovery procedures
 
 For test purposes, I unmounted all systems during the procedure:

first sync done
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2201eeb6..1d0427c8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -618,6 +618,165 @@ then you mount the root filesystem and all the others:
     mount -t tmpfs tmpfs /mnt/run &&
     mkdir /mnt/run/lock
 
+# Offsite backup
+
+TODO: explain why I'm doing this, and how it works broadly.
+
+## Partitioning
+
+The above partitioning procedure used `sgdisk`, but I couldn't figure
+out how to do this with `sgdisk`, so this uses `sfdisk` to copy the
+partition table from the first disk to an external, identical drive:
+
+    sfdisk -d /dev/nvme0n1 | sfdisk --no-reread /dev/sda --force
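+
+(To double-check the result, something like this should do, assuming
+the `sfdisk` from util-linux:)
+
+    sfdisk --verify /dev/sda
+    lsblk /dev/sda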
+
+## Pool creation
+
+This is similar to the main pool creation, except we tweaked a few
+bits after changing the upstream procedure:
+
+    zpool create \
+            -o cachefile=/etc/zfs/zpool.cache \
+            -o ashift=12 -d \
+            -o feature@async_destroy=enabled \
+            -o feature@bookmarks=enabled \
+            -o feature@embedded_data=enabled \
+            -o feature@empty_bpobj=enabled \
+            -o feature@enabled_txg=enabled \
+            -o feature@extensible_dataset=enabled \
+            -o feature@filesystem_limits=enabled \
+            -o feature@hole_birth=enabled \
+            -o feature@large_blocks=enabled \
+            -o feature@lz4_compress=enabled \
+            -o feature@spacemap_histogram=enabled \
+            -o feature@zpool_checkpoint=enabled \
+            -O acltype=posixacl -O xattr=sa \
+            -O compression=lz4 \
+            -O devices=off \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/boot -R /mnt \
+            bpool-tubman /dev/sdb3
+
+The changes from the main boot pool are:
+
+ * [no unicode normalization](https://github.com/openzfs/openzfs-docs/pull/306)
+ * different device path (`sdb` used to be the M.2 device, it's now
+   `nvme0n1`)
+ * [reordered parameters](https://github.com/openzfs/openzfs-docs/pull/308)
+
+Main pool creation is:
+
+    zpool create \
+            -o ashift=12 \
+            -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
+            -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
+            -O compression=zstd \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/ -R /mnt \
+            rpool-tubman /dev/sdb4
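+
+(At this point, both new pools should show up, e.g.:)
+
+    zpool list bpool-tubman rpool-tubman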
+
+## First sync
+
+sanoid... syncoid... probably better to do it by hand, but this is
+easier. slooow: everything feels like it has an extra ~30-50ms of latency:
+
+    anarcat@curie:sanoid$ LANG=C top -b  -n 1 | head -20
+    top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
+    Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
+    %Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
+    MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
+    MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
+
+        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
+         70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
+    4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
+    3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
+    3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
+    3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
+        350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
+        351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
+    3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
+    4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
+    4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
+        352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
+
+The full first sync was:
+
+    root@curie:/home/anarcat# ./bin/syncoid -r  bpool bpool-tubman
+
+    CRITICAL ERROR: Target bpool-tubman exists but has no snapshots matching with bpool!
+                    Replication to target would require destroying existing
+                    target. Cowardly refusing to destroy your existing target.
+
+              NOTE: Target bpool-tubman dataset is < 64MB used - did you mistakenly run
+                    `zfs create bpool-tubman` on the target? ZFS initial
+                    replication must be to a NON EXISTENT DATASET, which will
+                    then be CREATED BY the initial replication process.
+
+    INFO: Sending oldest full snapshot bpool/BOOT@test (~ 42 KB) to new target filesystem:
+    44.2KiB 0:00:00 [4.19MiB/s] [========================================================================================================================] 103%            
+    INFO: Updating new target filesystem with incremental bpool/BOOT@test ... syncoid_curie_2022-05-30:12:50:39 (~ 4 KB):
+    2.13KiB 0:00:00 [ 114KiB/s] [===============================================================>                                                         ] 53%            
+    INFO: Sending oldest full snapshot bpool/BOOT/debian@install (~ 126.0 MB) to new target filesystem:
+     126MiB 0:00:00 [ 308MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Updating new target filesystem with incremental bpool/BOOT/debian@install ... syncoid_curie_2022-05-30:12:50:39 (~ 113.4 MB):
+     113MiB 0:00:00 [ 315MiB/s] [=======================================================================================================================>] 100%
+
+    root@curie:/home/anarcat# ./bin/syncoid -r  rpool rpool-tubman
+
+    CRITICAL ERROR: Target rpool-tubman exists but has no snapshots matching with rpool!
+                    Replication to target would require destroying existing
+                    target. Cowardly refusing to destroy your existing target.
+
+              NOTE: Target rpool-tubman dataset is < 64MB used - did you mistakenly run
+                    `zfs create rpool-tubman` on the target? ZFS initial
+                    replication must be to a NON EXISTENT DATASET, which will
+                    then be CREATED BY the initial replication process.
+
+    INFO: Sending oldest full snapshot rpool/ROOT@syncoid_curie_2022-05-30:12:50:51 (~ 69 KB) to new target filesystem:
+    44.2KiB 0:00:00 [2.44MiB/s] [===========================================================================>                                             ] 63%            
+    INFO: Sending oldest full snapshot rpool/ROOT/debian@install (~ 25.9 GB) to new target filesystem:
+    25.9GiB 0:03:33 [ 124MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Updating new target filesystem with incremental rpool/ROOT/debian@install ... syncoid_curie_2022-05-30:12:50:52 (~ 3.9 GB):
+    3.92GiB 0:00:33 [ 119MiB/s] [======================================================================================================================>  ] 99%            
+    INFO: Sending oldest full snapshot rpool/home@syncoid_curie_2022-05-30:12:55:04 (~ 276.8 GB) to new target filesystem:
+     277GiB 0:27:13 [ 174MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/home/root@syncoid_curie_2022-05-30:13:22:19 (~ 2.2 GB) to new target filesystem:
+    2.22GiB 0:00:25 [90.2MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var@syncoid_curie_2022-05-30:13:22:47 (~ 5.6 GB) to new target filesystem:
+    5.56GiB 0:00:32 [ 176MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/cache@syncoid_curie_2022-05-30:13:23:22 (~ 627.3 MB) to new target filesystem:
+     627MiB 0:00:03 [ 169MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/lib@syncoid_curie_2022-05-30:13:23:28 (~ 69 KB) to new target filesystem:
+    44.2KiB 0:00:00 [1.40MiB/s] [===========================================================================>                                             ] 63%            
+    INFO: Sending oldest full snapshot rpool/var/lib/docker@syncoid_curie_2022-05-30:13:23:28 (~ 442.6 MB) to new target filesystem:
+     443MiB 0:00:04 [ 103MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254 (~ 6.3 MB) to new target filesystem:
+    6.49MiB 0:00:00 [12.9MiB/s] [========================================================================================================================] 102%            
+    INFO: Updating new target filesystem with incremental rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254 ... syncoid_curie_2022-05-30:13:23:34 (~ 4 KB):
+    1.52KiB 0:00:00 [27.6KiB/s] [============================================>                                                                            ] 38%            
+    INFO: Sending oldest full snapshot rpool/var/lib/flatpak@syncoid_curie_2022-05-30:13:23:36 (~ 2.0 GB) to new target filesystem:
+    2.00GiB 0:00:17 [ 115MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/tmp@syncoid_curie_2022-05-30:13:23:55 (~ 57.0 MB) to new target filesystem:
+    61.8MiB 0:00:01 [45.0MiB/s] [========================================================================================================================] 108%            
+    INFO: Clone is recreated on target rpool-tubman/var/lib/docker/ed71ddd563a779ba6fb37b3b1d0cc2c11eca9b594e77b4b234867ebcb162b205 based on rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254
+    INFO: Sending oldest full snapshot rpool/var/lib/docker/ed71ddd563a779ba6fb37b3b1d0cc2c11eca9b594e77b4b234867ebcb162b205@syncoid_curie_2022-05-30:13:23:58 (~ 218.6 MB) to new target filesystem:
+     219MiB 0:00:01 [ 151MiB/s] [=======================================================================================================================>] 100%
+
+Funny how the `CRITICAL ERROR` doesn't actually stop `syncoid`: it
+just carries on merrily even while it's telling you it's "cowardly
+refusing to destroy your existing target"... Maybe that's [my pull
+request that broke something though](https://github.com/jimsalterjrs/sanoid/pull/748).
+
+TODO: move to offsite host, setup cron job / timer?
+
+TODO: consider alternatives to syncoid, considering the code issues
+(large functions, lots of `system` calls without arrays...)
+
 # Remaining issues
 
 TODO: swap. how do we do it?

clarify how to change dscverify behavior, fix typo
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 1d7fe18e..96ffe4a2 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -212,10 +212,15 @@ ran by hand.
     need to find the key yourself, add it to your keyring, and then
     adding the following to `~/.devscripts` will leverage your
     personal keys into the web of trust:
-    
+
         DSCVERIFY_KEYRINGS=~/.gnupg/pubring.gpg
 
-    You can also use `dscvrify --keyring key.gpg *.dsc` to check the
+    Note that this is *NOT* an environment variable, it needs to be
+    put in the file... An alternative is to inject keys into the
+    `~/.gnupg/trustedkeys.gpg` file, which is checked by `dscverify`
+    by default.
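+
+    For example, this is how one could inject a key into that keyring
+    (hypothetical `key.asc` file holding the signer's public key):
+
+        gpg --no-default-keyring --keyring ~/.gnupg/trustedkeys.gpg --import key.asc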
+
+    You can also use `dscverify --keyring key.gpg *.dsc` to check the
     signature by hand against a given key file.
 
 [debian-keyring package]: https://packages.debian.org/debian-keyring

more TODOs
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index ddf1cf3b..32d555fd 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -446,6 +446,10 @@ on the other hand, use the [client-server discovery API](https://spec.matrix.org
 what allows a given client to find your home server when you type your
 Matrix ID on login.
 
+TODO: https://matrix.org/faq/#why-can't-i-rename-my-homeserver%3F
+
+TODO: review FAQ
+
 # Performance
 
 This brings us to the performance of Matrix itself. Many people feel

that todo was done
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index e66882a0..ddf1cf3b 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -720,8 +720,4 @@ be dead.
 I wonder which path Matrix will take. Could it liberate us from those
 vicious cycles?
 
-TODO: irc vs email vs mastodon federation / forks
-
-
-
 [[!tag draft]]

try to finish this, still have to do a final edit
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f24c2f35..e66882a0 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -149,10 +149,12 @@ Overall, privacy protections in Matrix mostly concern message
 contents, not metadata. In other words, who's talking with who, when
 and from where is not well protected. Compared to a tool like Signal,
 which goes through great lengths to anonymise that data with features
-like [private contact discovery](https://signal.org/blog/private-contact-discovery/), [disappearing messages](https://signal.org/blog/disappearing-messages/),
+like [private contact discovery][], [disappearing messages](https://signal.org/blog/disappearing-messages/),
 [sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
 behind.
 
+[private contact discovery]: https://signal.org/blog/private-contact-discovery/
+
 This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (open in 2019) in Synapse, but this is not
 just an implementation issue, it's a flaw in the protocol itself. Home
 servers keep join/leave of all rooms, which gives clear information
@@ -262,19 +264,47 @@ matrix.org to block known abusers (users or servers). It's a good idea
 to make the bot admin of your channels, because you can't take back
 admin from a user once given.
 
-Matrix doesn't have tor/vpn-specific moderation mechanisms. It has
-the concept of guest accounts, not very used, and virtually no client
-support it. matrix is like +R by default. 
-
-TODO: rate limiting https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L833
-
-TODO: irc vs email vs mastodon federation / forks
+This is basically based on an [admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) built into
+Synapse. There's also a [new commandline tool](https://git.fout.re/pi/matrixadminhelpers) designed to do
+things like:
+
+> * System notify users (all users/users from a list, specific user)
+> * delete sessions/devices not seen for X days
+> * purge the remote media cache
+> * select rooms with various criteria (external/local/empty/created by/encrypted/cleartext)
+> * purge history of theses rooms
+> * shutdown rooms
+
+Matrix doesn't have IP-specific moderation mechanisms which would
+allow one to block users from Tor or known VPNs to limit abuse, for
+example. Furthermore, because users joining a room may come from
+another server, room moderators are at the mercy of the registration
+policies of those servers. Matrix is like IRC's `+R` mode ("only
+registered users can join") by default, except that anyone can
+register their own homeserver, which makes this very limited. There's
+no API to block a specific homeserver, so this must be done at the
+system (e.g. netfilter / firewall) level.
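+
+(A hypothetical example of what that could look like with nftables,
+assuming an existing `inet filter` table and `input` chain, and a
+made-up address for the offending homeserver:)
+
+    nft add rule inet filter input ip saddr 192.0.2.10 tcp dport 8448 drop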
+
+Matrix has the concept of guest accounts, but it is not widely used, and
+virtually no client supports it. This contrasts with the way IRC
+works: by default, anyone can join an IRC network even without
+authentication. Some channels require registration, but in general you
+are free to join and look around (until you get blocked, of course).
+
+I have heard anecdotal evidence that "moderating bridges is hell", and
+I can somewhat imagine why. Moderation is already hard enough on one
+federation, when you bridge a room with another network, you inherit
+all the problems from *that* network and the bridge is unlikely to
+have as many tools as the original network's API to control abuse...
+
+Synapse has pretty good [built-in rate-limiting](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L879-L996) which blocks
+repeated login, registration, joining, or messaging attempts. It may
+also end up throttling servers on the federation based on those
+settings.
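+
+(As an illustration, the knobs look like this in `homeserver.yaml`;
+these are made-up values, see the sample config linked above for the
+actual defaults:)
+
+    rc_message:
+      per_second: 0.2
+      burst_count: 10
+    rc_login:
+      address:
+        per_second: 0.003
+        burst_count: 5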
 
 TODO: you can use the admin API to impersonate a room admin? see also
 the other TODO below
 
-TODO: bridge moderation is hell
-
 # Availability
 
 While Matrix has a strong advantage over Signal in that it's
@@ -333,9 +363,6 @@ such an alias (in Element), you need to go in the room settings'
 name (e.g. `foo`), and then that room will be available on your
 `example.com` homeserver as `#foo:example.com`.)
 
-TODO: by default the room belongs to the public federation,. spaces as
-directory. the 
-
 A room doesn't belong to a server, it belongs to the federation.
 Anyone invited to a room (if private) can join from the room on any
 server. You can create a room on server A and when a user from server
@@ -353,6 +380,16 @@ room is an alias on the `gnome.org` server, but the room ID is
 `HASH:matrix.org`. That's because the room was created on matrix.org,
 but admins are on `gnome.org` now.
 
+Discovering rooms can therefore be tricky: there *is* a room
+directory, but Matrix.org people are trying to deprecate it in favor
+of "Spaces". Room directories were ripe for abuse: anyone can create a
+room, so anyone can show up in there. In contrast, a "Space" is
+basically a room that's an index of other rooms (including other
+spaces), so existing moderation and administration mechanisms that work
+in rooms can (somewhat) work in spaces as well. This also allows rooms
+to work across the federation, regardless of which server they were
+originally created on.
+
 New users can be added to a space or room [automatically](https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1378-L1388) in
 Synapse. (Existing users can be told about the space with a server
 notice.) The point here is to have a way to pre-populate a list of
@@ -446,13 +483,26 @@ person they are talking to is on IRC or Matrix in the backend. I bet
 people could notice that Matrix users are slower, if only because of
 the TCP round-trip time each message has to take.
 
-TODO: https://blog.lewman.com/internet-messaging-versus-congested-network.html
-
 (I assume, here, that each Matrix message is delivered through at
 least two new HTTP sessions, which therefore require up to 8 packet
 roundtrips whereas in IRC, the existing socket is reused so it's
 basically 2 round-trips.)
 
+Some [courageous person](https://blog.lewman.com/) actually made some [tests of various
+messaging platforms on a congested network](https://blog.lewman.com/internet-messaging-versus-congested-network.html). His evaluation was
+basically:
+
+ * [Briar](https://briarproject.org/): uses Tor, so unusable except locally
+ * Matrix: "struggled to send and receive messages", joining a room
+   takes forever as it has to sync all history, "took 20-30 seconds
+   for my messages to be sent and another 20 seconds for further
+   responses"
+ * [XMPP](https://xmpp.org/): "worked in real-time, full encryption, with nearly zero
+   lag"
+
+So that was interesting. I suspect IRC would have also fared better,
+but that's just a feeling.
+
 Possible improvements to this include [support for websocket](https://github.com/matrix-org/matrix-doc/issues/1148) (to
 reduce latency and overhead) and the [CoAP proxy](https://matrix.org/docs/projects/iot/coap-proxy) work [from
 2019](https://matrix.org/blog/2019/03/12/breaking-the-100-bps-barrier-with-matrix-meshsim-coap-proxy) (which allows communication over 100bps links), both of which
@@ -463,6 +513,8 @@ to. See also [this talk at FOSDEM 2022](https://www.youtube.com/watch?v=diwzQtGg
 
 # Usability
 
+## Onboarding and workflow
+
 The workflow for joining a room, when you use Element web, is not
 great:
 
@@ -477,10 +529,199 @@ As you might have guessed by now, there is a [proposed
 specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to solve this, but web browsers need to adopt it
 as well, so that's far from actually being solved.
 
-TODO: registration and discovery workflow compared with signal
+In general, when compared with tools like Signal or Whatsapp, Matrix
+doesn't fare as well in terms of user discovery. I probably have a lot
+of my contacts on Matrix, but I wouldn't know because there's *no way*
+to know. It's kind of creepy when Signal tells you "hey, this person
+is on Signal!" but it's also pretty cool that it works, and they
+actually did it [pretty well][private contact discovery].
+
+Registration is also less obvious: in Signal, the app just needs to
+confirm your phone number and it's generally automated. It's
+friction-less and quick. In Matrix, you need to learn about home
+servers, pick one, register (with a password! aargh!), and then set up
+encryption keys (not enabled by default), etc. It's really a lot more friction.
+
+And look, I understand: giving away your phone number is a *huge*
+tradeoff. I don't like it either. But it solves a real problem and
+makes encryption accessible to a *ton* more people. Matrix *does* have
+"identity servers" that can serve that purpose, but somehow I don't
+feel confident giving away my phone number there. There's a catch-22
+here too: because no one feels like giving away their phone numbers,
+no one does, and everyone assumes that stuff doesn't work
+anyways. Like it or not, Signal *forcing* people to divulge their
+phone number actually gave them the critical mass that means a
+lot of my relatives *are* on Signal and I don't have to install crap
+like Whatsapp to talk with them.
+
+## 5 minute clients evaluation
+
+Throughout all my tests I evaluated a handful of Matrix clients,
+mostly from Flatpak because basically none of them are packaged in
+Debian. I cannot even begin to pretend to have done a proper review,
+but here's my main takeaway: I'm using none of them. I'm still using
+Element, the flagship client from Matrix.org, in a web browser window,
+with the [PopUp Window extension](https://github.com/ettoolong/PopupWindow). This makes it look almost like a
+native app, and opens links in my main browser window, which is
+nice. But yeah, it's a web app, which is kind of meh.
+
+Coming from Irssi, Element is really "GUI-y" (pronounced
+"goo-wee"). Lots of clickety happening. To mark conversations as read,
+in particular, I need to click-click-click on *all* the tabs that have
+some activity. There's no "jump to latest message" or "mark all as
+read" functionality as far as I could tell. In Irssi the former is
+built-in (<kbd>alt-a</kbd>) and I made a custom `/READ` command for
+the latter:
+
+    /ALIAS READ script exec \$_->activity(0) for Irssi::windows
+
+And yes, that's a Perl script in my IRC client. I am not aware of any
+Matrix client that does stuff like that.
+
+As for other clients, I have looked through the [Client Matrix](https://matrix.org/clients-matrix/)
+(confusing, right?) to try to figure out which one to try, and, even
+after selecting `Linux` as a filter, the chart is just too wide to
+figure out anything. So I tried those, kind of randomly:
+
+ * Fractal

(Diff truncated)
try to move forward a little more, thanks jvoisin for edits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index a26450b2..f24c2f35 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -16,13 +16,20 @@ documents why neither of those have happened yet.
 
 ## Data retention defaults
 
-One of my main concerns with Matrix at this point is data
-retention. In IRC, servers don't actually keep messages all that long:
-they pass them along to other servers and clients basically as fast as
-they can, only keep them from memory, and move on to the next
-message. There are no concerns about data retention on messages (and
-their metadata) other than the network layer. (I'm ignoring the issues
-with user registration here, which is a separate, if valid, concern.)
+One of my main concerns with Matrix is data retention.
+
+In IRC, servers don't actually keep messages all that long: they pass
+them along to other servers and clients basically as fast as they can,
+only keep them in memory, and move on to the next message. There are
+no concerns about data retention on messages (and their metadata)
+other than the network layer. (I'm ignoring the issues with user
+registration here, which is a separate, if valid, concern.)
+Obviously, a hostile server *could* store everything it gets, but
+typically IRC federations are tightly controlled and, if you trust
+your IRC server, you should be fairly safe. Of course,
+clients *can* (and often do, even if [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all
+messages, but this is typically not the default. [Irssi](https://irssi.org/), for
+example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
 turn, meant that Matrix would try to connect to that URL to generate a
 link preview.
 
 I felt this was a security issue, especially because they would
-basically keep the socket open *forever*. I tried to warn the Matrix
-security team but somehow, I don't think it was taken very
-seriously. Here's the disclosure timeline:
+basically keep the server socket open seemingly *forever*. I tried to
+warn the Matrix security team but somehow, I don't think it was taken
+very seriously. Here's the disclosure timeline:
 
  * January 18: contacted Matrix security
  * January 19: response: already [reported as a bug](https://github.com/matrix-org/synapse/issues/8302)
@@ -263,7 +270,10 @@ TODO: rate limiting https://github.com/matrix-org/synapse/blob/12d1f82db21360397
 
 TODO: irc vs email vs mastodon federation / forks
 
-TODO: you can use the admin API to impersonate a room admin?
+TODO: you can use the admin API to impersonate a room admin? see also
+the other TODO below
+
+TODO: bridge moderation is hell
 
 # Availability
 
@@ -323,27 +333,36 @@ such an alias (in Element), you need to go in the room settings'
 name (e.g. `foo`), and then that room will be available on your
 `example.com` homeserver as `#foo:example.com`.)
 
-TODO: new users on certain can be added to a space automatically in
-synapse. existing users can be told about the space with a server
-notice. https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1387
-
-TODO: by default the room belongs to the public federation, anyone
-invited (or joins if public) can join from any server. spaces as
-directory. the room doesn't belong to a server. you can create a room
-on serverA, admin from serverB join and belong as admin. the room will
-be replicated on the two server. if serverA falls, the serverB will be
-picked up. a room doesn't have an FQDN, it has a Matrix ID (basically
-a random number). it has a server name, but that's just to avoid
-collision. you can have server-specific aliases. each server needs to
-have admin.
-
-TODO: room namespaces eg #fractal:gnome.org (alias) room id is
-HASH:matrix.org room was created on matrix.org, but admins are on
-gnome.org... room is primarily a gnome room.
-
-TODO: [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) (no GUI for it), fait partie de
-[MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") allows a room admin to close a
-room, with a message and a pointer to another room.
+TODO: by default the room belongs to the public federation,. spaces as
+directory. the 
+
+A room doesn't belong to a server, it belongs to the federation.
+Anyone invited to a room (if private) can join from the room on any
+server. You can create a room on server A and when a user from server
+B joins, the room will be replicated on the two servers. If server A
+fails, server B will keep the room alive. A room doesn't have an FQDN,
+it has a Matrix ID (which is basically a random number). It has a server
+name attached to it, but that was made just to avoid collisions.
+
+TODO: how does admin work again? can server B hijack a room on server
+A? I had noted "admin from serverB join and belong as admin. each
+server needs to have admin."
+
+This can get a little confusing. For example, the `#fractal:gnome.org`
+room is an alias on the `gnome.org` server, but the room ID is
+`HASH:matrix.org`. That's because the room was created on matrix.org,
+but admins are on `gnome.org` now.
+
+New users can be added to a space or room [automatically](https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1378-L1388) in
+Synapse. (Existing users can be told about the space with a server
+notice.) The point here is to have a way to pre-populate a list of
+rooms on the server, even if they are not necessarily present on that
+server directly, in case they are hosted on another server that has
+connected users.
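+
+(In Synapse, this is the `auto_join_rooms` setting; a sketch, with a
+made-up room alias:)
+
+    auto_join_rooms:
+      - "#welcome:example.com"
+    autocreate_auto_join_rooms: true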
+
+Rooms, by default, live forever, even after the last user quits. There
+is a [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17), but there is no GUI for it yet. That is
+part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows a room
+admin to close a room, with a message and a pointer to another room.
 
 ## Home server
 
@@ -385,8 +404,10 @@ explicitly configured for your domain. You can't just put:
 support "virtual hosting" and you'd still be connecting to rooms and
 people with your `matrix.org` identity.
 
-TODO: what's the different between server-server and client-server API
-specs? e.g. why is there also <https://spec.matrix.org/v1.2/client-server-api/#server-discovery>?
+That specification is what allows servers to find each other. Clients,
+on the other hand, use the [client-server discovery API](https://spec.matrix.org/v1.2/client-server-api/#server-discovery): this is
+what allows a given client to find your home server when you type your
+Matrix ID on login.
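+
+(Concretely, the client-side discovery relies on a `.well-known` file;
+a sketch for a hypothetical `example.com` domain:)
+
+    $ curl https://example.com/.well-known/matrix/client
+    {"m.homeserver": {"base_url": "https://matrix.example.com"}}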
 
 # Performance
 
@@ -399,7 +420,9 @@ now scale horizontally to multiple workers (see [this blog post for
 details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability)). And there are other home servers implementations
 ([dendrite](https://github.com/matrix-org/dendrite), golang, [entered beta in late 2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit),
 Rust, beta), but none of those are feature-complete, so they are not a
-solution for any performance issues that might be left with Synapse.
+solution for any performance issues that might be left with
+Synapse. And besides, Synapse is adding a lot of features fast, so
+it's unlikely those other servers will ever catch up.
 
 And Matrix can feel slow sometimes. For example, joining the "Matrix
 HQ" room in Element (from matrix.debian.social) takes a few *minutes*
@@ -458,4 +481,6 @@ TODO: registration and discovery workflow compared with signal
 
 TODO: admin API https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html
 
+TODO: mark all as read.
+
 [[!tag draft]]

fix broken link
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f6bc545f..a26450b2 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -189,7 +189,7 @@ seriously. Here's the disclosure timeline:
  * January 31: I respond that I believe the security issue is
    underestimated, ask for clearance to disclose
  * February 1: response: asking for two weeks delay after the next
-   release (1.53.0) including [another patch][], presumably in two
+   release (1.53.0) including [another patch][PR 11936], presumably in two
    weeks' time
  * February 22: Matrix 1.53.0 released
  * April 14: I notice the release, ask for clearance again

add toc
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f4a37ee9..f6bc545f 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -10,6 +10,8 @@ asking me to bridge it with Matrix. I have myself considered a few
 times the idea of just giving up and converting it. This space
 documents why neither of those have happened yet.
 
+[[!toc levels=2]]
+
 # Security and privacy
 
 ## Data retention defaults

expand on security / privacy issues now that the flaw is public
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index df9c58c3..f4a37ee9 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -10,14 +10,230 @@ asking me to bridge it with Matrix. I have myself considered a few
 times the idea of just giving up and converting it. This space
 documents why neither of those have happened yet.
 
-# Security
-
-TODO
-
-# Privacy
-
-TODO: vector.im privacy policy, GDPR compliance, metadata leaks,
-message expiry, discoverability of leakage.
+# Security and privacy
+
+## Data retention defaults
+
+One of my main concerns with Matrix at this point is data
+retention. In IRC, servers don't actually keep messages all that long:
+they pass them along to other servers and clients basically as fast as
+they can, only keep them in memory, and move on to the next
+message. There are no concerns about data retention on messages (and
+their metadata) other than the network layer. (I'm ignoring the issues
+with user registration here, which is a separate, if valid, concern.)
+
+Compare this to Matrix: when you send a message to a Matrix
+homeserver, that server first stores it in its internal SQL
+database. Then it will transmit that message to all clients connected
+to that server and room, and to all other servers that have clients
+connected to that room. Those remote servers, in turn, will keep a
+copy of that message and all its metadata in their own database, by
+default basically forever.
+
+Indeed, there is a mechanism to expire entries in Synapse, but it is
+[not enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one can safely assume that a message
+sent on Matrix is never expired.
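+
+(For the record, turning it on would look something like this in
+`homeserver.yaml`; the lifetimes here are arbitrary examples:)
+
+    retention:
+      enabled: true
+      default_policy:
+        min_lifetime: 1d
+        max_lifetime: 1y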
+
+## GDPR in the federation
+
+But even if that setting was enabled by default, how do you control
+it? This is a fundamental problem of the federation: if anyone is
+allowed to join a given room (which is basically the default
+configuration of any room), anyone will log (deliberately or
+inadvertently) all content and metadata in that room.
+
+In the context of the GDPR, this is really tricky: who's the
+responsible party (known as the "data controller") here? It's basically
+any yahoo who fires up a home server and joins a room. Good luck
+enforcing the GDPR on those folks. In the brave new "peer-to-peer"
+world that Matrix is heading towards, it's also basically any client
+whatsoever, which brings its own set of problems.
+
+In a federated network, one has to wonder whether GDPR enforcement is
+even possible at all. Assuming you want to enforce your right to be
+forgotten in a given room, you would have to:
+
+ 1. enumerate all the users that ever joined the room while you were
+    there
+ 2. discover all their home servers
+ 3. start a GDPR procedure against all those servers
+
+I recognize this is a hard problem to solve while still keeping an
+open ecosystem. But I believe that Matrix should have much stricter
+defaults towards data retention than right now. Message expiry should
+be enforced *by default*.
+
+In fact, maybe Synapse should be designed so that there's no
+configurable flag to turn off data retention. A bit like how most
+system loggers in UNIX (e.g. syslog) come with a log retention system
+that typically rotates logs after a few weeks or months. Historically,
+this was designed to keep hard drives from filling up, but it also has
+the added benefit of limiting the amount of personal information kept
+on disk in this modern day. (Arguably, syslog doesn't rotate logs on
+its own, but, say, Debian GNU/Linux, as an installed system, does have
+log retention policies well defined for installed packages. And "no
+expiry" is basically a bug.)
+
+## Matrix.org privacy policy
+
+When I first looked at Matrix, a long time ago, Element.io was called
+[Vector.im](https://vector.im/) and had a rather dubious privacy policy. I
+unfortunately cannot find a copy of it now on the internet archive,
+but it openly announced it was collecting (Google!) analytics on its
+users. When I asked Matrix people about this, they explained this was
+for development purposes and they were aiming for velocity at this
+point, not privacy. I am paraphrasing: I am sorry I lost track of that
+conversation that happened so long ago, so you will just have to trust
+me on this.
+
+I think that, like the current retention policies, this set a bad
+precedent. Thankfully, since that policy was drafted, the GDPR
+happened and it seems like both the [Element.io privacy policy](https://element.io/privacy) and
+the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been somewhat improved.
+
+Notable points of the privacy policies:
+
+ * [2.3.1.1](https://matrix.org/legal/privacy-notice#2311-federation): the "federation" section actually outlines that
+   "*Federated homeservers and Matrix clients which respect the Matrix
+   protocol are expected to honour these controls and
+   redaction/erasure requests, but other federated homeservers are
+   outside of the span of control of Element, and we cannot guarantee
+   how this data will be processed*"
+ * [2.6](https://matrix.org/legal/privacy-notice#26-our-commitment-to-childrens-privacy): users under the age of 16 should *not* use the
+   `matrix.org` service
+ * [2.10](https://matrix.org/legal/privacy-notice#210-who-else-has-access-to-my-data): Upcloud, Mythic Beast, Amazon, and CloudFlare possibly
+   have access to your data (it's nice to at least mention this in the
+   privacy policy: many providers don't even bother admitting this)
+ * [Element 2.2.1](https://element.io/privacy): mentions many more third parties (Twilio,
+   Stripe, [Quaderno](https://www.quaderno.io/), LinkedIn, Twitter, Google, [Outplay](https://www.outplayhq.com/),
+   [PipeDrive](https://www.pipedrive.com/), [HubSpot](https://www.hubspot.com/), [Posthog](https://posthog.com/), Sentry, and [Matomo](https://matomo.org/)
+   (phew!)
+
+I'm not super happy with all the trackers they have on the Element
+platform, but then again you don't have to use that client
+whatsoever. Your favorite homeserver (assuming you are not on
+Matrix.org) probably has its own Element deployment, hopefully
+without all that garbage.
+
+Overall, this is all a huge improvement over the previous privacy
+policy, so hats off to the Matrix people for figuring out a reasonable
+policy in such a tricky context. I particularly like this bit:
+
+> We will forget your copy of your data upon your request. We will
+> also forward your request to be forgotten onto federated
+> homeservers. However - these homeservers are outside our span of
+> control, so we cannot guarantee they will forget your data.
+
+It's great they implemented those mechanisms and, after all, if
+there's a hostile party in there, nothing can prevent them from
+using *screenshots* to just exfiltrate your data away from the client
+side anyways, even with services typically seen as more secure
+because they are centralised, like Signal.
+
+As an aside, I also appreciate that Matrix.org has a fairly decent
+[code of conduct](https://matrix.org/legal/code-of-conduct), based on the [TODO CoC](http://todogroup.org/opencodeofconduct/) which checks all the
+[boxes in the geekfeminism wiki](https://geekfeminism.fandom.com/wiki/Code_of_conduct_evaluations).
+
+## Metadata handling
+
+Overall, privacy protections in Matrix mostly concern message
+contents, not metadata. In other words, who's talking with who, when
+and from where is not well protected. Compared to a tool like Signal,
+which goes through great lengths to anonymise that data with features
+like [private contact discovery](https://signal.org/blog/private-contact-discovery/), [disappearing messages](https://signal.org/blog/disappearing-messages/),
+[sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
+behind.
+
+This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (open in 2019) in Synapse, but this is not
+just an implementation issue, it's a flaw in the protocol itself. Home
+servers keep join/leave events of all rooms, which gives clear
+information about who is talking to whom. Synapse logs are also quite
+verbose and may contain personally identifiable information that home
+server admins might not be aware of in the first place. The rotation
+of those logs is also separate from the server-level retention policy,
+which may be confusing.
+
+Combine this with the federation: even if you trust your home server
+to do the right thing, the second you join a public room with
+third-party home servers, those ideas kind of get thrown out because
+those servers can do whatever they want with that information. Again,
+a problem that is hard to solve in a federation.
+
+To be fair, IRC doesn't have a great story here either: any client
+knows not only who's talking to who in a room, but also typically
+their client IP address. Servers *can* (and often do) *obfuscate*
+this, but often that obfuscation is trivial to reverse. Some servers
+do provide "cloaks" (sometimes automatically), but that's kind of a
+"slap-on" solution that actually moves the problem elsewhere: now the
+server knows a little more about the user.
+
+## Amplification attacks on URL previews
+
+I (still!) run an [Icecast](https://en.wikipedia.org/wiki/Icecast) server and sometimes share links to it
+on IRC which, obviously, also end up on (more than one!) Matrix home
+servers because many people use Matrix as an IRC bouncer. This, in
+turn, meant that Matrix would try to connect to that URL to generate a
+link preview.
+
+I felt this was a security issue, especially because they would
+basically keep the socket open *forever*. I tried to warn the Matrix
+security team but somehow, I don't think it was taken very
+seriously. Here's the disclosure timeline:
+
+ * January 18: contacted Matrix security
+ * January 19: response: already [reported as a bug](https://github.com/matrix-org/synapse/issues/8302)
+ * January 20: response: can't reproduce
+ * January 31: [timeout added][PR 11784], considered solved
+ * January 31: I respond that I believe the security issue is
+   underestimated, ask for clearance to disclose
+ * February 1: response: asking for two weeks delay after the next
+   release (1.53.0) including [another patch][], presumably in two
+   weeks' time
+ * February 22: Matrix 1.53.0 released
+ * April 14: I notice the release, ask for clearance again
+ * April 14: response: referred to the [public disclosure](https://github.com/matrix-org/synapse/security/advisories/GHSA-4822-jvwx-w47h)
+
+I think there are a couple of problems here:

(Diff truncated)
cleanup réseau pages a bit
diff --git "a/services/r\303\251seau.mdwn" "b/services/r\303\251seau.mdwn"
index dc5f0728..b20f899b 100644
--- "a/services/r\303\251seau.mdwn"
+++ "b/services/r\303\251seau.mdwn"
@@ -1,8 +1,6 @@
 Le réseau est constitué d'une ensemble d'interconnexions [[!wikipedia
 gigabit]] et d'un réseau [[wifi]], avec un uplink DSL.
 
-Update: ipv6 and dns work better with new router. see good test page: <http://en.conn.internet.nl/connection/>.
-
 Known problems
 ================
 
@@ -11,10 +9,15 @@ Problèmes connus
    check), clicking "reconnect" on the `WAN6` interface in the
    GUI fixes the problem. according to [this bit of code](https://github.com/openwrt/luci/blob/e712a8a4ac896189c333400134e00977912a918a/modules/luci-mod-network/luasrc/controller/admin/network.lua#L169-L176), it would suffice to
    run `env -i /sbin/ifup %s >/dev/null 2>/dev/null` where `%s` is
-   the network interface. to be tested.
- * <del>the bandwidth quality varies with the weather conditions</del> seems to be resolved with VDSL
+   the network interface. to be tested. Update: ipv6 and dns work better with
+   new router. see good test page:
+   <http://en.conn.internet.nl/connection/>.
+ * <del>the bandwidth quality varies with the weather
+   conditions</del> seems to be resolved with VDSL
  * some DNS queries fail, see [[DNS]] for details
  * IPv6 reverse DNS doesn't work, see my [review the TSI](https://www.dslreports.com/forum/r30473265-Review)
+ * it's expensive, and not fast, see [[blog/2020-05-28-isp-upgrade]] for
+   options.
 
 See also
 ==========
@@ -104,3 +107,5 @@ Vitesse
     Total UAS:	2842	2811
 
 So 27 mbps down, 6 mbps up, or 3.4 MB/s down, 800 KB/s up.
+
+See [[services/réseau/crapn]] for the details of this transition.
diff --git "a/services/r\303\251seau/crapn.mdwn" "b/services/r\303\251seau/crapn.mdwn"
index fd98120d..b3b8991b 100644
--- "a/services/r\303\251seau/crapn.mdwn"
+++ "b/services/r\303\251seau/crapn.mdwn"
@@ -1,6 +1,8 @@
 [[!meta title="Remplacement des services de communication au Crap'N"]]
 
-TODO: merge with above? wtf *is* this.
+Note: this is a historical document written for my housemates during
+the replacement of the internet service at home. For an up-to-date
+description of the network, see [[réseau]].
 
 I would like to replace the internet service at CrapN with another internet service that would let me move my server (named "marcos") as well as the various [[services]] I currently manage at home. In summary, the services are:
 

cross-ref with wifi tuning article
diff --git a/hardware/rosa.mdwn b/hardware/rosa.mdwn
index 5fc02f37..fb0276f0 100644
--- a/hardware/rosa.mdwn
+++ b/hardware/rosa.mdwn
@@ -34,6 +34,10 @@ I unfortunately forgot to run the same benchmarks with the stock
 firmware, but that could have been difficult unless it ships with
 `iperf3`...
 
+Note that some optimisations and changes have been made to the
+wireless network since then; see this [[Wi-Fi tuning blog
+post|blog/2022-04-13-wifi-tuning]] for details.
+
 Wired network
 -------------
 

stopped using Smart HTTPS, switched to https-only mode
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index aeed33e3..0c4cbf65 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -71,13 +71,6 @@ I am testing those and they might make it to the top list once I'm happy:
  * [Popup window](https://addons.mozilla.org/en-US/firefox/addon/popup-window/) (no deb, [source](https://github.com/ettoolong/PopupWindow)) - open the link in a
    pop-up, useful to have an "app-like" window for a website (I use
    this for videoconferencing in a second tab)
- * [Smart HTTPS](https://addons.mozilla.org/en-US/firefox/addon/smart-https-revived/) (no deb, [source](https://github.com/ilGur1132/Smart-HTTPS)) - some use [HTTPS
-   everywhere](https://www.eff.org/https-everywhere) but i find that one works too and doesn't require
-   sites to be added to a list. nowadays, https URLs match http URLs
-   quite well: long gone are the days where wikipedia had a special
-   "secure" URL...  HE does have a "Block all unencrypted requests"
-   setting, but it does exactly that: it breaks plaintext sites
-   completely. See [issue #7936](https://github.com/EFForg/https-everywhere/issues/7936) and [issue #16488](https://github.com/EFForg/https-everywhere/issues/16488) for details.
  * [View Page Archive & Cache](https://addons.mozilla.org/en-US/firefox/addon/view-page-archive/) (no deb, [source](https://github.com/dessant/view-page-archive/)) - load page in
    one or many page archives. No "save" button unfortunately, but is
    good enough for my purposes. [The Archiver](https://addons.mozilla.org/en-US/firefox/addon/the-archiver/) (no deb,
@@ -191,6 +184,16 @@ hard to use or simply irrelevant.
    ([#871502](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=871502)). Right now I'm using the <del>standalone binary from
    upstream</del> flatpak but I'm looking at alternatives, see
    [[services/bookmarks]] for that more general problem.
+ * [Smart HTTPS](https://addons.mozilla.org/en-US/firefox/addon/smart-https-revived/) (no deb, [source](https://github.com/ilGur1132/Smart-HTTPS)) - some use [HTTPS
+   everywhere](https://www.eff.org/https-everywhere) but I found that one works too and doesn't require
+   sites to be added to a list. nowadays, https URLs match http URLs
+   quite well: long gone are the days where wikipedia had a special
+   "secure" URL... HE does have a "Block all unencrypted requests"
+   setting, but it does exactly that: it breaks plaintext sites
+   completely. See [issue #7936](https://github.com/EFForg/https-everywhere/issues/7936) and [issue #16488](https://github.com/EFForg/https-everywhere/issues/16488) for
+   details. Nowadays, I just don't need any extension: I enable
+   [HTTPS-only mode](https://blog.mozilla.org/security/2020/11/17/firefox-83-introduces-https-only-mode/) (AKA `dom.security.https_only_mode`). The EFF
+   even [deprecated HTTPS everywhere](https://www.eff.org/https-everywhere/set-https-default-your-browser) because of this.
 
 [it's all text!]: https://addons.mozilla.org/en-US/firefox/addon/its-all-text/
 
@@ -317,6 +320,8 @@ that I version-control into git:
    * `security.webauth.webauthn` - enable [WebAuthN](https://www.w3.org/TR/webauthn/) support, not
      sure what that's for but it sounds promising
  * `browser.urlbar.trimURLs`: false. show protocol regardless of URL
+ * `dom.security.https_only_mode`: `true` - only access HTTPS
+   websites, click-through for bypass.
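+   In `user.js` syntax, that would look something like this (just a
+   sketch; the pref name is real, but this is not necessarily how my
+   setup stores it):
+
+       user_pref("dom.security.https_only_mode", true);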
 
 I also set privacy parameters following this [user.js](https://gitlab.com/anarcat/scripts/blob/master/firefox-tmp#L7) config
 which, incidentally, is injected in temporary profiles started with

approve comment
diff --git a/blog/2022-05-13-brtfs-notes/comment_1_8b6b318dadc7cd863bb3fc50d323c18b._comment b/blog/2022-05-13-brtfs-notes/comment_1_8b6b318dadc7cd863bb3fc50d323c18b._comment
new file mode 100644
index 00000000..089e7471
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_1_8b6b318dadc7cd863bb3fc50d323c18b._comment
@@ -0,0 +1,20 @@
+[[!comment format=mdwn
+ ip="108.162.145.28"
+ claimedauthor="Matthew"
+ url="https://mjdsystems.ca"
+ subject="Some clarifications"
+ date="2022-05-21T05:10:15Z"
+ content="""
+I have some (hopefully) helpful comments to some of your points above.
+
+# Regarding volumes/subvolumes
+I'm not sure what exactly a BTRFS volume is, but I think it would just be the filesystem.  Subvolumes are what allow you to separate the filesystem into separate pools, similar to an LVM volume.  The big difference is BTRFS lets you easily de-duplicate between these volumes (using reflinks through things like cp --reflink and snapshots).  The general advice I've heard (and follow myself) is not to use the root subvolume, but to instead create a subvolume for each purpose (e.g. one for /, one for /home).  This makes it easy to create snapshots and inspect the system in the future.
+
+BTRFS doesn't give an easy answer to the question \"how much space does this subvolume take\" because there isn't an easy answer.  Imagine you have one subvolume, then you take a snapshot.  How much space should you charge against both subvolumes (since the snapshot is just a regular subvolume that just shares data)?  Depending upon your use case, this answer can differ.  BTRFS can help track this information if you enable quotas, but I have not had enough of a need to enable them myself.
+
+# On btrfs filesystem usage
+This view exposes more details of the underlying filesystem, and is more of a debug aid.  The unallocated space is the areas of the filesystem that can be allocated to data or metadata.  Metadata is, AFAIK, the core filesystem data structures, as opposed to data, which is just regular data.  That unallocated space will be used by the filesystem as you store more data.
+
+# Conclusion
+As someone who has used BTRFS in quite some anger for a long time now, I find some of your critiques completely valid.  Some of those issues I find to be features (having subvolumes just take whatever space is useful) but sometimes painful (why can't I know how much space my backups are taking).  And having been down the BTRFS stability rabbit hole, I have had to deal with various issues.  That being said I do enjoy using it where I have it.
+"""]]
diff --git a/blog/zfs-migration/comment_1_6413ee698aca447b5215e8258e630979._comment b/blog/zfs-migration/comment_1_6413ee698aca447b5215e8258e630979._comment
new file mode 100644
index 00000000..f1c24596
--- /dev/null
+++ b/blog/zfs-migration/comment_1_6413ee698aca447b5215e8258e630979._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ ip="50.100.165.103"
+ subject="comment 1"
+ date="2022-05-20T20:58:51Z"
+ content="""
+Cool blog.  It's too bad that the docs you referenced didn't insist on adding drives by-uuid (/dev/disk/...) rather than /dev/sdx -- this is generally important, and especially important for USB-connected disks.
+"""]]

renew my openpgp key
diff --git a/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe b/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe
index 224f3439..b8451159 100644
Binary files a/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe and b/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe differ

minor edits
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 244d1993..2201eeb6 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -42,11 +42,13 @@ This is going to partition `/dev/sdc` with:
         sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
         sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
 
+That will look something like this:
+
         root@curie:/home/anarcat# sgdisk -p /dev/sdc
         Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
         Model: ESD-S1C         
         Sector size (logical/physical): 512/512 bytes
-        Disk identifier (GUID): 932ED8E5-8B5C-4183-9967-56D7652C01DA
+        Disk identifier (GUID): [REDACTED]
         Partition table holds up to 128 entries
         Main partition table begins at sector 2 and ends at sector 33
         First usable sector is 34, last usable sector is 1953525134
@@ -98,7 +100,7 @@ workstation, we're betting that we will not suffer from this problem,
 after hearing a report from another Debian developer running this
 setup on their workstation successfully.
 
-# Creating "pools"
+# Creating pools
 
 ZFS pools are somewhat like "volume groups" if you are familiar with
 LVM, except they obviously also do things like RAID-10. (Even though
@@ -212,7 +214,7 @@ Also, the [FreeBSD handbook quick start](https://docs.freebsd.org/en/books/handb
 about their first example, which is with a single disk. So I am
 reassured at least. All 
 
-# Creating filesystems AKA "datasets"
+# Creating mount points
 
 Next we create the actual filesystems, known as "datasets" which are
 the things that get mounted on mountpoint and hold the actual files.
@@ -878,7 +880,7 @@ this.
 
 # References
 
-### ZFS documentation
+## ZFS documentation
 
  * [Debian wiki page](https://wiki.debian.org/ZFS): good introduction, basic commands, some
    advanced stuff

add toc
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 90ab321e..244d1993 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -10,6 +10,8 @@ because I find it too confusing and unreliable.
 
 So off we go.
 
+[[!toc levels=3]]
+
 # Installation
 
 Since this is a conversion (and not a new install), our procedure is

fix heading
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 1fed42b3..90ab321e 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -614,7 +614,7 @@ then you mount the root filesystem and all the others:
     mount -t tmpfs tmpfs /mnt/run &&
     mkdir /mnt/run/lock
 
-# Remaining work
+# Remaining issues
 
 TODO: swap. how do we do it?
 

document lockups
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index c6d12eb0..1fed42b3 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -618,8 +618,6 @@ then you mount the root filesystem and all the others:
 
 TODO: swap. how do we do it?
 
-TODO: talk about the lockups during migration
-
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 
 TODO: ship my own .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
@@ -708,6 +706,174 @@ reporting the right timestamps in the end, although it does feel like
 *starting* all the processes (even if not doing any work yet) could
 skew the results.
 
+## Hangs during procedure
+
+During the procedure, it happened a few times where any ZFS command
+would completely hang. It seems that using an external USB drive to
+sync stuff didn't work so well: sometimes it would reconnect under a
+different device (from `sdc` to `sdd`, for example), and this would
+greatly confuse ZFS.
+
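+A quick way to spot commands stuck like this is to list processes in
+uninterruptible sleep (`D` state); this is a generic sketch, not
+necessarily what I ran at the time:
+
+    ps -eo pid,stat,comm | awk '$2 ~ /D/'
+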
+Here, for example, is `sdd` reappearing out of the blue:
+
+    May 19 11:22:53 curie kernel: [  699.820301] scsi host4: uas
+    May 19 11:22:53 curie kernel: [  699.820544] usb 2-1: authorized to connect
+    May 19 11:22:53 curie kernel: [  699.922433] scsi 4:0:0:0: Direct-Access     ROG      ESD-S1C          0    PQ: 0 ANSI: 6
+    May 19 11:22:53 curie kernel: [  699.923235] sd 4:0:0:0: Attached scsi generic sg2 type 0
+    May 19 11:22:53 curie kernel: [  699.923676] sd 4:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
+    May 19 11:22:53 curie kernel: [  699.923788] sd 4:0:0:0: [sdd] Write Protect is off
+    May 19 11:22:53 curie kernel: [  699.923949] sd 4:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+    May 19 11:22:53 curie kernel: [  699.924149] sd 4:0:0:0: [sdd] Optimal transfer size 33553920 bytes
+    May 19 11:22:53 curie kernel: [  699.961602]  sdd: sdd1 sdd2 sdd3 sdd4
+    May 19 11:22:53 curie kernel: [  699.996083] sd 4:0:0:0: [sdd] Attached SCSI disk
+
+The next time I run a ZFS command (say `zpool list`), the command
+completely hangs (`D` state) and this comes up in the logs:
+
+    May 19 11:34:21 curie kernel: [ 1387.914843] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=71344128 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914859] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=205565952 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914874] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272789504 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914906] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=270336 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.914932] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073225728 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.914948] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073487872 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.915165] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272793600 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.915183] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=339853312 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.915648] WARNING: Pool 'bpool' has encountered an uncorrectable I/O failure and has been suspended.
+    May 19 11:34:21 curie kernel: [ 1387.915648] 
+    May 19 11:37:25 curie kernel: [ 1571.558614] task:txg_sync        state:D stack:    0 pid:  997 ppid:     2 flags:0x00004000
+    May 19 11:37:25 curie kernel: [ 1571.558623] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.558640]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.558650]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.558670]  schedule_timeout+0x8b/0x140
+    May 19 11:37:25 curie kernel: [ 1571.558675]  ? __next_timer_interrupt+0x110/0x110
+    May 19 11:37:25 curie kernel: [ 1571.558678]  io_schedule_timeout+0x4c/0x80
+    May 19 11:37:25 curie kernel: [ 1571.558689]  __cv_timedwait_common+0x12b/0x160 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.558694]  ? add_wait_queue_exclusive+0x70/0x70
+    May 19 11:37:25 curie kernel: [ 1571.558702]  __cv_timedwait_io+0x15/0x20 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.558816]  zio_wait+0x129/0x2b0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.558929]  dsl_pool_sync+0x461/0x4f0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559032]  spa_sync+0x575/0xfa0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559138]  ? spa_txg_history_init_io+0x101/0x110 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559245]  txg_sync_thread+0x2e0/0x4a0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559354]  ? txg_fini+0x240/0x240 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559366]  thread_generic_wrapper+0x6f/0x80 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.559376]  ? __thread_exit+0x20/0x20 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.559379]  kthread+0x11b/0x140
+    May 19 11:37:25 curie kernel: [ 1571.559382]  ? __kthread_bind_mask+0x60/0x60
+    May 19 11:37:25 curie kernel: [ 1571.559386]  ret_from_fork+0x22/0x30
+    May 19 11:37:25 curie kernel: [ 1571.559401] task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
+    May 19 11:37:25 curie kernel: [ 1571.559404] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.559409]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.559412]  ? __kmalloc_node+0x141/0x2b0
+    May 19 11:37:25 curie kernel: [ 1571.559417]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559420]  schedule_preempt_disabled+0xa/0x10
+    May 19 11:37:25 curie kernel: [ 1571.559424]  __mutex_lock.constprop.0+0x133/0x460
+    May 19 11:37:25 curie kernel: [ 1571.559435]  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
+    May 19 11:37:25 curie kernel: [ 1571.559537]  spa_all_configs+0x41/0x120 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559644]  zfs_ioc_pool_configs+0x17/0x70 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559752]  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559758]  ? _copy_from_user+0x28/0x60
+    May 19 11:37:25 curie kernel: [ 1571.559860]  zfsdev_ioctl+0x53/0xe0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559866]  __x64_sys_ioctl+0x83/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559869]  do_syscall_64+0x33/0x80
+    May 19 11:37:25 curie kernel: [ 1571.559873]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    May 19 11:37:25 curie kernel: [ 1571.559876] RIP: 0033:0x7fcf0ef32cc7
+    May 19 11:37:25 curie kernel: [ 1571.559878] RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    May 19 11:37:25 curie kernel: [ 1571.559881] RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
+    May 19 11:37:25 curie kernel: [ 1571.559883] RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
+    May 19 11:37:25 curie kernel: [ 1571.559885] RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
+    May 19 11:37:25 curie kernel: [ 1571.559886] R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
+    May 19 11:37:25 curie kernel: [ 1571.559888] R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
+    May 19 11:37:25 curie kernel: [ 1571.559980] task:zpool           state:D stack:    0 pid:11815 ppid:  3816 flags:0x00004000
+    May 19 11:37:25 curie kernel: [ 1571.559983] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.559988]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.559992]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559995]  io_schedule+0x42/0x70
+    May 19 11:37:25 curie kernel: [ 1571.560004]  cv_wait_common+0xac/0x130 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.560008]  ? add_wait_queue_exclusive+0x70/0x70
+    May 19 11:37:25 curie kernel: [ 1571.560118]  txg_wait_synced_impl+0xc9/0x110 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560223]  txg_wait_synced+0xc/0x40 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560325]  spa_export_common+0x4cd/0x590 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560430]  ? zfs_log_history+0x9c/0xf0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560537]  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560543]  ? _copy_from_user+0x28/0x60
+    May 19 11:37:25 curie kernel: [ 1571.560644]  zfsdev_ioctl+0x53/0xe0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560649]  __x64_sys_ioctl+0x83/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.560653]  do_syscall_64+0x33/0x80
+    May 19 11:37:25 curie kernel: [ 1571.560656]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    May 19 11:37:25 curie kernel: [ 1571.560659] RIP: 0033:0x7fdc23be2cc7
+    May 19 11:37:25 curie kernel: [ 1571.560661] RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    May 19 11:37:25 curie kernel: [ 1571.560664] RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
+    May 19 11:37:25 curie kernel: [ 1571.560666] RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
+    May 19 11:37:25 curie kernel: [ 1571.560667] RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
+    May 19 11:37:25 curie kernel: [ 1571.560669] R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
+    May 19 11:37:25 curie kernel: [ 1571.560671] R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
+
+Here's another example, where you see the USB controller bleeping out
+and back into existence:
+
+    mai 19 11:38:39 curie kernel: usb 2-1: USB disconnect, device number 2
+    mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronizing SCSI cache
+    mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
+    mai 19 11:39:25 curie kernel: INFO: task zed:1564 blocked for more than 241 seconds.
+    mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
+    mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+    mai 19 11:39:25 curie kernel: task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
+    mai 19 11:39:25 curie kernel: Call Trace:
+    mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
+    mai 19 11:39:25 curie kernel:  ? __kmalloc_node+0x141/0x2b0
+    mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
+    mai 19 11:39:25 curie kernel:  schedule_preempt_disabled+0xa/0x10
+    mai 19 11:39:25 curie kernel:  __mutex_lock.constprop.0+0x133/0x460
+    mai 19 11:39:25 curie kernel:  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
+    mai 19 11:39:25 curie kernel:  spa_all_configs+0x41/0x120 [zfs]
+    mai 19 11:39:25 curie kernel:  zfs_ioc_pool_configs+0x17/0x70 [zfs]
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
+    mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
+    mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
+    mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    mai 19 11:39:25 curie kernel: RIP: 0033:0x7fcf0ef32cc7
+    mai 19 11:39:25 curie kernel: RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
+    mai 19 11:39:25 curie kernel: RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
+    mai 19 11:39:25 curie kernel: RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
+    mai 19 11:39:25 curie kernel: R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
+    mai 19 11:39:25 curie kernel: R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
+    mai 19 11:39:25 curie kernel: INFO: task zpool:11815 blocked for more than 241 seconds.
+    mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
+    mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+    mai 19 11:39:25 curie kernel: task:zpool           state:D stack:    0 pid:11815 ppid:  2621 flags:0x00004004
+    mai 19 11:39:25 curie kernel: Call Trace:
+    mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
+    mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
+    mai 19 11:39:25 curie kernel:  io_schedule+0x42/0x70
+    mai 19 11:39:25 curie kernel:  cv_wait_common+0xac/0x130 [spl]
+    mai 19 11:39:25 curie kernel:  ? add_wait_queue_exclusive+0x70/0x70
+    mai 19 11:39:25 curie kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
+    mai 19 11:39:25 curie kernel:  txg_wait_synced+0xc/0x40 [zfs]
+    mai 19 11:39:25 curie kernel:  spa_export_common+0x4cd/0x590 [zfs]
+    mai 19 11:39:25 curie kernel:  ? zfs_log_history+0x9c/0xf0 [zfs]
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
+    mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
+    mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
+    mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    mai 19 11:39:25 curie kernel: RIP: 0033:0x7fdc23be2cc7
+    mai 19 11:39:25 curie kernel: RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
+    mai 19 11:39:25 curie kernel: RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
+    mai 19 11:39:25 curie kernel: RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
+    mai 19 11:39:25 curie kernel: R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
+    mai 19 11:39:25 curie kernel: R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
+
+I understand those are rather extreme conditions: I would fully expect
+the pool to stop working if the underlying drives disappear. What
+doesn't seem acceptable is that a command would completely hang like
+this.
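+
+In theory, once the drive comes back, a suspended pool can be resumed
+with something like the following, although I have not verified that
+it actually recovers in this situation:
+
+    zpool clear bpool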
+
 # References
 
 ### ZFS documentation

move fio discussion to appendix
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 4861eac8..c6d12eb0 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -592,81 +592,6 @@ Another test was performed while in "rescue" mode but was ultimately
 lost. It's actually still in the old M.2 drive, but I cannot mount
 that device with the external USB controller I have right now.
 
-## Side note about fio job files
-
-I would love to have just a single `.fio` job file that lists multiple
-jobs to run *serially*. For example, this file describes the above
-workload pretty well:
-
-[[!format txt """
-[global]
-# cargo-culting Salter
-fallocate=none
-ioengine=posixaio
-runtime=60
-time_based=1
-end_fsync=1
-stonewall=1
-group_reporting=1
-# no need to drop caches, done by default
-# invalidate=1
-
-# Single 4KiB random read/write process
-[randread-4k-4g-1x]
-stonewall=1
-rw=randread
-bs=4k
-size=4g
-numjobs=1
-iodepth=1
-
-[randwrite-4k-4g-1x]
-stonewall=1
-rw=randwrite
-bs=4k
-size=4g
-numjobs=1
-iodepth=1
-
-# 16 parallel 64KiB random read/write processes:
-[randread-64k-256m-16x]
-stonewall=1
-rw=randread
-bs=64k
-size=256m
-numjobs=16
-iodepth=16
-
-[randwrite-64k-256m-16x]
-stonewall=1
-rw=randwrite
-bs=64k
-size=256m
-numjobs=16
-iodepth=16
-
-# Single 1MiB random read/write process
-[randread-1m-16g-1x]
-stonewall=1
-rw=randread
-bs=1m
-size=16g
-numjobs=1
-iodepth=1
-
-[randwrite-1m-16g-1x]
-stonewall=1
-rw=randwrite
-bs=1m
-size=16g
-numjobs=1
-iodepth=1
-"""]]
-
-... except the jobs are actually run in parallel, even though they are
-`stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
-to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
-
 # Recovery procedures
 
 For test purposes, I unmounted all systems during the procedure:
@@ -705,11 +630,84 @@ TODO: send/recv, automated snapshots
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+## fio improvements
+
 I really want to improve my experience with `fio`. Right now, I'm just
 cargo-culting stuff from other folks and I don't really like
 it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
 that it doesn't really work that well for disk tests.
 
+I would love to have just a single `.fio` job file that lists multiple
+jobs to run *serially*. For example, this file describes the above
+workload pretty well:
+
+    [global]
+    # cargo-culting Salter
+    fallocate=none
+    ioengine=posixaio
+    runtime=60
+    time_based=1
+    end_fsync=1
+    stonewall=1
+    group_reporting=1
+    # no need to drop caches, done by default
+    # invalidate=1
+
+    # Single 4KiB random read/write process
+    [randread-4k-4g-1x]
+    rw=randread
+    bs=4k
+    size=4g
+    numjobs=1
+    iodepth=1
+
+    [randwrite-4k-4g-1x]
+    rw=randwrite
+    bs=4k
+    size=4g
+    numjobs=1
+    iodepth=1
+
+    # 16 parallel 64KiB random read/write processes:
+    [randread-64k-256m-16x]
+    rw=randread
+    bs=64k
+    size=256m
+    numjobs=16
+    iodepth=16
+
+    [randwrite-64k-256m-16x]
+    rw=randwrite
+    bs=64k
+    size=256m
+    numjobs=16
+    iodepth=16
+
+    # Single 1MiB random read/write process
+    [randread-1m-16g-1x]
+    rw=randread
+    bs=1m
+    size=16g
+    numjobs=1
+    iodepth=1
+
+    [randwrite-1m-16g-1x]
+    rw=randwrite
+    bs=1m
+    size=16g
+    numjobs=1
+    iodepth=1
+
+... except the jobs are actually started in parallel, even though they
+are `stonewall`'d, as far as I can tell by the reports. I [sent a
+mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u) to the [fio mailing list](https://lore.kernel.org/fio/) for clarification. 
+
+It looks like the jobs are *started* in parallel, but actually
+(correctly) run serially. It seems like this might just be a matter of
+reporting the right timestamps in the end, although it does feel like
+*starting* all the processes (even if not doing any work yet) could
+skew the results.
+
 # References
 
 ### ZFS documentation

migration completed
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 56403158..4861eac8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -260,6 +260,17 @@ the things that get mounted on mountpoint and hold the actual files.
 
    ... and no, just creating `/mnt/var/lib` doesn't fix that problem.
 
+   Also note that you will *probably* need to change the storage
+   driver in Docker, see the [zfs-driver documentation](https://docs.docker.com/storage/storagedriver/zfs-driver/) for details,
+   but basically, I did:
+   
+       echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
+
+   Note, as an aside, that podman has the same problem (and similar
+   solution):
+   
+       printf '[storage]\ndriver = "zfs"\n' > /etc/containers/storage.conf
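+
+   To confirm the driver actually changed after restarting the daemon,
+   something like this should do (a hypothetical check, not part of
+   the original procedure):
+
+       docker info --format '{{.Driver}}'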
+
  * make a `tmpfs` for `/run`:
 
         mkdir /mnt/run &&
@@ -376,14 +387,15 @@ seems to be at around 5Gbps:
 So it shouldn't cap at that speed. It's possible the USB adapter is
 failing to give me the full speed though.
 
-TODO: we are here
-
-TODO: ddrescue LVM setup to *other* NVMe drive, to allow for similar
-benchmarks later
-
-TODO: benchmark before, in single-user mode?
+At this point, we're about ready to do the final configuration. We
+drop to single user mode and do the rest of the procedure. That used
+to be `shutdown now`, but it seems like the systemd switch broke that,
+so now you can reboot into grub and pick the "recovery"
+option. Alternatively, you might try `systemctl rescue` (untested).
 
-TODO: rsync in single user mode, then continue below
+I also wanted to copy the drive over to another new NVMe drive, but
+that failed: it looks like the USB controller I have doesn't work with
+older, non-NVMe drives.
 
 # Boot configuration
 
@@ -422,7 +434,8 @@ Enable the service:
 
     systemctl enable zfs-import-bpool.service
 
-TODO: fstab? swap?
+I had to trim down `/etc/fstab` and `/etc/crypttab` to only contain
+references to the legacy filesystems (`/srv` is still BTRFS!).
 
 Rebuild boot loader with support for ZFS, but also to workaround
 GRUB's missing zpool-features support:
@@ -474,22 +487,19 @@ Exit chroot:
 
 # Finalizing
 
-TODO: move Docker to the right place:
+One last sync was done in rescue mode:
 
-    rm /var/lib/docker/
-    mv /home/docker/* /var/lib/docker/
-    rmdir /home/docker
-
-TODO: last sync in single user mode
+    for fs in /boot/ /boot/efi/ / /home/; do
+        echo "syncing $fs to /mnt$fs..." && 
+        rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
+    done
 
-Unmount filesystems:
+Then we unmount all filesystems:
  
     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
     zpool export -a
 
-TODO: reboot
-TODO: swap drives
-TODO: new benchmark
+Reboot, swap the drives, and boot into ZFS. Hurray!
 
 # Benchmarks
 
@@ -578,6 +588,10 @@ what it's worth. Those results are curiously inconsistent with the
 non-idle test: many tests perform more *poorly* than when the
 workstation was busy, which is troublesome.
 
+Another test was performed while in "rescue" mode but was ultimately
+lost. It's actually still in the old M.2 drive, but I cannot mount
+that device with the external USB controller I have right now.
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple
@@ -653,8 +667,34 @@ iodepth=1
 `stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
 to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
 
+# Recovery procedures
+
+For test purposes, I unmounted all systems during the procedure:
+
+    umount /mnt/boot/efi /mnt/boot/run
+    umount -a -t zfs
+    zpool export -a
+
+And disconnected the drive, to see how I would recover this system
+from another Linux system in case of a total motherboard failure.
+
+To import an existing pool, plug in the device, then import the pool
+with an alternate root so it doesn't mount over your existing
+filesystems; then you mount the root filesystem and all the others:
+
+    zpool import -l -a -R /mnt &&
+    zfs mount rpool/ROOT/debian &&
+    zfs mount -a &&
+    mount /dev/sdc2 /mnt/boot/efi &&
+    mount -t tmpfs tmpfs /mnt/run &&
+    mkdir /mnt/run/lock
+
 # Remaining work
 
+TODO: swap. how do we do it?
+
+TODO: talk about the lockups during migration
+
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 
 TODO: ship my own .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
@@ -662,6 +702,9 @@ here.
 
 TODO: send/recv, automated snapshots
 
+TODO: merge this documentation with the [[hardware/tubman]]
+documentation. maybe create a separate zfs primer?
+
 I really want to improve my experience with `fio`. Right now, I'm just
 cargo-culting stuff from other folks and I don't really like
 it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
diff --git a/hardware/tubman.md b/hardware/tubman.md
index 4b0fc1d6..368cd904 100644
--- a/hardware/tubman.md
+++ b/hardware/tubman.md
@@ -444,6 +444,28 @@ IO statistics, every second:
 
     zpool iostat 1
 
+### Mounting
+
+After a `zfs list`, you should see the datasets you can mount. You can
+mount one by name, for example with:
+
+    zfs mount bpool/ROOT/debian
+
+Note that it will mount the dataset at the location defined by its
+`mountpoint` property. If you want to mount it elsewhere, this is the
+magic formula:
+
+    mount -o zfsutil -t zfs bpool/BOOT/debian /mnt
+
+If the dataset is encrypted, however, you first need to unlock it
+with:
+
+    zpool import -l -a -R /mnt
+
+Note that the above is preferred: it will set the entire imported pool
+to mount under `/mnt` instead of the toplevel. That way you don't need
+the earlier hack to mount it elsewhere.
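+
+To check where a dataset will land before mounting it, you can query
+its `mountpoint` property first (a quick sanity check, not part of the
+original notes):
+
+    zfs get -H -o value mountpoint bpool/ROOT/debian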
+
 ### Snapshots
 
 Creating:

another benchmarks set done
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 8ccfb228..56403158 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -146,9 +146,8 @@ This is a more typical pool creation.
         zpool create \
             -o ashift=12 \
             -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
-            -O acltype=posixacl -O xattr=sa \
+            -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
             -O compression=zstd \
-            -O dnodesize=auto \
             -O relatime=on \
             -O canmount=off \
             -O mountpoint=/ -R /mnt \
@@ -558,6 +557,27 @@ assume my work affected the benchmarks greatly.
    * read: 165MiB/s (173MB/s), sync: 172MiB/s (180MB/s)
    * write: 74.7MiB/s (78.3MB/s), sync: 38.5MiB/s (40.4MB/s)
 
+### Somewhat idle test, mdadm/luks/lvm/ext4
+
+This test was done while I was away from my workstation. Everything
+was still running, so a bunch of stuff was probably waking up and
+disturbing the test, but it should be more reliable than the above.
+
+ * 4k blocks, 4GB, 1 process:
+   * read: 16.8MiB/s (17.7MB/s), sync: 18.9MiB/s (19.8MB/s)
+   * write: 73.8MiB/s (77.3MB/s), sync: 847KiB/s (867kB/s)
+ * 64k blocks, 256MB, 16 process:
+   * read: 526MiB/s (552MB/s), sync: 520MiB/s (546MB/s)
+   * write: 98.3MiB/s (103MB/s), sync: 29.6MiB/s (30.0MB/s)
+ * 1m blocks, 16G 1 process:
+   * read: 148MiB/s (155MB/s), sync: 162MiB/s (170MB/s)
+   * write: 109MiB/s (114MB/s), sync: 48.6MiB/s (50.0MB/s)
+
+It looks like the 64k test is the one that can max out the SSD, for
+what it's worth. Those results are curiously inconsistent with the
+non-idle test: many tests perform more *poorly* than when the
+workstation was busy, which is troublesome.
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple

and power consumptions improvements!
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 5baade96..e024a55d 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -88,7 +88,11 @@ Cons:
    than my current (2021-09-27) laptop (Purism 13v4, currently says
    7h). power problems confirmed by [this report from Linux After
    Dark][linux-after-dark-framework] which also mentions that the USB adapters take power *even
-   when not in use* and quite a bit (400mW in some cases!)
+   when not in use* and quite a bit (400mW in some cases!). update:
+   apparently the [second generation laptop][] has better battery
+   life, mainly thanks to the "big-little" design of the 12th gen
+   Intel chips, but also to improvements in [standby consumption](https://news.ycombinator.com/item?id=31433666)
+   and [firmware updates for various chipsets](https://news.ycombinator.com/item?id=31434021)
 
 [linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
 
@@ -97,7 +101,10 @@ Cons:
    After Dark][linux-after-dark-framework]), so unlikely to have one in the future</del>
    Update: it seems like they cracked that nut and will ship an
    [ethernet expansion card](https://frame.work/ca/en/products/ethernet-expansion-card) in their [second generation
-   laptop](https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646), which is impressive
+   laptop][], which is impressive. Downside: the [chipset is
+   realtek](https://news.ycombinator.com/item?id=31434483), so probably firmware blobby.
+
+[second generation laptop]: https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646
 
  * a bit pricey for the performance, especially when compared to the
    competition (e.g. Dell XPS, Apple M1), but be worth waiting for

update: framework will ship an ethernet port, whoohoo!
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index f2020631..5baade96 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -92,9 +92,12 @@ Cons:
 
 [linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
 
- * no RJ-45 port, and attempts at designing ones are failing because
-   the modular plugs are too thin to fit (according to [Linux After
-   Dark][linux-after-dark-framework]), so unlikely to have one in the future
+ * <del>no RJ-45 port, and attempts at designing ones are failing
+   because the modular plugs are too thin to fit (according to [Linux
+   After Dark][linux-after-dark-framework]), so unlikely to have one in the future</del>
+   Update: it seems like they cracked that nut and will ship an
+   [ethernet expansion card](https://frame.work/ca/en/products/ethernet-expansion-card) in their [second generation
+   laptop](https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646), which is impressive
 
  * a bit pricey for the performance, especially when compared to the
    competition (e.g. Dell XPS, Apple M1), but be worth waiting for

more todos, typos
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 80c44b0d..8ccfb228 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -379,6 +379,9 @@ failing to give me the full speed though.
 
 TODO: we are here
 
+TODO: ddrescue LVM setup to *other* NVMe drive, to allow for similar
+benchmarks later
+
 TODO: benchmark before, in single-user mode?
 
 TODO: rsync in single user mode, then continue below
@@ -420,6 +423,8 @@ Enable the service:
 
     systemctl enable zfs-import-bpool.service
 
+TODO: fstab? swap?
+
 Rebuild boot loader with support for ZFS, but also to workaround
 GRUB's missing zpool-features support:
 
@@ -483,7 +488,9 @@ Unmount filesystems:
     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
     zpool export -a
 
-reboot to the new system.
+TODO: reboot
+TODO: swap drives
+TODO: new benchmark
 
 # Benchmarks
 
@@ -651,7 +658,7 @@ that it doesn't really work that well for disk tests.
  * [FreeBSD handbook](https://docs.freebsd.org/en/books/handbook/zfs/): FreeBSD-specific of course, but
    excellent as always
  * [OpenZFS FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html)
- * [OpenZFS: Debian buyllseye root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.htm): excellent documentation, basis
+ * [OpenZFS: Debian bullseye root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html): excellent documentation, basis
    for the above procedure
  * [another ZFS on linux documentation](https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/)
 

fix blob format
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2f476f9a..80c44b0d 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -557,7 +557,7 @@ I would love to have just a single `.fio` job file that lists multiple
 jobs to run *serially*. For example, this file describes the above
 workload pretty well:
 
-[[!format """
+[[!format txt """
 [global]
 # cargo-culting Salter
 fallocate=none

move benchmark script to git
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index aeebebd3..2f476f9a 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -508,43 +508,10 @@ article](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-
 already pretty strange. But also it doesn't include stuff like
 dropping caches or repeating results.
 
-So here's my variation. 
-
-[[!format sh """
-#!/bin/sh
-
-set -e
-
-common_flags="--group_reporting --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
-
-while read type bs size jobs extra ; do
-    name="${type}${bs}${size}${jobs}x$extra"
-    echo "dropping caches..." >&2
-    sync
-    echo 3 > /proc/sys/vm/drop_caches
-    echo "running job $name..." >&2
-    fio $common_flags --name="$name" \
-        --rw="$type" \
-        --bs="$bs" \
-        --size="$size" \
-        --numjobs="$jobs" \
-        --iodepth="$jobs" \
-        $extra
-done <<EOF
-randread  4k 4g 1
-randwrite 4k 4g 1
-randread  64k 256m 16
-randwrite 64k 256m 16
-randread  1m 16g 1
-randwrite 1m 16g 1
-randread  4k 4g 1 --fsync=1
-randwrite 4k 4g 1 --fsync=1
-randread  64k 256m 16 --fsync=1
-randwrite 64k 256m 16 --fsync=1
-randread  1m 16g 1 --fsync=1
-randwrite 1m 16g 1 --fsync=1
-EOF
-"""]]
+So here's my variation, which I called [fio-ars-bench.sh](https://gitlab.com/anarcat/scripts/-/blob/main/fio-ars-bench.sh) for
+now. It just batches a bunch of `fio` tests, one by one, 60 seconds
+each. It should take about 12 minutes to run, as there are 3 pairs of
+read/write tests, with and without async.
 
 And before I show the results, it should be noted there is a huge
 caveat here. The test is done between:

some benchmarks
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 0249f8d4..aeebebd3 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -379,11 +379,9 @@ failing to give me the full speed though.
 
 TODO: we are here
 
-TODO: resync
+TODO: benchmark before, in single-user mode?
 
-TODO: benchmark before
-
-TODO: resync in single user mode, then 
+TODO: rsync in single user mode, then continue below
 
 # Boot configuration
 
@@ -517,7 +515,7 @@ So here's my variation.
 
 set -e
 
-common_flags="--group_reporting --minimal --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
+common_flags="--group_reporting --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
 
 while read type bs size jobs extra ; do
     name="${type}${bs}${size}${jobs}x$extra"
@@ -533,18 +531,18 @@ while read type bs size jobs extra ; do
         --iodepth="$jobs" \
         $extra
 done <<EOF
-randwrite 4k 4g 1
 randread  4k 4g 1
-randwrite 64k 256m 16
+randwrite 4k 4g 1
 randread  64k 256m 16
-randwrite 1m 16g 1
+randwrite 64k 256m 16
 randread  1m 16g 1
-randwrite 4k 4g 1 --fsync=1
+randwrite 1m 16g 1
 randread  4k 4g 1 --fsync=1
-randwrite 64k 256m 16 --fsync=1
+randwrite 4k 4g 1 --fsync=1
 randread  64k 256m 16 --fsync=1
-randwrite 1m 16g 1 --fsync=1
+randwrite 64k 256m 16 --fsync=1
 randread  1m 16g 1 --fsync=1
+randwrite 1m 16g 1 --fsync=1
 EOF
 """]]
 
@@ -568,6 +566,24 @@ not on reads. It's also possible it outperforms it on both, because
 it's a newer drive. A new test might be possible with a new external
 USB drive as well, although I doubt I will find the time to do this.
 
+## Results
+
+### Non-idle test, mdadm/luks/lvm/ext4
+
+Those tests were done with the above script, in `/home`, while working
+on other things on my workstation, which generally felt sluggish, so I
+assume my work affected the benchmarks greatly.
+
+ * 4k blocks, 4GB, 1 process:
+   * read: 21.5MiB/s (22.5MB/s), sync: 20.8MiB/s (21.9MB/s)
+   * write: 139MiB/s (146MB/s), sync: 1118KiB/s (1145kB/s)
+ * 64k blocks, 256MB, 16 process:
+   * read: 513MiB/s (537MB/s), sync: 512MiB/s (537MB/s)
+   * write: 160MiB/s (167MB/s), sync: 41.5MiB/s (43.5MB/s)
+ * 1m blocks, 16G 1 process:
+   * read: 165MiB/s (173MB/s), sync: 172MiB/s (180MB/s)
+   * write: 74.7MiB/s (78.3MB/s), sync: 38.5MiB/s (40.4MB/s)
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple
@@ -640,8 +656,8 @@ iodepth=1
 """]]
 
 ... except the jobs are actually run in parallel, even though they are
-`stonewall`'d, as far as I can tell by the reports. I sent a mail to
-the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
+`stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
+to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
 
 # Remaining work
 
@@ -652,6 +668,11 @@ here.
 
 TODO: send/recv, automated snapshots
 
+I really want to improve my experience with `fio`. Right now, I'm just
+cargo-culting stuff from other folks and I don't really like
+it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
+that it doesn't really work that well for disk tests.
+
 # References
 
 ### ZFS documentation

expand on benchmarks
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2fda422d..0249f8d4 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -21,6 +21,10 @@ So, install the required packages, on the current system:
 
     apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
 
+We also tell DKMS that we need to rebuild the initrd when upgrading:
+
+    echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
+
 # Partitioning
 
 This is going to partition `/dev/sdc` with:
@@ -336,6 +340,30 @@ idea. At this point, the procedure was restarted all the way back to
 which, surprisingly, doesn't require any confirmation (`zpool destroy
 rpool`).
 
+The second run was cleaner:
+
+    root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
+            echo "syncing $fs to /mnt$fs..." && 
+            rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
+        done
+    syncing /boot/ to /mnt/boot/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
+    syncing /boot/efi/ to /mnt/boot/efi/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/110)  
+    syncing / to /mnt/...
+     28,019,033,070  97%   42.03MB/s    0:10:35 (xfr#703671, ir-chk=1093/833515)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
+    could not make way for new symlink: var/lib/docker
+     34,081,807,102  98%   44.84MB/s    0:12:04 (xfr#736580, to-chk=0/867723)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+    syncing /home/ to /mnt/home/...
+    rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
+    IO error encountered -- skipping file deletion
+     24,043,086,450  96%   62.03MB/s    0:06:09 (xfr#151819, ir-chk=15117/172571)
+    file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/4C1FDBFEA976FF924D062FB990B24B897A77B84B"
+    315,423,626,507  96%   67.09MB/s    1:14:43 (xfr#2256845, to-chk=0/2994364)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+
+
 Also note the transfer speed: we seem capped at 76MB/s, or
 608Mbit/s. This is not as fast as I was expecting: the USB connection
 seems to be at around 5Gbps:
@@ -349,14 +377,8 @@ seems to be at around 5Gbps:
 So it shouldn't cap at that speed. It's possible the USB adapter is
 failing to give me the full speed though.
 
-TODO: make a new paste
-
 TODO: we are here
 
-TODO:
-
-    echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
-
 TODO: resync
 
 TODO: benchmark before
@@ -478,7 +500,7 @@ This is a test that was ran in single-user mode using fio and the
 
         fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
 
- * Single 1MiB random write process
+ * Single 1MiB random write process:
 
         fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
 
@@ -488,7 +510,7 @@ article](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-
 already pretty strange. But also it doesn't include stuff like
 dropping caches or repeating results.
 
-So here's my variation
+So here's my variation. 
 
 [[!format sh """
 #!/bin/sh
@@ -497,14 +519,12 @@ set -e
 
 common_flags="--group_reporting --minimal --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
 
-# + --directory=/test?
-
 while read type bs size jobs extra ; do
     name="${type}${bs}${size}${jobs}x$extra"
-    echo "dropping caches..."
+    echo "dropping caches..." >&2
     sync
     echo 3 > /proc/sys/vm/drop_caches
-    echo "running job $name..."
+    echo "running job $name..." >&2
     fio $common_flags --name="$name" \
         --rw="$type" \
         --bs="$bs" \
@@ -528,6 +548,101 @@ randread  1m 16g 1 --fsync=1
 EOF
 """]]
 
+And before I show the results, it should be noted there is a huge
+caveat here. The test is done between:
+
+ * a WDC WDS500G1B0B-00AS40 SSD, a [WD blue M.2 2280 SSD](https://www.westerndigital.com/products/internal-drives/wd-blue-sata-m-2-ssd#WDS500G2B0B) (running
+   mdadm/LUKS/LVM/ext4), that is at least 5 years old, spec'd at
+   560MB/s read, 530MB/s write
+
+ * a brand new [WD blue SN550](https://www.westerndigital.com/products/internal-drives/wd-blue-sn550-nvme-ssd#WDS500G2B0C) drive, which claims to be able to
+   push 2400MB/s read and 1750MB/s write
+
+In practice, I'm going to assume we'll never reach those numbers
+because we're not actually going through NVMe (the drive sits behind a
+USB controller), so the bottleneck isn't the disk itself.
+
+My bias, before building, running and analysing those results, is that
+ZFS should outperform the traditional stack on writes, but possibly
+not on reads. It's also possible it outperforms it on both, because
+it's a newer drive. A new test might be possible with a new external
+USB drive as well, although I doubt I will find the time to do this.
+
+## Side note about fio job files
+
+I would love to have just a single `.fio` job file that lists multiple
+jobs to run *serially*. For example, this file describes the above
+workload pretty well:
+
+[[!format """
+[global]
+# cargo-culting Salter
+fallocate=none
+ioengine=posixaio
+runtime=60
+time_based=1
+end_fsync=1
+stonewall=1
+group_reporting=1
+# no need to drop caches, done by default
+# invalidate=1
+
+# Single 4KiB random read/write process
+[randread-4k-4g-1x]
+stonewall=1
+rw=randread
+bs=4k
+size=4g
+numjobs=1
+iodepth=1
+
+[randwrite-4k-4g-1x]
+stonewall=1
+rw=randwrite
+bs=4k
+size=4g
+numjobs=1
+iodepth=1
+
+# 16 parallel 64KiB random read/write processes:
+[randread-64k-256m-16x]
+stonewall=1
+rw=randread
+bs=64k
+size=256m
+numjobs=16
+iodepth=16
+
+[randwrite-64k-256m-16x]
+stonewall=1
+rw=randwrite
+bs=64k
+size=256m
+numjobs=16
+iodepth=16
+
+# Single 1MiB random read/write process
+[randread-1m-16g-1x]
+stonewall=1
+rw=randread
+bs=1m
+size=16g
+numjobs=1
+iodepth=1
+
+[randwrite-1m-16g-1x]
+stonewall=1
+rw=randwrite
+bs=1m
+size=16g
+numjobs=1
+iodepth=1
+"""]]
+
+... except the jobs are actually run in parallel, even though they are
+`stonewall`'d, as far as I can tell by the reports. I sent a mail to
+the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
+
 # Remaining work
 
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!

update zfs migration
had to rebuild teh pools because utf8only is crap (and i don't only
have utf8)
i screwed up smartctl and sgdisks commands (scary) so i don't actually
know the block size.
actually use zstd encryption
start the first sync
design a benchmark procedure
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index d76dc3d7..2fda422d 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -36,27 +36,26 @@ This is going to partition `/dev/sdc` with:
         sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
         sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
 
-It looks like this:
-
-    root@curie:~# sgdisk -p /dev/sdb
-    Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
-    Model: WDC WD10JPLX-00M
-    Sector size (logical/physical): 512/4096 bytes
-    Disk identifier (GUID): D8806C02-B5A6-4705-ACA9-F5A92F98C2D1
-    Partition table holds up to 128 entries
-    Main partition table begins at sector 2 and ends at sector 33
-    First usable sector is 34, last usable sector is 1953525134
-    Partitions will be aligned on 2048-sector boundaries
-    Total free space is 3437 sectors (1.7 MiB)
-
-    Number  Start (sector)    End (sector)  Size       Code  Name
-       1            2048          411647   200.0 MiB   EF00  EFI System Partition
-       2          411648         2508799   1024.0 MiB  8300  
-       3         2508800        18958335   7.8 GiB     8300  
-       4        18958336      1953523711   922.5 GiB   8300
-
-This, by the way, says the device has 4KB sector size. `smartctl`
-agrees as well:
+        root@curie:/home/anarcat# sgdisk -p /dev/sdc
+        Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
+        Model: ESD-S1C         
+        Sector size (logical/physical): 512/512 bytes
+        Disk identifier (GUID): 932ED8E5-8B5C-4183-9967-56D7652C01DA
+        Partition table holds up to 128 entries
+        Main partition table begins at sector 2 and ends at sector 33
+        First usable sector is 34, last usable sector is 1953525134
+        Partitions will be aligned on 16-sector boundaries
+        Total free space is 14 sectors (7.0 KiB)
+
+        Number  Start (sector)    End (sector)  Size       Code  Name
+           1              48            2047   1000.0 KiB  EF02  
+           2            2048         1050623   512.0 MiB   EF00  
+           3         1050624         3147775   1024.0 MiB  BF01  
+           4         3147776      1953525134   930.0 GiB   BF00
+
+Unfortunately, we can't be sure of the sector size here, because the
+USB controller is probably lying to us about it. Normally, this
+`smartctl` command should tell us the sector size as well:
 
     root@curie:~# smartctl -i /dev/sdb -qnoserial
     smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
@@ -77,8 +76,13 @@ agrees as well:
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled
 
+The above is the output for the builtin HDD drive. But the SSD device
+enclosed in that USB controller [doesn't support SMART commands](https://www.smartmontools.org/ticket/1054),
+so we can't trust that it really has 512-byte sectors.
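+
+For USB enclosures, forcing the SAT pass-through sometimes gets SMART
+data out anyway; this is just a generic thing to try, and it may well
+not work with this particular controller:
+
+    smartctl -d sat -i /dev/sdc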
+
 This matters because we need to tweak the `ashift` value
-correctly. 4KB means `ashift=12`.
+correctly. We're going to go ahead and assume the SSD drive has the
+common 4KB sector size, which means `ashift=12`.
 
 Note here that we are *not* creating a separate partition for
 swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and
@@ -137,11 +141,10 @@ This is a more typical pool creation.
 
         zpool create \
             -o ashift=12 \
-            -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
+            -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
             -O acltype=posixacl -O xattr=sa \
-            -O compression=lz4 \
+            -O compression=zstd \
             -O dnodesize=auto \
-            -O normalization=formD \
             -O relatime=on \
             -O canmount=off \
             -O mountpoint=/ -R /mnt \
@@ -160,8 +163,6 @@ Breaking this down:
  * `-O compression=zstd`: enable [zstd](https://en.wikipedia.org/wiki/Zstd) compression, can be
    disabled/enabled by dataset to with `zfs set compression=off
    rpool/example`
- * `-O normalization=formD`: normalize file names on comparisons (not
-   storage), implies `utf8only=on`
  * `-O relatime=on`: classic `atime` optimisation, another that could
    be used on a busy server is `atime=off`
  * `-O canmount=off`: do not make the pool mount automatically with
@@ -171,7 +172,14 @@ Breaking this down:
 
 Those settings are all available in [zfsprops(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zfsprops.8.en.html). Other flags are
 defined in [zpool-create(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zpool-create.8.en.html). The reasoning behind them is also
-explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian wiki](https://wiki.debian.org/ZFS#Advanced_Topics).
+explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian
+wiki](https://wiki.debian.org/ZFS#Advanced_Topics). This flag was actually *not* used:
+
+ * `-O normalization=formD`: normalize file names on comparisons (not
+   storage), implies `utf8only=on`, which is a [bad idea](https://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames) (and
+   effectively meant my first sync failed to copy some files,
+   including [this folder from a supysonic checkout](https://github.com/spl0k/supysonic/tree/270fa9883b2f2bc98f1482a68f7d9022017af50b/tests/assets/%E6)). and this
+   cannot be changed after the filesystem is created. bad, bad, bad.
 
 ## Side note about single-disk pools
 
@@ -277,16 +285,71 @@ like this:
     rpool/var/lib/docker  899G  256K  899G   1% /mnt/var/lib/docker
     /dev/sdc2             511M  4.0K  511M   1% /mnt/boot/efi
 
+Now that we have everything set up and mounted, let's copy all files
+over.
 
+# Copying files
 
-# Copy files over
+This loops over the list of mounted filesystems we want to copy:
 
     for fs in /boot/ /boot/efi/ / /home/; do
         echo "syncing $fs to /mnt$fs..." && 
         rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
     done
 
-TODO: paste what it looked like
+You can check that the list is correct with:
+
+    mount -l -t ext4,btrfs,vfat | awk '{print $3}'
+
+Note that we skip `/srv` as it's on a different disk.
+
+On the first run, we had:
+
+    root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
+            echo "syncing $fs to /mnt$fs..." && 
+            rsync -aSHAXx --info=progress2 $fs /mnt$fs
+        done
+    syncing /boot/ to /mnt/boot/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
+    syncing /boot/efi/ to /mnt/boot/efi/...
+         16,831,437 100%  184.14MB/s    0:00:00 (xfr#101, to-chk=0/110)
+    syncing / to /mnt/...
+     28,019,293,280  94%   47.63MB/s    0:09:21 (xfr#703710, ir-chk=6748/839220)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
+    could not make way for new symlink: var/lib/docker
+     34,081,267,990  98%   50.71MB/s    0:10:40 (xfr#736577, to-chk=0/867732)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+    syncing /home/ to /mnt/home/...
+    rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
+     24,456,268,098  98%   68.03MB/s    0:05:42 (xfr#159867, ir-chk=6875/172377) 
+    file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/B3AB0CDA9C4454B3C1197E5A22669DF8EE849D90"
+    199,762,528,125  93%   74.82MB/s    0:42:26 (xfr#1437846, ir-chk=1018/1983979)rsync: [generator] recv_generator: mkdir "/mnt/home/anarcat/dist/supysonic/tests/assets/\#346" failed: Invalid or incomplete multibyte or wide character (84)
+    *** Skipping any contents from this failed directory ***
+    315,384,723,978  96%   76.82MB/s    1:05:15 (xfr#2256473, to-chk=0/2993950)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+
+Note the failure to transfer that supysonic file? It turns out they
+had a [weird filename in their source tree](https://github.com/spl0k/supysonic/pull/183), since then removed,
+but it still showed how the `utf8only` feature might not be such a bad
+idea. At this point, the procedure was restarted all the way back to
+"Creating pools", after unmounting all ZFS filesystems (`umount
+/mnt/run /mnt/boot/efi && umount -t zfs -a`) and destroying the pool,
+which, surprisingly, doesn't require any confirmation (`zpool destroy
+rpool`).
+
+Also note the transfer speed: we seem to be capped at 76MB/s, or
+608Mbit/s. This is not as fast as I was expecting: the USB connection
+seems to be at around 5Gbps:
+
+    anarcat@curie:~$ lsusb -tv | head -4
+    /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
+        ID 1d6b:0003 Linux Foundation 3.0 root hub
+        |__ Port 1: Dev 4, If 0, Class=Mass Storage, Driver=uas, 5000M
+            ID 0b05:1932 ASUSTek Computer, Inc.
+
+So it shouldn't cap at that speed. It's possible the USB adapter is
+failing to give me the full speed though.
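+
+One way to tell whether the bottleneck is the USB bridge or something
+higher up the stack would be a raw, uncached sequential read straight
+off the device, bypassing ZFS entirely, something like (read-only, so
+harmless):
+
+    # read 4GB straight off the device, bypassing the page cache
+    dd if=/dev/sdc of=/dev/null bs=1M count=4096 iflag=direct status=progress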
+
+TODO: make a new paste
 
 TODO: we are here
 
@@ -402,6 +465,69 @@ Unmount filesystems:
 
 reboot to the new system.
 
+# Benchmarks
+
+This is a test that was run in single-user mode using fio and the
+[Ars Technica recommended tests](https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/), which are:
+
+ * Single 4KiB random write process:
+
+        fio --name=randwrite4k1x --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
+
+ * 16 parallel 64KiB random write processes:
+
+        fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
+
+ * Single 1MiB random write process
+
+        fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

(Diff truncated)
start working on migrating my workstation to ZFS
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
new file mode 100644
index 00000000..d76dc3d7
--- /dev/null
+++ b/blog/zfs-migration.md
@@ -0,0 +1,429 @@
+In my [[hardware/tubman]] setup, I started using ZFS on an old server
+I had lying around. The machine is really old though (2011!) and it
+"feels" pretty slow. I want to see how much of that is ZFS and how
+much is the machine. Synthetic benchmarks [show that ZFS may be slower
+than mdadm in RAID-10 or RAID-6 configuration](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-disks-two-filesystems-one-winner/), so I want to
+confirm that on a live workload: my workstation. Plus, I want easy,
+regular, high performance backups (with send/receive snapshots) and
+there's no way I'm going to use [[BTRFS|2022-05-13-brtfs-notes]]
+because I find it too confusing and unreliable.
+
+So off we go.
+
+# Installation
+
+Since this is a conversion (and not a new install), our procedure is
+slightly different than the [official documentation](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html) but otherwise
+it's pretty much in the same spirit: we're going to use ZFS for
+everything, including the root filesystem.
+
+So, install the required packages, on the current system:
+
+    apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
+
+# Partitioning
+
+This is going to partition `/dev/sdc` with:
+
+ * 1MB MBR / BIOS legacy boot
+ * 512MB EFI boot
+ * 1GB bpool, unencrypted pool for /boot
+ * rest of the disk for `rpool`, holding the rest of the data
+
+        sgdisk --zap-all /dev/sdc
+        sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/sdc
+        sgdisk     -n2:1M:+512M   -t2:EF00 /dev/sdc
+        sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
+        sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
+
+It looks like this:
+
+    root@curie:~# sgdisk -p /dev/sdb
+    Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
+    Model: WDC WD10JPLX-00M
+    Sector size (logical/physical): 512/4096 bytes
+    Disk identifier (GUID): D8806C02-B5A6-4705-ACA9-F5A92F98C2D1
+    Partition table holds up to 128 entries
+    Main partition table begins at sector 2 and ends at sector 33
+    First usable sector is 34, last usable sector is 1953525134
+    Partitions will be aligned on 2048-sector boundaries
+    Total free space is 3437 sectors (1.7 MiB)
+
+    Number  Start (sector)    End (sector)  Size       Code  Name
+       1            2048          411647   200.0 MiB   EF00  EFI System Partition
+       2          411648         2508799   1024.0 MiB  8300  
+       3         2508800        18958335   7.8 GiB     8300  
+       4        18958336      1953523711   922.5 GiB   8300
+
+This, by the way, says the device has 4KB sector size. `smartctl`
+agrees as well:
+
+    root@curie:~# smartctl -i /dev/sdb -qnoserial
+    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
+    Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
+
+    === START OF INFORMATION SECTION ===
+    Model Family:     Western Digital Black Mobile
+    Device Model:     WDC WD10JPLX-00MBPT0
+    Firmware Version: 01.01H01
+    User Capacity:    1 000 204 886 016 bytes [1,00 TB]
+    Sector Sizes:     512 bytes logical, 4096 bytes physical
+    Rotation Rate:    7200 rpm
+    Form Factor:      2.5 inches
+    Device is:        In smartctl database [for details use: -P show]
+    ATA Version is:   ATA8-ACS T13/1699-D revision 6
+    SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
+    Local Time is:    Tue May 17 13:33:04 2022 EDT
+    SMART support is: Available - device has SMART capability.
+    SMART support is: Enabled
+
+This matters because we need to tweak the `ashift` value
+correctly. 4KB means `ashift=12`.
+
+Note here that we are *not* creating a separate partition for
+swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and
+that issue is [still not fixed upstream](https://github.com/openzfs/zfs/issues/7734). [Ubuntu recommends using
+a separate partition for swap instead](https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847628). But since this is "just" a
+workstation, we're betting that we will not suffer from this problem,
+after hearing a report from another Debian developer running this
+setup on their workstation successfully.
+
+# Creating "pools"
+
+ZFS pools are somewhat like "volume groups" if you are familiar with
+LVM, except they obviously also do things like RAID-10. (Even though
+LVM can technically [also do RAID](https://manpages.debian.org/bullseye/lvm2/lvmraid.7.en.html), people typically use [mdadm](https://manpages.debian.org/bullseye/mdadm/mdadm.8.en.html)
+instead.) 
+
+In any case, the guide suggests creating two different pools here:
+one, in cleartext, for boot, and a separate, encrypted one, for the
+rest. Technically, the boot partition is required because the Grub
+bootloader only supports readonly ZFS pools, from what I
+understand. But I'm a little out of my depth here and just following
+the guide.
+
+## Boot pool creation
+
+This creates the boot pool in readonly mode with features that grub
+supports:
+
+        zpool create \
+            -o cachefile=/etc/zfs/zpool.cache \
+            -o ashift=12 -d \
+            -o feature@async_destroy=enabled \
+            -o feature@bookmarks=enabled \
+            -o feature@embedded_data=enabled \
+            -o feature@empty_bpobj=enabled \
+            -o feature@enabled_txg=enabled \
+            -o feature@extensible_dataset=enabled \
+            -o feature@filesystem_limits=enabled \
+            -o feature@hole_birth=enabled \
+            -o feature@large_blocks=enabled \
+            -o feature@lz4_compress=enabled \
+            -o feature@spacemap_histogram=enabled \
+            -o feature@zpool_checkpoint=enabled \
+            -O acltype=posixacl -O canmount=off \
+            -O compression=lz4 \
+            -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
+            -O mountpoint=/boot -R /mnt \
+            bpool /dev/sdc3
+
+I haven't investigated all those settings and just trust the upstream
+guide on the above.
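+
+If only to double-check what the pool ended up with, the feature
+flags can be listed after the fact, something like:
+
+    zpool get all bpool | grep feature@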
+
+## Main pool creation
+
+This is a more typical pool creation.
+
+        zpool create \
+            -o ashift=12 \
+            -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
+            -O acltype=posixacl -O xattr=sa \
+            -O compression=lz4 \
+            -O dnodesize=auto \
+            -O normalization=formD \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/ -R /mnt \
+            rpool /dev/sdc4
+
+Breaking this down:
+
+ * `-o ashift=12`: mentioned above, 4k sector size
+ * `-O encryption=on -O keylocation=prompt -O keyformat=passphrase`:
+   encryption, prompt for a password, default algorithm is
+   `aes-256-gcm`, explicit in the guide, made implicit here
+ * `-O acltype=posixacl -O xattr=sa`: enable ACLs, with better
+   performance (not enabled by default)
+ * `-O dnodesize=auto`: related to extended attributes, less
+   compatibility with other implementations
+ * `-O compression=zstd`: enable [zstd](https://en.wikipedia.org/wiki/Zstd) compression, can be
+   disabled/enabled per dataset with `zfs set compression=off
+   rpool/example`
+ * `-O normalization=formD`: normalize file names on comparisons (not
+   storage), implies `utf8only=on`
+ * `-O relatime=on`: classic `atime` optimisation, another that could
+   be used on a busy server is `atime=off`
+ * `-O canmount=off`: do not make the pool mount automatically with
+   `mount -a`?
+ * `-O mountpoint=/ -R /mnt`: mount pool on `/` in the future, but
+   `/mnt` for now
+
+Those settings are all available in [zfsprops(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zfsprops.8.en.html). Other flags are
+defined in [zpool-create(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zpool-create.8.en.html). The reasoning behind them is also
+explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian wiki](https://wiki.debian.org/ZFS#Advanced_Topics).
+
+## Side note about single-disk pools
+
+Also note that we're living dangerously here: single-disk ZFS pools
+are [rumoured to be more dangerous](https://www.truenas.com/community/threads/single-drive-zfs.35515/) than not running ZFS at
+all. The choice quote from this article is:
+
+> [...] any error can be detected, but cannot be corrected. This
+> sounds like an acceptable compromise, but its actually not. The
+> reason its not is that ZFS' metadata cannot be allowed to be
+> corrupted. If it is it is likely the zpool will be impossible to
+> mount (and will probably crash the system once the corruption is
+> found). So a couple of bad sectors in the right place will mean that
+> all data on the zpool will be lost. Not some, all. Also there's no
+> ZFS recovery tools, so you cannot recover any data on the drives.
+
+Compared with (say) ext4, where a single disk error can be recovered,
+this is pretty bad. But we are ready to live with this with the idea
+that we'll have hourly offline snapshots that we can easily recover
+from. It's trade-off. Also, we're running this on a NVMe/M.2 drive

(Diff truncated)
responses
diff --git a/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment b/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment
new file mode 100644
index 00000000..44b3f443
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment
@@ -0,0 +1,40 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject=""""""
+ date="2022-05-16T14:45:50Z"
+ content="""
+> of course it is circumstantial, but facebook runs a few thousand servers with btrfs (AFAIR)
+
+i wouldn't call this "circumstantial", that's certainly a strong data point. But Facebook has a whole other scale and, if they're anything like other large SV shops, they have their own Linux kernel fork that might have improvements we don't.
+
+>  and personally I am running it since many years without a single failure.
+
+that, however, I would call "anecdotal" and something I hear a lot.... often followed with things like:
+
+> with some quirks, again, I admit - namely that healing is not automatic
+
+... which, for me, is the entire problem here. "it kind of works for me, except sometimes not" is really not what I expect from a filesystem.
+
+> Concerning subvolumes - you don't get separate disk usage simply because it is part of the main volume (volid=5), and within there just a subdirectory. So getting disk usage from there would mean reading the whole tree (du-like -- btw, I recommend gdu which is many many times faster than du!).
+>
+> The root subvolume on fedora is not the same as volid=5 but just another subvolume. I also have root volumes for Debian/Arch/Fedora on my system (sharing /usr/local and /home volumes). That they called it root is indeed confusing.
+
+[Hacker News](https://news.ycombinator.com/item?id=31383007) helpfully reminded me that:
+
+> the author intentionally gave up early on understanding and simply
+> rants about everything that does not look or work as usual
+
+I think it's framed as criticism of my work, but I take it as a compliment. I reread the two paragraphs a few times, and they still don't make much sense to me. It just raises more questions:
+
+ 1. can we have more than one main volumes?
+ 2. why was it setup with subvolumes instead of volumes?
+ 3. why isn't everything volumes?
+
+I know I sound like a newbie meeting a complex topic and giving up. But here's the thing: I've encountered (and worked in production with) at least half a dozen filesystems in my lifetime (ext2/ext3/ext4, XFS, UFS, FAT16/FAT32, NTFS, HFS, ExFAT, ZFS), and for most of those, I could use them without having to go very deep into the internals. 
+
+But BTRFS gets obscure *quick*. Even going through official documentation (e.g. [BTRFS Design](https://btrfs.wiki.kernel.org/index.php/Btrfs_design)), you *start* with C structs. And somewhere down there there's this confusing diagram about the internal mechanics of the btree and how you build subvolumes and snapshots on top of that.
+
+If you want to hack on BTRFS, that's great. You can get up to speed pretty quickly. But I'm not looking at BTRFS from an enthusiast's or kernel developer's perspective. I'm looking at it from an "OMG what is this" perspective, with very little time to deal with it. With every other filesystem architecture I've used so far, I was able to become somewhat operational in a day or two. After spending multiple days banging my head on this problem, I felt I had to write this down, because everything seems so obtuse that I can't wrap my head around it.
+
+Anyways, thanks for the constructive feedback, it certainly clarifies things a little, but really doesn't make me want to adopt BTRFS in any significant way.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment b/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment
new file mode 100644
index 00000000..1e2bc245
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""comment 3"""
+ date="2022-05-16T14:58:23Z"
+ content="""
+> Seems worth to consider on your side, too, makes it a bit easier to deal with this (and with btrfs volumes you can have multiple dists booting) 
+
+I'm certainly considering some sort of RAID or snapshotting for my workstation now. Problem is it's a NUC so it really can't fit more disks.
+
+Considering my ... unfruitful experience with BTRFS, I probably will stay the heck away from it though, but thanks for the advice.
+
+> Working, up to date backups are a must have. 
+
+That's the understatement of the day. :p
+
+Thankfully, as I said, this machine is mostly throw-away. But because our installers are still kind of crap, it takes a while to recover it, so I am thinking RAID or offline snapshots could be useful to speed up recovery...
+"""]]

approve comment
diff --git a/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment b/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment
new file mode 100644
index 00000000..0e56bb11
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="180.94.215.40"
+ claimedauthor="Norbert"
+ subject="Some comments"
+ date="2022-05-14T01:21:55Z"
+ content="""
+Concerning stability: of course it is circumstantial, but facebook runs a few thousand servers with btrfs (AFAIR), and personally I am running it since many years without a single failure. Admittedly, raid5/6 is broken, don't touch it. raid1 works also rock solid afais (with some quirks, again, I admit - namely that healing is not automatic).
+
+Concerning subvolumes - you don't get separate disk usage simply because it is part of the main volume (volid=5), and within there just a subdirectory. So getting disk usage from there would mean reading the whole tree (du-like -- btw, I recommend `gdu` which is many many times faster than du!).
+
+The `root` subvolume on fedora is not the same as `volid=5` but just another subvolume. I also have root volumes for Debian/Arch/Fedora on my system (sharing `/usr/local` and `/home` volumes). That they called it `root` is indeed confusing.
+
+One thing that I like a lot about btrfs is `btrfs send/receive`, it is a nice way to do incremental backups.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment b/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment
new file mode 100644
index 00000000..484f8d6b
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="88.196.217.58"
+ claimedauthor="Arti"
+ subject="Dying SSD-s"
+ date="2022-05-16T07:18:45Z"
+ content="""
+I have also experienced a few SATA and NVMe drives just disappearing on reboot or even during normal usage. In my experience SSDs just stop working without any warning. Working, up to date backups are a must have.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment b/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment
new file mode 100644
index 00000000..14e8abfe
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="180.94.215.40"
+ claimedauthor="Norbert"
+ subject="BTRFS raid?"
+ date="2022-05-14T01:12:25Z"
+ content="""
+I have seen similar things happening with some ssds, too. Since I run btrfs-raid (over 7 disks or so) it happens now and then, and usually is fixed by unplugging, plugging in a new disk, and rebalancing. Seems worth to consider on your side, too, makes it a bit easier to deal with this (and with btrfs volumes you can have multiple dists booting)
+"""]]

and another failure
diff --git a/blog/2022-05-13-nvme-disk-failure.md b/blog/2022-05-13-nvme-disk-failure.md
new file mode 100644
index 00000000..d4e4b69f
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure.md
@@ -0,0 +1,53 @@
+[[!meta title="NVMe/SSD disk failure"]]
+
+Yesterday, my workstation ([[curie|hardware/curie]]) was hung when I
+came in the office. After a "[skinny elephant](https://en.wikipedia.org/wiki/Raising_Skinny_Elephants_Is_Boring)", the box rebooted,
+but it couldn't find the primary disk (in the BIOS). Instead, it
+booted on the secondary HDD drive, still running an old Fedora 27
+install which somehow survived to this day, possibly because [[BTRFS
+is incomprehensible|blog/2022-05-13-btrfs-notes]].
+
+Somehow, I blindly accepted the Fedora prompt asking me to upgrade to
+Fedora 28, not realizing that:
+
+ 1. Fedora is now at release 36, not 28
+ 2. major upgrades take about an hour...
+ 3. ... and happen at boot time, blocking the entire machine (I'll
+    remember this next time I laugh at Windows and Mac OS users stuck
+    on updates on boot)
+ 4. you can't skip more than one major upgrade
+
+Which means that upgrading to the latest release would take over 4
+hours. Thankfully, it's mostly automated and seems to work pretty well
+(which is [not exactly the case for Debian](https://wiki.debian.org/AutomatedUpgrade)). It still seems like a
+lot of wasted time -- it would probably be better to just reinstall
+the machine at this point -- and not what I had planned to do that
+morning at all.
+
+In any case, after waiting all that time, the machine booted (in
+Fedora) again, and now it *could* detect the SSD disk. The BIOS could
+find the disk too, so after I reinstalled grub (from Fedora) and fixed
+the boot order, it rebooted, but secureboot failed, so I turned that
+off (!?), and I was back in Debian.
+
+I did an emergency backup with `ddrescue`, *from the running system*
+which probably doesn't really work as a backup (because the filesystem
+is likely to be corrupt) but it was fast enough (20 minutes) and gave
+me some peace of mind. My offsite backups have been down for a while
+and since I treat my workstations as "cattle" (not "pets"), I don't
+have a solid recovery scenario for those situations other than "just
+reinstall and run Puppet", which takes a while.
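+
+For the record, a `ddrescue` run of that kind looks roughly like this
+(hypothetical destination paths, not the exact command used here);
+the "map" file is what allows resuming or retrying bad areas later:
+
+    # copy the whole NVMe device to an image, keeping a map of bad areas
+    ddrescue /dev/nvme0n1 /srv/backup/nvme.img /srv/backup/nvme.map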
+
+Now I'm wondering what the next step is: probably replace the disk
+anyways (the new one is bigger: 1TB instead of 500GB), or keep the new
+one as a hot backup somehow. Too bad I don't have a snapshotting
+filesystem on there... (Technically, I have LVM, but LVM snapshots are
+heavy and slow, and can't atomically cover the entire machine.)
+
+It's kind of scary how this thing failed: totally dropped off the bus,
+just not in the BIOS at all. I prefer the way spinning rust fails:
+clickety sounds, tons of warnings beforehand, partial recovery
+possible. With this new flashy junk, you just lose everything all at
+once. Not fun.
+
+[[!tag debian-planet debian hardware fail]]
diff --git a/hardware/curie.mdwn b/hardware/curie.mdwn
index 7ade7c51..1873ff7f 100644
--- a/hardware/curie.mdwn
+++ b/hardware/curie.mdwn
@@ -129,6 +129,10 @@ the upgrade eventually go through, but it finally did.
 The [release notes](https://downloadmirror.intel.com/29102/eng/SY_0072_ReleaseNotes.pdf) detail the updates since the previous one (v61)
 which includes a bunch of security updates, for example.
 
+## SSD disk failure
+
+See [[blog/2022-05-13-nvme-disk-failure]].
+
 ## Replacement options
 
 The CMOS battery died some time in 2021, and I'm having a hard time

publish btrfs notes
diff --git a/blog/btrfs-notes.md b/blog/2022-05-13-brtfs-notes.md
similarity index 57%
rename from blog/btrfs-notes.md
rename to blog/2022-05-13-brtfs-notes.md
index 2ec14a15..71b484a5 100644
--- a/blog/btrfs-notes.md
+++ b/blog/2022-05-13-brtfs-notes.md
@@ -2,21 +2,21 @@
 
 I'm not a fan of [BTRFS](https://btrfs.wiki.kernel.org/). This page serves as a reminder of why,
 but also a cheat sheet to figure out basic tasks in a BTRFS
-environment because those are *not* obvious when coming from any other
-filesystem environment.
+environment because those are *not* obvious to me, even after
+repeatedly having to deal with them.
 
-Trigger warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
+Content warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
 
 [[!toc]]
 
 # Stability concerns
 
-I'm a little worried about its [stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been
-historically quite flaky. RAID-5 and RAID-6 are still marked
-[unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for example, and it's kind of a lucky guess whether
-your current kernel will behave properly with your planned
-workload. For example, [with Linux 4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status) were marked as "mostly OK"
-with a note that says:
+I'm worried about [BTRFS stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been historically
+... changing. RAID-5 and RAID-6 are still marked [unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for
+example. It's kind of a lucky guess whether your current kernel will
+behave properly with your planned workload. For example, [in Linux
+4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status), RAID-1 and RAID-10 were marked as "mostly OK" with a note that
+says:
 
 > Needs to be able to create two copies always. Can get stuck in
 > irreversible read-only mode if only one copy can be made.
 Even as of now, RAID-1 and RAID-10 have this note:
 > improved so the reads will spread over the mirrors evenly or based
 > on device congestion.
 
+Granted, that's not a stability concern anymore, just performance. A
+reviewer of a draft of this article actually claimed that BTRFS only
+reads from one of the drives, which hopefully is inaccurate, but goes
+to show how confusing all this is.
+
 There are [other warnings](https://wiki.debian.org/Btrfs#Other_Warnings) in the Debian wiki that are quite
-worrisome. Even if those are fixed, it can be hard to tell *when* they
-were fixed.
+scary. Even the legendary Arch wiki [has a warning on top of their
+BTRFS page, still](https://wiki.archlinux.org/title/btrfs).
+
+Even if those issues are now fixed, it can be hard to tell *when* they
+were fixed. There is a [changelog by feature](https://btrfs.wiki.kernel.org/index.php/Changelog#By_feature) but it explicitly
+warns that it doesn't know "which kernel version it is considered
+mature enough for production use", so it's also useless for this.
 
 It would have been much better if BTRFS was released into the world
-only when those bugs were being completely fixed. Even now, we get
-mixed messages even in the official BTRFS documentation which says
-"The Btrfs code base is stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same
-time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
-RAID56).
-
-There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I will
-stop here, but let's just say that I feel a little uncomfortable
+only when those bugs were being completely fixed. Or that, at least,
+features were announced when they were stable, not just "we merged to
+mainline, good luck". Even now, we get mixed messages even in the
+official BTRFS documentation which says "The Btrfs code base is
+stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same time clearly stating
+[unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently RAID56).
+
+There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I
+will stop here, but let's just say that I feel a little uncomfortable
 trusting server data with full RAID arrays to BTRFS. But surely, for a
-workstation, things should just work smoothly... Right? Let's see the
-snags I hit.
+workstation, things should just work smoothly... Right? Well, let's
+see the snags I hit.
 
 # My BTRFS test setup
 
-Before I go any further, it will probably help to clarify how I am
-testing BTRFS in the first place.
+Before I go any further, I should probably clarify how I am testing
+BTRFS in the first place.
 
 The reason I tried BTRFS is that I was ... let's just say "strongly
 encouraged" by the [LWN](https://lwn.net) editors to install [Fedora](https://getfedora.org/) for the
@@ -69,34 +80,41 @@ table looks like this:
     └─sda4                   8:4    0 922,5G  0 part  
       └─fedora_crypt       253:4    0 922,5G  0 crypt /
 
+(This might not entirely be accurate: I rebuilt this from the Debian
+side of things.)
+
 This is pretty straightforward, except for the swap partition:
 normally, I just treat swap like any other logical volume and create
 it in a logical volume. This is now just speculation, but I bet it was
 setup this way because "swap" support was only added in BTRFS 5.0.
 
-I fully expect BTRFS fans to yell at me now because this is an old
+I fully expect BTRFS experts to yell at me now because this is an old
 setup and BTRFS is so much better now, but that's exactly the point
-here. That setup is not *that* old (2018? is that old? really?), and
-migrating to a new partition scheme isn't exactly practical right
-now. But let's move on to more practical considerations.
+here. That setup is not *that* old (2018? old? really?), and migrating
+to a new partition scheme isn't exactly practical right now. But let's
+move on to more practical considerations.
 
 # No builtin encryption
 
 BTRFS aims at replacing the entire [mdadm](https://en.wikipedia.org/wiki/Mdadm), [LVM][], and [ext4](https://en.wikipedia.org/wiki/Ext4)
-stack with a single entity, alongside adding new features like
+stack with a single entity, and adding new features like
 deduplication, checksums and so on.
 
-Yet there is one feature it is critically missing: encryption. See,
-*my* stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and ext4. This
-is convenient because I have only a single volume to decrypt.
+Yet there is one feature it is critically missing: encryption. See, my
+typical stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and
+ext4. This is convenient because I have only a single volume to
+decrypt.
 
 If I were to use BTRFS on servers, I'd need to have one LUKS volume
 *per-disk*. For a simple RAID-1 array, that's not too bad: one extra
 key. But for large RAID-10 arrays, this gets really unwieldy.
 
-The obvious BTRFS alternative, ZFS, supports encryption out of the box
-and mixes it above the disks so you only have one passphrase to
-enter.
+The obvious BTRFS alternative, ZFS, [supports encryption](https://arstechnica.com/gadgets/2021/06/a-quick-start-guide-to-openzfs-native-encryption/) out of
+the box and mixes it above the disks so you only have one passphrase
+to enter. The main downside of ZFS encryption is that it happens above
+the "pool" level so you can typically see filesystem names (and
+possibly snapshots, depending on how it is built), which is not the
+case with a more traditional stack.
 
 [LVM]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
 
@@ -107,13 +125,17 @@ traditional LVM stack (which is itself kind of confusing if you're new
 to that stuff), you have those layers:
 
  * disks: let's say `/dev/nvme0n1` and `nvme1n1`
- * mdadm RAID arrays: let's say the above disks are joined in a RAID-1
-   array in `/dev/md1`
- * LVM volume groups or VG: the above RAID device (technically a
+ * RAID arrays with mdadm: let's say the above disks are joined in a
+   RAID-1 array in `/dev/md1`
+ * volume groups or VG with LVM: the above RAID device (technically a
    "physical volume" or PV) is assigned into a VG, let's call it
-   `vg_tbbuild05`
+   `vg_tbbuild05` (multiple PVs can be added to a single VG which is
+   why there is that abstraction)
  * LVM logical volumes: out of *that* volume group actually "virtual
-   partitions" or "logical volumes" are created
+   partitions" or "logical volumes" are created, that is where your
+   filesystem lives
+ * filesystem, typically with ext4: that's your normal filesystem,
+   which treats the logical volume as just another block device
 
 A typical server setup would look like this:
 
@@ -130,18 +152,17 @@ A typical server setup would look like this:
     │     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
     └─nvme0n1p4               259:4    0     1M  0 part
 
-
 I stripped the other `nvme1n1` disk because it's basically the same.
 
-Now, if we look at my workstation, which doesn't even have RAID, we
-have the following:
+Now, if we look at my BTRFS-enabled workstation, which doesn't even
+have RAID, we have the following:
 
  * disk: `/dev/sda` with, again, `/dev/sda4` being where BTRFS lives
  * filesystem: `fedora_crypt`, which is, confusingly, kind of like a
    volume group. it's where everything lives. i think.
  * subvolumes: `home`, `root`, `/`, etc. those are actually the things
    that get mounted. you'd think you'd mount a filesystem, but no, you
-   mount a subvolume
+   mount a subvolume. that is backwards.
 
 It looks something like this to `lsblk`:
 
@@ -189,8 +210,17 @@ This is *really* confusing. I don't even know if I understand this
 right, and I've been staring at this all afternoon. Hopefully, the
 lazyweb will correct me eventually.
 
-So at least I can refer to this section in the future, the next time I
-fumble around the `btrfs` commandline.
+(As an aside, why are they called "subvolumes"? If something is a
+"[sub](https://en.wiktionary.org/wiki/sub#Latin)" of "something else", that "something else" must exist
+right? But no, BTRFS doesn't have "volumes", it only has
+"subvolumes". Go figure. Presumably the filesystem still holds "files"
+though, at least empirically it doesn't seem like it lost anything so

(Diff truncated)
expand
diff --git a/blog/btrfs-notes.md b/blog/btrfs-notes.md
index 98cdb177..2ec14a15 100644
--- a/blog/btrfs-notes.md
+++ b/blog/btrfs-notes.md
@@ -39,7 +39,7 @@ mixed messages even in the official BTRFS documentation which says
 time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
 RAID56).
 
-There are much harsher BTRFS critics than me [out there](https://2.5admins.com/) so I will
+There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I will
 stop here, but let's just say that I feel a little uncomfortable
 trusting server data with full RAID arrays to BTRFS. But surely, for a
 workstation, things should just work smoothly... Right? Let's see the
@@ -280,6 +280,55 @@ how much disk each volume (and snapshot) takes:
 
 That's 56360 times faster.
 
+But yes, that's not fair: those in the know will know there's a
+*different* command to do what `df` does with BTRFS filesystems, the
+`btrfs filesystem usage` command:
+
+    root@curie:/home/anarcat# time btrfs filesystem usage /srv
+    Overall:
+        Device size:		 922.47GiB
+        Device allocated:		 916.47GiB
+        Device unallocated:		   6.00GiB
+        Device missing:		     0.00B
+        Used:			 884.97GiB
+        Free (estimated):		  30.84GiB	(min: 27.84GiB)
+        Free (statfs, df):		  30.84GiB
+        Data ratio:			      1.00
+        Metadata ratio:		      2.00
+        Global reserve:		 512.00MiB	(used: 0.00B)
+        Multiple profiles:		        no
+
+    Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
+       /dev/mapper/fedora_crypt	 906.45GiB
+
+    Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
+       /dev/mapper/fedora_crypt	  10.00GiB
+
+    System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
+       /dev/mapper/fedora_crypt	  16.00MiB
+
+    Unallocated:
+       /dev/mapper/fedora_crypt	   6.00GiB
+
+    real	0m0,004s
+    user	0m0,000s
+    sys	0m0,004s
+
+Almost as fast as ZFS's df! Good job. But wait. That doesn't actually
+tell me usage per *subvolume*. Notice it's `filesystem usage`, not
+`subvolume usage`, which unhelpfully refuses to exist. That command
+only shows that one "filesystem"'s internal statistics, which are
+pretty opaque. You can also appreciate it's wasting 6GB of "unallocated"
+disk space there: I probably did something Very Wrong and should be
+punished by Hacker News. I also wonder why it has 1.68GB of "metadata"
+used...
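+
+For what it's worth, per-subvolume accounting apparently requires
+turning on quotas (which have their own performance cost), something
+like this, sketched from the manual pages rather than actually tested
+here:
+
+    btrfs quota enable /srv
+    btrfs qgroup show /srv
+
+Not exactly discoverable either.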
+
+At this point, I just really want to throw that thing out of the
+window and restart from scratch. I don't really feel like learning the
+BTRFS internals, as they seem oblique and completely bizarre to me. I
+bet that ZFS would do wonders here, and I'd get 8GB (or more?)
+back. Who knows.
+
 # Conclusion
 
 I find BTRFS utterly confusing and I'm worried about its
@@ -288,13 +337,27 @@ and coherence before I even consider running this anywhere else than a
 lab, and that's really too bad, because there are really nice features
 in BTRFS that would greatly help my workflow.
 
-Right now, I'm stuck with OpenZFS, which currently involves building
-kernel modules from scratch on every host. I'm hoping some day the
-copyright issues are resolved and we can at least ship binary
-packages, but the politics (e.g. convincing Debian that is the right
-thing to do, good luck) and the logistics (e.g. DKMS auto-builders? is
-that even a thing? how about signed DKMS packages? fun fun fun!) seem
-really impractical.
+Right now, I'm experimenting with OpenZFS. It's so much simpler, and
+just works, and it's rock solid. After [this 10 minute read](https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/), I had
+a good understanding of how ZFS worked: a `vdev` is kind of like a
+RAID array, a `zpool` is a volume group, and you create datasets
+(filesystems, like logical volumes + ext) underneath. In fact, that's
+probably all you need to know, unless you want to start optimizing
+more obscure things like [recordsize](https://klarasystems.com/articles/tuning-recordsize-in-openzfs/).
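+
+To make the vocabulary concrete, a minimal (hypothetical) setup would
+look something like this:
+
+    # one pool ("tank") backed by a single mirror vdev over two disks
+    zpool create tank mirror /dev/sda /dev/sdb
+    # datasets live inside the pool and mount under /tank by default
+    zfs create tank/home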
+
+Running ZFS on Linux currently involves building kernel modules from
+scratch on every host. But I was able to set up a ZFS-only server
+using [this excellent documentation](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/) without too much trouble.
+
+I'm hoping some day the copyright issues are resolved and we can at
+least ship binary packages, but the politics (e.g. convincing Debian
+that is the right thing to do, good luck) and the logistics (e.g. DKMS
+auto-builders? is that even a thing? how about signed DKMS packages?
+fun fun fun!) seem really impractical. Who knows, maybe hell will
+freeze over ([again](https://blogs.gnome.org/uraeus/2022/05/11/why-is-the-open-source-driver-release-from-nvidia-so-important-for-linux/)) and Oracle will fix the CDDL. I personally
+think that we should just completely ignore this problem and ship
+binary packages, but I'm a pragmatic and do not always fit well with
+the free software fundamentalists.
 
 Which means that, short term, we don't have a reliable, advanced
 filesystem in Linux. And that's really too bad.

btrfs notes
diff --git a/blog/btrfs-notes.md b/blog/btrfs-notes.md
new file mode 100644
index 00000000..98cdb177
--- /dev/null
+++ b/blog/btrfs-notes.md
@@ -0,0 +1,302 @@
+[[!meta title="BTRFS notes"]]
+
+I'm not a fan of [BTRFS](https://btrfs.wiki.kernel.org/). This page serves as a reminder of why,
+but also a cheat sheet to figure out basic tasks in a BTRFS
+environment because those are *not* obvious when coming from any other
+filesystem environment.
+
+Trigger warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
+
+[[!toc]]
+
+# Stability concerns
+
+I'm a little worried about its [stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been
+historically quite flaky. RAID-5 and RAID-6 are still marked
+[unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for example, and it's kind of a lucky guess whether
+your current kernel will behave properly with your planned
+workload. For example, [with Linux 4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status) were marked as "mostly OK"
+with a note that says:
+
+> Needs to be able to create two copies always. Can get stuck in
+> irreversible read-only mode if only one copy can be made.
+
+Even as of now, RAID-1 and RAID-10 have this note:
+
+> The simple redundancy RAID levels utilize different mirrors in a way
+> that does not achieve the maximum performance. The logic can be
+> improved so the reads will spread over the mirrors evenly or based
+> on device congestion.
+
+There are [other warnings](https://wiki.debian.org/Btrfs#Other_Warnings) in the Debian wiki that are quite
+worrisome. Even if those are fixed, it can be hard to tell *when* they
+were fixed.
+
+It would have been much better if BTRFS was released into the world
+only when those bugs were being completely fixed. Even now, we get
+mixed messages even in the official BTRFS documentation which says
+"The Btrfs code base is stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same
+time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
+RAID56).
+
+There are much harsher BTRFS critics than me [out there](https://2.5admins.com/) so I will
+stop here, but let's just say that I feel a little uncomfortable
+trusting server data with full RAID arrays to BTRFS. But surely, for a
+workstation, things should just work smoothly... Right? Let's see the
+snags I hit.
+
+# My BTRFS test setup
+
+Before I go any further, it will probably help to clarify how I am
+testing BTRFS in the first place.
+
+The reason I tried BTRFS is that I was ... let's just say "strongly
+encouraged" by the [LWN](https://lwn.net) editors to install [Fedora](https://getfedora.org/) for the
+[[terminal emulators series|blog/2018-04-12-terminal-emulators-1]].
+That, in turn, meant the setup was done with BTRFS, because that was
+somewhat the default in Fedora 27 (or did I want to experiment? I
+don't remember, it's been too long already).
+
+So Fedora was setup on my 1TB HDD and, with encryption, the partition
+table looks like this:
+
+    NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
+    sda                      8:0    0 931,5G  0 disk  
+    ├─sda1                   8:1    0   200M  0 part  /boot/efi
+    ├─sda2                   8:2    0     1G  0 part  /boot
+    ├─sda3                   8:3    0   7,8G  0 part  
+    │ └─fedora_swap        253:5    0   7.8G  0 crypt [SWAP]
+    └─sda4                   8:4    0 922,5G  0 part  
+      └─fedora_crypt       253:4    0 922,5G  0 crypt /
+
+This is pretty straightforward, except for the swap partition:
+normally, I just treat swap like any other logical volume and create
+it in a logical volume. This is now just speculation, but I bet it was
+setup this way because "swap" support was only added in BTRFS 5.0.
+
+I fully expect BTRFS fans to yell at me now because this is an old
+setup and BTRFS is so much better now, but that's exactly the point
+here. That setup is not *that* old (2018? is that old? really?), and
+migrating to a new partition scheme isn't exactly practical right
+now. But let's move on to more practical considerations.
+
+# No builtin encryption
+
+BTRFS aims at replacing the entire [mdadm](https://en.wikipedia.org/wiki/Mdadm), [LVM][], and [ext4](https://en.wikipedia.org/wiki/Ext4)
+stack with a single entity, alongside adding new features like
+deduplication, checksums and so on.
+
+Yet there is one feature it is critically missing: encryption. See,
+*my* stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and ext4. This
+is convenient because I have only a single volume to decrypt.
+
+If I were to use BTRFS on servers, I'd need to have one LUKS volume
+*per-disk*. For a simple RAID-1 array, that's not too bad: one extra
+key. But for large RAID-10 arrays, this gets really unwieldy.
+
+The obvious BTRFS alternative, ZFS, supports encryption out of the box
+and mixes it above the disks so you only have one passphrase to
+enter.
+
+[LVM]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
+
+# Subvolumes, filesystems, and devices
+
+I find BTRFS's architecture to be utterly confusing. In the
+traditional LVM stack (which is itself kind of confusing if you're new
+to that stuff), you have those layers:
+
+ * disks: let's say `/dev/nvme0n1` and `nvme1n1`
+ * mdadm RAID arrays: let's say the above disks are joined in a RAID-1
+   array in `/dev/md1`
+ * LVM volume groups or VG: the above RAID device (technically a
+   "physical volume" or PV) is assigned into a VG, let's call it
+   `vg_tbbuild05`
+ * LVM logical volumes: out of *that* volume group actually "virtual
+   partitions" or "logical volumes" are created
+
+A typical server setup would look like this:
+
+    NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
+    nvme0n1                   259:0    0   1.7T  0 disk  
+    ├─nvme0n1p1               259:1    0     8M  0 part  
+    ├─nvme0n1p2               259:2    0   512M  0 part  
+    │ └─md0                     9:0    0   511M  0 raid1 /boot
+    ├─nvme0n1p3               259:3    0   1.7T  0 part  
+    │ └─md1                     9:1    0   1.7T  0 raid1 
+    │   └─crypt_dev_md1       253:0    0   1.7T  0 crypt 
+    │     ├─vg_tbbuild05-root 253:1    0    30G  0 lvm   /
+    │     ├─vg_tbbuild05-swap 253:2    0 125.7G  0 lvm   [SWAP]
+    │     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
+    └─nvme0n1p4               259:4    0     1M  0 part
+
+
+I stripped the other `nvme1n1` disk because it's basically the same.
+
+Now, if we look at my workstation, which doesn't even have RAID, we
+have the following:
+
+ * disk: `/dev/sda` with, again, `/dev/sda4` being where BTRFS lives
+ * filesystem: `fedora_crypt`, which is, confusingly, kind of like a
+   volume group. it's where everything lives. i think.
+ * subvolumes: `home`, `root`, `/`, etc. those are actually the things
+   that get mounted. you'd think you'd mount a filesystem, but no, you
+   mount a subvolume
+
+It looks something like this to `lsblk`:
+
+    NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
+    sda                      8:0    0 931,5G  0 disk  
+    ├─sda1                   8:1    0   200M  0 part  /boot/efi
+    ├─sda2                   8:2    0     1G  0 part  /boot
+    ├─sda3                   8:3    0   7,8G  0 part  [SWAP]
+    └─sda4                   8:4    0 922,5G  0 part  
+      └─fedora_crypt       253:4    0 922,5G  0 crypt /srv
+
+Notice how we don't see all the BTRFS volumes here? Maybe it's because
+I'm mounting this from the Debian side, but `lsblk` definitely gets
+confused here. I frankly don't quite understand what's going on, even
+after repeatedly looking around the [rather dismal
+documentation](https://btrfs.readthedocs.io/en/latest/). But that's what I gather from the following
+commands:
+
+    root@curie:/home/anarcat# btrfs filesystem show
+    Label: 'fedora'  uuid: 5abb9def-c725-44ef-a45e-d72657803f37
+    	Total devices 1 FS bytes used 883.29GiB
+    	devid    1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt
+
+    root@curie:/home/anarcat# btrfs subvolume list /srv
+    ID 257 gen 108092 top level 5 path home
+    ID 258 gen 108094 top level 5 path root
+    ID 263 gen 108020 top level 258 path root/var/lib/machines
+
+I only got to that point through trial and error. Notice how I use an
+existing mountpoint to list the related subvolumes. If I try to use
+the filesystem path, the one that's listed in `filesystem show`, I
+fail:
+
+    root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt 
+    ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
+    ERROR: can't access '/dev/mapper/fedora_crypt'
+
+Maybe I just need to use the label? Nope:
+
+    root@curie:/home/anarcat# btrfs subvolume list fedora
+    ERROR: cannot access 'fedora': No such file or directory
+    ERROR: can't access 'fedora'
+
+This is *really* confusing. I don't even know if I understand this
+right, and I've been staring at this all afternoon. Hopefully, the
+lazyweb will correct me eventually.
+
+So at least I can refer to this section in the future, the next time I
+fumble around the `btrfs` commandline.
+

(Diff truncated)
approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_bf38323e3eb278f95fea5e291a90178d._comment b/blog/2022-04-27-sbuild-qemu/comment_1_bf38323e3eb278f95fea5e291a90178d._comment
new file mode 100644
index 00000000..97719090
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_bf38323e3eb278f95fea5e291a90178d._comment
@@ -0,0 +1,32 @@
+[[!comment format=mdwn
+ ip="84.114.211.250"
+ claimedauthor="Christian Kastner"
+ subject="Faster bootup"
+ date="2022-05-08T16:57:09Z"
+ content="""
+>Do say more about this! I would love to get faster bootup, that's the main pain point right now. It does feel like runtime performance impact is negligible (but I'd love to improve on that too), but startup time definitely feels slow.
+
+Well, on my local amd64 system, a full boot to a console takes about 9s, 5-6s of which are spent in GRUB, loading the kernel and initramfs, and so on.
+
+If one doesn't need the GRUB menu, I guess 1s could be shaved off by setting GRUB_TIMEOUT=0 in /usr/share/sbuild/sbuild-qemu-create-modscript, then rebuilding a VM.
+
+It seems that the most time-consuming step is loading the initramfs and the initial boot, and while I haven't looked into it yet, I feel like this could also be optimized. Minimal initramfs, minimal hardware, etc.
+
+>Are you familiar with Qemu's microvm platform? How would we experiment with stuff like that in the sbuild-qemu context?
+
+I've stumbled over it about a year ago, but didn't get it to run -- I think I had an older QEMU environment. With 1.7 now in bullseye-backports, I need to give it a try again soon.
+
+However, as far as I understand it, microvm only works for a very limited x86_64 environment. In other words, this would provide only the isolation features of a VM, but not point (1) and (2) of my earlier comment.
+
+Not that I'm against that, on the contrary, I'd still like to add that as a \"fast\" option.
+
+firecracker-vm (Rust, Apache 2.0, maintained by Amazon on GitHub) also provides a microvm-like solution. Haven't tried it yet, though.
+
+> How do I turn on host=guest?
+
+That should happen automatically through autopkgtest. sbuild-qemu calls sbuild, sbuild bridges with autopkgtest-virt-qemu, autopkgtest-virt-qemu has the host=guest detection built in.
+
+It's odd that there's nothing in the logs indicating whether this is happening (not even with --verbose or --debug), but a simple test is: if the build is dog-slow, as in 10-15x slower than native , it's without KVM :-)
+
+Note that in order to use KVM, the building user must be in the 'kvm' group.
+"""]]
diff --git a/blog/2022-05-06-wallabako-1.4.0-released/comment_1_50d37bb98c62941cbab992e827d478d6._comment b/blog/2022-05-06-wallabako-1.4.0-released/comment_1_50d37bb98c62941cbab992e827d478d6._comment
new file mode 100644
index 00000000..f9643455
--- /dev/null
+++ b/blog/2022-05-06-wallabako-1.4.0-released/comment_1_50d37bb98c62941cbab992e827d478d6._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="91.54.59.28"
+ subject="comment 1"
+ date="2022-05-07T05:17:34Z"
+ content="""
+Thank you for the update, I've encountered wallabako some month back and really like it. 
+
+My motivation and setup for using it is similar to yours. However, I'm using wallabako als cli on OpenBSD and save epubs to a syncthing share that is connected with my pocketbook reader.
+
+Keep going.
+"""]]

small edits
diff --git a/blog/2022-05-06-wallabako-1.4.0-released.md b/blog/2022-05-06-wallabako-1.4.0-released.md
index a3d759aa..59325f01 100644
--- a/blog/2022-05-06-wallabako-1.4.0-released.md
+++ b/blog/2022-05-06-wallabako-1.4.0-released.md
@@ -16,9 +16,9 @@ readers probably don't even know I sometimes meddle with in
 
 # What's Wallabako
 
-Wallabako is a weird little program I designed to read articles on my
-E-book reader. I use it to spend less time on the computer: I save
-articles in a read-it-later app named [Wallabag](https://wallabag.org/) (hosted by a
+[Wallabako](https://gitlab.com/anarcat/wallabako/) is a weird little program I designed to read articles
+on my E-book reader. I use it to spend less time on the computer: I
+save articles in a read-it-later app named [Wallabag](https://wallabag.org/) (hosted by a
 generous friend), and then Wallabako connects to that app, downloads
 an EPUB version of the book, and then I can read it on the device
 directly.
@@ -33,6 +33,12 @@ interface (called "Nickel"), [Koreader](https://koreader.rocks/) and [Plato](htt
 use Koreader for everything nowadays, but it should work equally well
 on the others.
 
+Wallabako is actually set up to be started by `udev` when there's a
+connection change detected by the kernel, which is kind of a gross
+hack. It's clunky, but it actually works. I thought for a while about
+switching to something else, but it's really the easiest way to go,
+and the one that requires the least interaction from the user.
+
 # Why I'm (still) using it
 
 I wrote Wallabako because I read a *lot* of articles on the
@@ -158,19 +164,18 @@ that thing to the ebook reader at every code iteration.
 I had originally thought I should add some sort of graphical interface
 in Koreader for Wallabako as well, and had [requested that feature
 upstream](https://github.com/koreader/koreader/issues/2621). Unfortunately (or fortunately?), they took my idea and
-just *ran* with it. Some courageous soul actually [wrote a full Wallabag plugin for
-koreader][] which makes implementing koreader support in Wallabako a
-much less pressing issue.
+just *ran* with it. Some courageous soul actually [wrote a full
+Wallabag plugin for koreader][], in Lua of course.
 
 Compared to the Wallabako implementation however, the koreader plugin
-is much slower, as it downloads articles serially instead of
-concurrently. It is, however, much more usable as the user is given a
-visible feedback of the various steps. I still had to enable full
-debugging to diagnose a problem (which was that I shouldn't have a
-trailing slash, and that some special characters don't work in
-passwords). It's also better to write the config file with a normal
-text editor, over SSH or with the Kobo mounted to your computer
-instead of typing those really long strings over the kobo.
+is much slower, probably because it downloads articles serially
+instead of concurrently. It is, however, much more usable, as the user
+is given visible feedback of the various steps. I still had to
+enable full debugging to diagnose a problem (which was that I
+shouldn't have a trailing slash, and that some special characters
+don't work in passwords). It's also better to write the config file
+with a normal text editor, over SSH or with the Kobo mounted to your
+computer instead of typing those really long strings on the Kobo.
 
 There's [no sample config file][] which makes that harder but a
 workaround is to save the configuration with dummy values and fix them
@@ -180,7 +185,7 @@ loss][] (Wallabag article being deleted!) for an unsuspecting user...
 
 [lead to data loss]: https://github.com/koreader/koreader/issues/8936
 [no sample config file]: https://github.com/koreader/koreader/issues/7576
-[wrote a fullWallabag plugin for koreader]: https://github.com/koreader/koreader/pull/4271 
+[wrote a full Wallabag plugin for koreader]: https://github.com/koreader/koreader/pull/4271 
 
 So basically, I started working on Wallabako again because the koreader
 implementation of their Wallabag client was not up to spec for me. It

wallabako release
diff --git a/blog/2022-05-06-wallabako-1.4.0-released.md b/blog/2022-05-06-wallabako-1.4.0-released.md
new file mode 100644
index 00000000..a3d759aa
--- /dev/null
+++ b/blog/2022-05-06-wallabako-1.4.0-released.md
@@ -0,0 +1,255 @@
+[[!meta title="Wallabako 1.4.0 released"]]
+
+I don't particularly like it when people announce their personal
+projects on their blog, but I'm making an exception for this one,
+because it's a little special for me.
+
+You see, I have just released [Wallabako 1.4.0](https://gitlab.com/anarcat/wallabako/-/tags/1.4.0) (and a quick,
+mostly irrelevant [1.4.1 hotfix](https://gitlab.com/anarcat/wallabako/-/tags/1.4.1)) today. It's the first release of
+that project in almost 3 years (the previous was [1.3.1](https://gitlab.com/anarcat/wallabako/-/tags/1.3.1), before
+the pandemic).
+
+The other reason I figured I would mention it is that I have almost
+*never* talked about Wallabako on this blog at all, so many of my
+readers probably don't even know I sometimes meddle in
+[Golang](https://go.dev/), which surprises even me sometimes.
+
+# What's Wallabako
+
+Wallabako is a weird little program I designed to read articles on my
+E-book reader. I use it to spend less time on the computer: I save
+articles in a read-it-later app named [Wallabag](https://wallabag.org/) (hosted by a
+generous friend), and then Wallabako connects to that app, downloads
+an EPUB version of the book, and then I can read it on the device
+directly.
+
+When I'm done reading the book, Wallabako notices and sets the article
+as read in Wallabag. I also set it to delete the book locally, but you
+can actually configure it to keep those books around forever if you
+feel like it.
+
+Wallabako supports syncing read status with the built-in Kobo
+interface (called "Nickel"), [Koreader](https://koreader.rocks/) and [Plato](https://github.com/baskerville/plato/). I happen to
+use Koreader for everything nowadays, but it should work equally well
+on the others.
+
+# Why I'm (still) using it
+
+I wrote Wallabako because I read a *lot* of articles on the
+internet. It's actually *most* of my reading. I read about 10 books a
+year (which I don't think is much), but I probably read more in terms
+of time and pages in Wallabag. I haven't actually done the math, but I
+estimate I spend at least twice as much time reading articles as I
+spend reading books.
+
+If I didn't have Wallabag, I would have hundreds of tabs open in my
+web browser all the time. So at least that problem is easily solved:
+throw everything in Wallabag, sort and read later.
+
+If I didn't have Wallabako, however, I would either spend that time
+reading on the computer -- which I would rather spend working on free
+software or work -- or on my phone -- which is kind of better, but
+really cramped.
+
+I had stopped using (and developing) Wallabako for a while,
+actually. Around 2019, I got tired of always reading those technical
+articles (basically work stuff!) at home. I realized I was just not
+"reading" (as in books! fiction! fun stuff!) anymore, at least not as
+much as I wanted.
+
+So I tried to make this separation: the ebook reader is for cool book
+stuff. The rest is work. But because I had the Wallabag Android app on
+my phone and tablet, I could still read those articles there, which I
+thought was pretty neat. But that meant that I was constantly looking
+at my phone, which is something I'm generally trying to avoid, as it
+sets a bad example for the kids (small and big) around me.
+
+Then I realized there was one stray ebook reader lying around at
+home. I had recently [[bought a Kobo Clara
+HD|hardware/tablet/kobo-clara-hd]] to read books, and I like that
+device. And it's going to stay locked down to reading books. But
+there's still that old battered Kobo Glo HD reader lying around, and
+I figured I could just borrow it to read Wallabag articles.
+
+# What is this new release
+
+But oh boy that was a lot of work. Wallabako was kind of a mess: it
+was using the deprecated [go dep](https://github.com/golang/dep) tool, which lost the battle with
+[go mod](https://go.dev/ref/mod). Cross-compilation was broken for older devices, and I had
+to implement support for Koreader.
+
+## go mod
+
+So I had to learn `go mod`. I'm still not sure I got that part right:
+LSP is yelling at me because it can't find the imports, and I'm
+generally just "[YOLO everything][]" every time I get anywhere close
+to it. That's not the way to do Go, in general, and not how I like to
+do it either.
+
+[YOLO everything]: https://en.wikipedia.org/wiki/YOLO_(aphorism)
+
+But I guess that, given time, I'll figure it out and make it work for
+me. It certainly works now. I think.
+
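+For reference, the usual `go mod` dance looks roughly like this (a
+generic sketch, with the module path assumed):
+
+    # from the root of the source tree
+    go mod init gitlab.com/anarcat/wallabako
+    go mod tidy    # derive go.mod/go.sum from the existing imports
+    go build ./...
+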
+## Cross compilation
+
+The hard part was different. You see, Nickel uses [SQLite](https://www.sqlite.org/) to store
+metadata about books, so Wallabako actually needs to tap into that
+SQLite database to propagate read status. Originally, I just linked
+against some [sqlite3 library](https://github.com/mattn/go-sqlite3) I found lying around. It's basically
+a wrapper around the C-based SQLite and generally works fine. But that
+means you actually link your Golang program against a C library. And
+that's when things get a little nutty.
+
+If you were to just build Wallabako naively, it would [fail when deployed
+on the Kobo Glo HD](https://gitlab.com/anarcat/wallabako/-/issues/43). That's because the device runs a really old
+kernel: the prehistoric `Linux kobo 2.6.35.3-850-gbc67621+ #2049
+PREEMPT Mon Jan 9 13:33:11 CST 2017 armv7l GNU/Linux`. That was built
+in 2017, but the kernel was actually [released in 2010](https://kernelnewbies.org/Linux_2_6_35), a whole *5
+years* before the [Glo HD was released, in 2015](https://wiki.mobileread.com/wiki/Kobo_Glo_HD), which is kind of
+outrageous. And yes, that is with the [latest firmware release](https://wiki.mobileread.com/wiki/Kobo_Firmware_Releases).
+
+My bet is they just don't upgrade the kernel on those things, as the
+Glo was probably bought around 2017...
+
+In any case, the problem is we are cross-compiling here. And Golang is
+pretty good about cross-compiling, but because we have C in there,
+we're actually cross-compiling with "CGO" which is really just Golang
+with a GCC backend. And that's much, much harder to figure out because
+you need to pass down flags into GCC and so on. It was a nightmare.
+
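+To give an idea of what that looks like, a CGO cross-build for a
+32-bit ARM target is roughly this (a sketch; the real Kobo toolchain
+setup is more involved):
+
+    CGO_ENABLED=1 GOOS=linux GOARCH=arm GOARM=7 \
+      CC=arm-linux-gnueabihf-gcc \
+      go build -o wallabako-armhf .
+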
+That's until I found this outrageous "little" project called
+[modernc.org/sqlite](https://modernc.org/sqlite). What that thing does (with a hefty dose of
+dependencies that would make any Debian developer recoil in horror) is
+to *transpile* the SQLite C source code to Golang. You read that
+right: it rewrites SQLite in Go. On the fly. It's nuts.
+
+But it works. And you end up with a "pure go" program, and that thing
+compiles much faster and runs fine on older kernels.
+
+I still wasn't sure I wanted to just stick with that forever, so I
+kept the old sqlite3 code around, behind a compile-time tag. At the
+top of the `nickel_modernc.go` file, there's this magic string:
+
+    //+build !sqlite3
+
+And at the top of `nickel_sqlite3.go` file, there's this magic string:
+
+    //+build sqlite3
+
+So now, by default, the `modernc` file gets included, but if I pass
+`--tags sqlite3` to the Go compiler (to `go install` or whatever), it
+will actually switch to the other implementation. Pretty neat stuff.
+
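+Concretely, that means the two variants are built like this (using
+`go build` here, but `go install` works the same way):
+
+    # default: the pure-Go modernc.org/sqlite implementation
+    go build .
+    # switch back to the CGO-based sqlite3 implementation
+    go build --tags sqlite3 .
+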
+## Koreader port
+
+The last part was something I had been hesitant to do for a long time,
+but that turned out to be pretty easy. I have basically switched to
+using Koreader to read everything. Books, PDFs, everything goes through
+it. I really like that it stores its metadata in sidecar files: I
+synchronize all my books with [Syncthing](https://syncthing.net/) which means I can carry
+my read status, annotations and all that stuff without having to think
+about it. (And yes, I installed Syncthing on my Kobo.)
+
+The [koreader.go port](https://gitlab.com/anarcat/wallabako/-/blob/8cce90771fbef9f8089a1f0569c184c6aa67f8d0/koreader.go) was less than 80 lines, and I could even
+make a nice little [test suite](https://gitlab.com/anarcat/wallabako/-/blob/8cce90771fbef9f8089a1f0569c184c6aa67f8d0/koreader_test.go) so that I don't have to redeploy
+that thing to the ebook reader at every code iteration.
+
+I had originally thought I should add some sort of graphical interface
+in Koreader for Wallabako as well, and had [requested that feature
+upstream](https://github.com/koreader/koreader/issues/2621). Unfortunately (or fortunately?), they took my idea and
+just *ran* with it. Some courageous soul actually [wrote a full Wallabag plugin for
+koreader][] which makes implementing koreader support in Wallabako a
+much less pressing issue.
+
+Compared to the Wallabako implementation however, the koreader plugin
+is much slower, as it downloads articles serially instead of
+concurrently. It is, however, much more usable as the user is given a
+visible feedback of the various steps. I still had to enable full
+debugging to diagnose a problem (which was that I shouldn't have a
+trailing slash, and that some special characters don't work in
+passwords). It's also better to write the config file with a normal
+text editor, over SSH or with the Kobo mounted to your computer
+instead of typing those really long strings over the kobo.
+
+There's [no sample config file][] which makes that harder but a
+workaround is to save the configuration with dummy values and fix them
+up after. Finally I also found the default setting ("Remotely delete
+finished articles") really dangerous as it can basically [lead to data
+loss][] (Wallabag article being deleted!) for an unsuspecting user...
+
+[lead to data loss]: https://github.com/koreader/koreader/issues/8936
+[no sample config file]: https://github.com/koreader/koreader/issues/7576
+[wrote a fullWallabag plugin for koreader]: https://github.com/koreader/koreader/pull/4271 
+
+So basically, I started working on Wallabako again because the koreader
+implementation of their Wallabag client was not up to spec for me. It
+might be good enough for you, but I guess if you like Wallabako, you
+should thank the koreader folks for their sloppy implementation, as
+I'm now working again on Wallabako.
+
+# Actual release notes
+
+Those are the actual [release notes for 1.4.0](https://gitlab.com/anarcat/wallabako/-/tags/1.4.0).
+

(Diff truncated)
response
diff --git a/blog/2022-04-27-sbuild-qemu/comment_7_4f11b05e9bb491c60920bf280c5076d9._comment b/blog/2022-04-27-sbuild-qemu/comment_7_4f11b05e9bb491c60920bf280c5076d9._comment
new file mode 100644
index 00000000..080d1407
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_7_4f11b05e9bb491c60920bf280c5076d9._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""optimizing qemu"""
+ date="2022-05-06T13:08:12Z"
+ content="""
+> If host=guest arch and KVM is enabled, the emulation overhead is negligible, and I guess the boot process could be sped up with a trick or two. Most of the time is spent in the BIOS resp. UEFI environment.
+
+Do say more about this! I would love to get faster bootup, that's the main pain point right now. It does feel like runtime performance impact is negligible (but I'd love to improve on that too), but startup time definitely feels slow.
+
+Are you familiar with Qemu's [microvm platform](https://qemu.readthedocs.io/en/latest/system/i386/microvm.html)? How would we experiment with stuff like that in the sbuild-qemu context? How do I turn on `host=guest`?
+
+Thanks for the feedback!
+"""]]

approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_ab0513b031389c5991892d5c1d2f256b._comment b/blog/2022-04-27-sbuild-qemu/comment_1_ab0513b031389c5991892d5c1d2f256b._comment
new file mode 100644
index 00000000..5e003ebe
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_ab0513b031389c5991892d5c1d2f256b._comment
@@ -0,0 +1,22 @@
+[[!comment format=mdwn
+ ip="84.114.211.250"
+ claimedauthor="Christian Kastner"
+ subject="Why a VM"
+ date="2022-05-04T14:57:19Z"
+ content="""
+> Why not a container?
+
+> You can obtain the same level of security/isolation, with 1/100 of the effort.
+
+Containers can have a good level of security/isolation, but VM isolation is still stronger.
+
+In any case, two clear advantages that VMs have over containers are that (1) one can test entire systems, which (2) may also have a foreign architecture.  There are also a number of other minor advantages to using QEMU of course, e.g. snapshotting.
+
+For example, I maintain the keyutils package, and in the process have discovered architecture-specific bugs in the kernel, for architectures I don't have physical access to. I needed to run custom kernels to debug these, and I can't do that with containers or on porterboxes.
+
+As a co-maintainer of scikit-learn, I've also discovered a number of upstream issues in scikit-learn and numpy for the architectures that upstreams can't/don't really test in CI (e.g. 32-bit ARM). I've run into all kinds of issues with porterboxes (e.g.: not enough space), which I don't have with a local image.
+
+So in my case, there's no way around sbuild-qemu and autopkgtest-virt-qemu anyway. And (echoing anarcat here) on the plus side: KVM + QEMU just feel much cleaner for isolation.
+
+If host=guest arch and KVM is enabled, the emulation overhead is negligible, and I guess the boot process could be sped up with a trick or two. Most of the time is spent in the BIOS resp. UEFI environment.
+"""]]

response
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_1c87c913f1eedf922415b7e9e61d82ab._comment b/blog/2022-04-27-sbuild-qemu/comment_1_1c87c913f1eedf922415b7e9e61d82ab._comment
new file mode 100644
index 00000000..953c6bf9
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_1c87c913f1eedf922415b7e9e61d82ab._comment
@@ -0,0 +1,41 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""Re: why a VM"""
+ date="2022-05-03T13:19:54Z"
+ content="""
+> My main point was in terms of effort.
+
+But what effort do you see in maintaining a VM image exactly?
+
+> Playing with cgroups, namespaces, and the selinux beast is surely haphazard, but nobody does it manually.
+>
+> The typical use case are the CI/CD pipelines, where the developer just choose the base image to use.
+
+(I also use Docker to run GitLab CI pipelines, for the record, but that's not the topic here.)
+
+See, that's the big lie of containers. I could also just "choose a base image" for a VM, say from Vagrant boxes or whatever. I don't do that, because I like to know where the heck my stuff comes from. I pull it from official Debian mirrors, so I know exactly what's in there.
+
+When you pull from Docker hub, you have a somewhat dubious trace of where those images come from. We (Debian members) maintain a few official images, but I find their build process to be personally quite [convoluted and confusing](https://github.com/debuerreotype/debuerreotype).
+
+> It's just few lines of JSON or yaml and we are done.
+
+See, the funny thing right there is I don't even know what you're talking about here, and I've been deploying containers for years. Are you referring to a Docker compose YAML file? Or a Kubernetes manifest? Or the container image metadata? Are you actually writing *that* by hand!?
+
+That doesn't show me how you set up security in your containers, nor what guarantees it offers. Do you run the containers as root? Do you enable user namespaces? selinux or apparmor? What kind of seccomp profile will that build need?
+
+Those are *all* questions I'd need to have answered if I wanted any sort of isolation even *remotely* close to what a VM offers.
+
+> There's no need to update the system, configure services, keep an eye on the resources, etc.
+
+You still need to update the image. I don't need to configure services or keep an eye on resources in my model either.
+
+> Again, how you do it is amazing and clean, but surely not a scalable solution 
+
+It seems you're trying to sell me on the idea that containers are great and scale better than VMs for the general case, in an article where I specifically advise users to use a VM for the *specific* case of providing *stronger* isolation for untrusted builds. I don't need to scale those builds to thousands of builds a day (but I *will* note that the Debian buildds have been doing this for a long time without containers).
+
+> I personally use a mix of different solutions based on the customer needs, often it's podman or Docker, but not always. 
+
+Same, it's not one size fits all. The topic here is building Debian packages, and I find qemu to be a great fit.
+
+I dislike containers, but they're sometimes the best tool for the job. Just not in this case.
+"""]]

approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_cdc867100dbd3c6f5a8466e8146e825f._comment b/blog/2022-04-27-sbuild-qemu/comment_1_cdc867100dbd3c6f5a8466e8146e825f._comment
new file mode 100644
index 00000000..35140737
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_cdc867100dbd3c6f5a8466e8146e825f._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ ip="213.55.225.140"
+ claimedauthor="Antenore"
+ subject="Re: why a VM"
+ date="2022-05-02T19:13:06Z"
+ content="""
+Thanks for your answer.
+In general, yes, a VM is more secure as the system resources are separated; in that sense you are right. My main point was in terms of effort.
+Playing with cgroups, namespaces, and the selinux beast is surely haphazard, but nobody does it manually.
+The typical use case are the CI/CD pipelines, where the developer just choose the base image to use. It's just few lines of JSON or yaml and we are done.
+There's no need to update the system, configure services, keep an eye on the resources, etc.
+
+Again, how you do it is amazing and clean, but surely not a scalable solution 
+
+I personally use a mix of different solutions based on the customer needs, often it's podman or Docker, but not always. 
+
+"""]]

follow directory rename
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 16a3cc73..aeed33e3 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -114,7 +114,7 @@ tired of them...
    down". this can be disabled with `:unbind <C-f>`. also see the
    [builtin Firefox shortcuts][] and the `pentadactyl` entry in the
    XULocalypse section below. [Krabby](https://krabby.netlify.com/), another of those
-   implementations, has an [interesting list of alternatives](https://github.com/alexherbo2/krabby/blob/master/doc/alternatives.md).
+   implementations, has an [interesting list of alternatives](https://github.com/alexherbo2/krabby/blob/c525cf13962f72f4810fdc8f8032e6d9001308ea/docs/alternatives.md).
 
 ## Previously used
 

fix typo in mta article, thanks nick black
diff --git a/blog/2020-04-14-opendkim-debian.mdwn b/blog/2020-04-14-opendkim-debian.mdwn
index fb6faefb..f3647c11 100644
--- a/blog/2020-04-14-opendkim-debian.mdwn
+++ b/blog/2020-04-14-opendkim-debian.mdwn
@@ -74,7 +74,7 @@ If one of those is missing, then you are doing something wrong and
 your "spamminess" score will be worse. The latter is especially tricky
 as it validates the "Envelope From", which is the `MAIL FROM:` header
 as sent by the originating MTA, which you see as `from=<>` in the
-postfix lost.
+postfix logs.
     
 The following will happen anyways, as soon as you have a signature,
 that's normal:

response
diff --git a/blog/2022-04-27-sbuild-qemu/comment_3_1fad8e5625f8f744a57f8a455101deb6._comment b/blog/2022-04-27-sbuild-qemu/comment_3_1fad8e5625f8f744a57f8a455101deb6._comment
new file mode 100644
index 00000000..b3147179
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_3_1fad8e5625f8f744a57f8a455101deb6._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""why a VM"""
+ date="2022-05-02T18:03:35Z"
+ content="""
+> Why not a container?
+
+A "container" doesn't actually exist in the Linux kernel. It's a hodge-podge collection of haphazard security measures that are really hard to get right. Some do, most don't.
+
+Besides, which container are you referring to? I know of `unshare`, LXC, LXD, Docker, podman... it can mean so many things that it actually loses its meaning.
+
+I find Qemu + KVM to be much cleaner, and yes, it does provide a much stronger security isolation than a container.
+"""]]

approve comment
diff --git a/blog/2022-04-27-lsp-in-debian/comment_1_c41ae2a49d701b2bdbdb747bf92e241d._comment b/blog/2022-04-27-lsp-in-debian/comment_1_c41ae2a49d701b2bdbdb747bf92e241d._comment
new file mode 100644
index 00000000..2d8c37b2
--- /dev/null
+++ b/blog/2022-04-27-lsp-in-debian/comment_1_c41ae2a49d701b2bdbdb747bf92e241d._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="188.27.129.132"
+ claimedauthor="Thomas Koch"
+ url="https://blog.koch.ro"
+ subject="thx, elpa-lsp-haskell"
+ date="2022-04-29T11:45:32Z"
+ content="""
+Thank you for featuring the emacs lsp setup. My goal is to have a usable Haskell development environment in Debian only one `apt install` away.
+
+Unfortunately the *elpa-lsp-haskell* package is not the Haskell language server but only a small emacs package to connect to it, see this RFP instead:
+
+[#968373](https://bugs.debian.org/968373) RFP: hls+ghcide -- Haskell Development Environment and Language Server
+"""]]
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_281d153f1739942af7d5b25ceb680c69._comment b/blog/2022-04-27-sbuild-qemu/comment_1_281d153f1739942af7d5b25ceb680c69._comment
new file mode 100644
index 00000000..615ebf35
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_281d153f1739942af7d5b25ceb680c69._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="213.55.240.88"
+ claimedauthor="Antenore"
+ subject="Isn't a full VM too much?"
+ date="2022-04-29T14:38:59Z"
+ content="""
+Why not a container?
+
+While the article is fascinating and useful, I find it overwhelming setting up all of this to just build a package.
+
+You can obtain the same level of security/isolation, with 1/100 of the effort.
+
+Am I wrong?
+"""]]

fix links
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index ed3705d7..1d7fe18e 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -812,6 +812,7 @@ complicated and considered out of scope of this tutorial.
 [vagrant rsync]: https://docs.vagrantup.com/v2/synced-folders/rsync.html
 """]]
 
+[Vagrant]: https://www.vagrantup.com/
 [other provisionning tools]: https://www.vagrantup.com/docs/provisioning/
 [Puppet]: https://www.vagrantup.com/docs/provisioning/puppet_apply.html
 [Ansible]: https://www.vagrantup.com/docs/provisioning/ansible.html
@@ -846,7 +847,7 @@ Another simple approach is to use plain [Qemu][]. We will need to use a
 special tool to create the virtual machine as debootstrap only creates
 a chroot, which virtual machines do not necessarily understand. 
 
-[Qemu][]: https://www.qemu.org/
+[Qemu]: https://www.qemu.org/
 
 [[!tip """
 With `sbuild-qemu`, above, you already have a qemu image, built with

major editorial review of the debian packaging guide
I did a full reread and corrections. Phew!
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 5b4042dc..ed3705d7 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -1,10 +1,13 @@
 [[!meta title="Quick Debian development guide"]]
 
-[[!toc levels=3]]
-
 [[!note "This guide is also available under the URL
 <https://deb.li/quickdev> and as a video presentation
-<https://www.youtube.com/watch?v=O83rIRRJysA>."]]
+<https://www.youtube.com/watch?v=O83rIRRJysA>. This is a living,
+changing document, although the video, obviously, isn't."]]
+
+[[!toc levels=3]]
+
+# Introduction
 
 This guide aims to kickstart people working on existing Debian
 packages, either to backport software, patch existing packages or work
@@ -35,6 +38,8 @@ may find useful when looking for more information.
 [Debian policy]: https://www.debian.org/doc/debian-policy/
 [developer's manual suite]: https://www.debian.org/doc/devel-manuals
 
+# Minimal packaging workflow
+
 This guide tries to take an opinionated approach to maintaining
 Debian packages. It doesn't try to cover all cases, doesn't try to
 teach you about [debhelper][], [cdbs][], [uscan][] or [make][]. It
@@ -75,12 +80,13 @@ comfortable working on.[^lazy]
     regardless of the version control used. Furthermore, some packages
     do not use version control at all!
 
-To get the source code on an arbitrary package, visit the
-[package tracker][].[^tracker] In this case, we look at the
-[Calibre package tracker page][] and find the download links for the
-release we're interested in. Since we are doing a backport, we use the
-`testing` download link. If you are looking for an antique package,
-you can also find download links on [archive.debian.net][].
+To get the source code of an arbitrary package, visit the [package
+tracker][].[^tracker] In this case, we look at the [Calibre package
+tracker page][] and find the download links for the release we're
+interested in. Since we are doing a backport, we use the `testing`
+download link. If you are looking for a package that is not in any
+distribution, or an antique package, you can also find download links on
+[snapshot.debian.org][] and [archive.debian.net][].
 
 <span/><div class="tip">It's also helpful to use [rmadison][], part of
 the [devscripts package][], to look at the various versions available
@@ -113,6 +119,7 @@ To get the Ubuntu results, I added the following line to my
 [devscripts package]: https://tracker.debian.org/devscripts
 [rmadison]: https://manpages.debian.org/rmadison
 [archive.debian.net]: https://archive.debian.net/
+[snapshot.debian.org]: https://snapshot.debian.org/
 
 What we are looking for is the [calibre_2.55.0+dfsg-1.dsc][] file, the
 "source description" file for the `2.55.0+dfsg-1` version that is
@@ -176,11 +183,11 @@ all the patches specific to Debian.
 Then dget downloads the files `.orig.tar.xz` and `.debian.tar.xz`
 files.
 
-[^dfsg]: Well, not exactly: in this case, it's a modification the
-upstream source code, prepared specifically to remove non-free
-software, hence the `+dfsg` suffix, which is an acronym for
-[Debian Free Software Guidelines][]. The `+dfsg` is simply a naming
-convention used to designated such modified tarballs.
+[^dfsg]: Well, not exactly: in this case, it's a modification of the
+    upstream source code, prepared specifically to remove non-free
+    software, hence the `+dfsg` suffix, which is an acronym for
+    [Debian Free Software Guidelines][]. The `+dfsg` is simply a
+    naming convention used to designate such modified tarballs.
 
 [Debian Free Software Guidelines]: https://www.debian.org/social_contract#guidelines
 
@@ -189,12 +196,13 @@ web of trust,[^openpgp] using [dscverify][]. The `.dsc` files includes
 checksums for the downloaded files, and those checksums are verified
 as well.
 
-Then the files are extracted using `dpkg-source -x`. Notice how `dget`
-is basically just a shortcut to commands you could all have ran by
-hand. This is something useful to keep in mind to understand how this
-process works.
+Then the files are extracted using `dpkg-source -x`. 
+
+Notice how `dget` is just a shortcut to commands you could all have
+run by hand.
 
 [dscverify]: https://manpages.debian.org/dscverify
+
 [^openpgp]: In my case, this works cleanly, but that is only because
     the key is known on my system. `dget` actually offloads that work
     to `dscverify` which looks into the official keyrings in the
@@ -212,14 +220,18 @@ process works.
 
 [debian-keyring package]: https://packages.debian.org/debian-keyring
 
-If the version control system the package uses is familiar to you,
-you *can* use [debcheckout][] to checkout the source directly. If you
-are comfortable with many revision control systems, this may be better
-for you in general. However, keep in mind that it does not ensure
-end-to-end cryptographic integrity like the previous procedure
-does. It *will* be useful, however, if you want to review the source
-code history of the package to figure out where things come from.
+If the version control system the package uses is familiar to you, you
+*can* use [debcheckout][] to checkout the source directly. However,
+keep in mind that it does not ensure end-to-end cryptographic
+integrity like the previous procedure does, and instead relies on
+HTTPS-level transport security. 
+
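+For example, for the package used as a running example in this guide,
+this clones whatever repository is declared in its `Vcs-*` control
+fields:
+
+    debcheckout calibre
+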
+It might be useful if you prefer to collaborate with GitLab merge
+requests over at [salsa.debian.org][], but be warned that not all
+maintainers watch their GitLab projects as closely as the bug tracking
+system.
 
+[salsa.debian.org]: https://salsa.debian.org/
 [debcheckout]: https://manpages.debian.org/debcheckout
 
 # Modifying the package
@@ -263,12 +275,12 @@ changelog and what bugs are being fixed. Here's the result:
      -- Antoine Beaupré <anarcat@debian.org>  Tue, 26 Apr 2016 16:49:56 -0400
 
 [^tilde]: The "tilde" character to indicate that this is a *lower*
-version than the version I am backporting from, so that when the users
-eventually upgrade to the next stable (`stretch`, in this case), they
-will actually upgrade to the real version in stretch, and not keep the
-backport lying around. There is a
-[detailed algorithm description of how version number are compared][],
-which you can test using `dpkg --compare-versions` if you are unsure.
+    version than the version I am backporting from, so that when the
+    users eventually upgrade to the next stable (`stretch`, in this
+    case), they will actually upgrade to the real version in stretch,
+    and not keep the backport lying around. There is a [detailed
+    algorithm description of how version number are compared][], which
+    you can test using `dpkg --compare-versions` if you are unsure.
 
 [detailed algorithm description of how version number are compared]: https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
 
@@ -281,7 +293,7 @@ Note that there are other options you can pass to `dch`. I often use:
 
 There are more described in the [dch][] manpage. The
 [managing packages section][] of the developer's reference is useful
-in crafting those specific packages.
+in crafting packages specific to your situation.
 
 [managing packages section]: https://www.debian.org/doc/manuals/developers-reference/pkgs.html
 [dch]: https://manpages.debian.org/dch
@@ -374,9 +386,10 @@ it. The generic command to build a Debian package is
 files should all show up in the parent directory.
 
 [^changes]: the `.changes` file is similar to the `.dsc` file, but
-    also covers the `.deb` file. So it's the `.dsc` file for the
-    binary package.
-[^debuild]: I also often use `debuild` instead of `dpkg-buildpackage`
+    also covers the `.deb` file. So it's kind of the `.dsc` file for
+    the binary package (except there's also a `.changes` file for
+    source-only uploads, so not really).
+[^debuild]: You can also use `debuild` instead of `dpkg-buildpackage`
     because it also runs [lintian][] and signs the binary package with
     [debsign][].
 
@@ -388,29 +401,31 @@ If you are building from a VCS (e.g. git) checkout, you will get a lot
 of garbage in your source package. To avoid this, you need to use a
 tool specifically crafted for your VCS. I use [git-buildpackage][] (or
 `gbp` in short) for that purpose, but other also use the simpler
-[git-pkg][]. I find that `gbp` has more error checking but it is more
-complicated and less intuitive if you actually know what you are
-doing, which wasn't my case when I started.[^gitpkg]
+[git-pkg][]. I find that `gbp` has more error checking and a better
+workflow, but there are many opinions on how to do this.[^gitpkg]
+There are also *many* other ways of packaging Debian packages in Git,
+including [dgit][] and [git-dpm][], so until Debian standardizes on
+*one* of those, this guide will remain git-agnostic.
 </div>
 
+[git-dpm]: https://tracker.debian.org/pkg/git-dpm
+[dgit]: https://tracker.debian.org/pkg/dgit
 [git-pkg]: https://manpages.debian.org/git-pkg
 [git-buildpackage]: https://manpages.debian.org/git-buildpackage
 [^gitpkg]: git-pkg actually only extracts a source package from your
     git tree, and nothing else. There are hooks to trigger builds and
     so on, but it's basically expected that you do that yourself, and
-    gitpkg is just there to clean things up for your. git-buildpackage
-    does way more stuff, which can be confusing for people more
-    familiar with the Debian toolchain.
+    gitpkg is just there to clean things up for you.
 
-In any case, there's a catch here. The catch is that you need all the
-build-dependencies for the above builds to succeed. You may not have
-all of those, so you can try to install them with:
+In any case, there's a catch here: you need all the build-dependencies
+for the above builds to succeed. You may not have all of those, so you
+can try to install them with:
 
     sudo mk-build-deps -i -r calibre
 
-But this installs a lot of cruft on your system! `mk-build-deps` makes
-a dummy package to wrap them all up together, so they are easy to
-uninstall.
+`mk-build-deps` makes a dummy package to wrap them all up together, so

(Diff truncated)
move more discussion to the blog post
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index 548c5496..b5cd6290 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -310,6 +310,34 @@ right place... right? See also [[services/hosting]].
 [Qemu]: http://qemu.org/
 [chroot]: https://manpages.debian.org/chroot
 
+## pbuilder vs sbuild
+
+I was previously using `pbuilder` and switched in 2017 to `sbuild`.
+[AskUbuntu.com has a good comparison between pbuilder and sbuild][]
+that shows they are pretty similar. The big advantage of sbuild is
+that it is the tool in use on the buildds and it's written in Perl
+instead of shell.
+
+My concerns about switching were POLA (I'm used to pbuilder), the fact
+that pbuilder runs as a separate user (works with sbuild as well now,
+if the `_apt` user is present), and setting up COW semantics in sbuild
+(can't just plug cowbuilder there, need to configure overlayfs or
+aufs, which was non-trivial in Debian jessie).
+
+Ubuntu folks, again, have [more][] [documentation][] there. Debian
+also has [extensive documentation][], especially about [how to
+configure overlays][].
+
+I was ultimately convinced by [stapelberg's post on the topic][] which
+shows how much simpler sbuild really is...
+
+[stapelberg's post on the topic]: https://people.debian.org/~stapelberg/2016/11/25/build-tools.html
+[how to configure overlays]: https://wiki.debian.org/sbuild#sbuild_overlays_in_tmpfs
+[extensive documentation]: https://wiki.debian.org/sbuild
+[documentation]: https://wiki.ubuntu.com/SimpleSbuild
+[more]: https://wiki.ubuntu.com/SecurityTeam/BuildEnvironment
+[AskUbuntu.com has a good comparison between pbuilder and sbuild]: http://askubuntu.com/questions/53014/why-use-sbuild-over-pbuilder
+
 # Who
 
 Thanks lavamind for the introduction to the `sbuild-qemu` package.
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 9535bd02..5b4042dc 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -624,29 +624,6 @@ you often need `-sa` to provide the source tarball with the upload),
 you should use `--debbuildopts -sa` in `sbuild`. For git-buildpackage,
 simply add `-sa` to the commandline.
 
-[[!note """
-I was previously using `pbuilder` and switched in 2017 to `sbuild`. [AskUbuntu.com has a good comparative between pbuilder and sbuild][]
-that shows they are pretty similar. The big advantage of sbuild is
-that it is the tool in use on the buildds and it's written in Perl
-instead of shell. My concerns about switching were POLA (I'm used to
-pbuilder), the fact that pbuilder runs as a separate user (works with
-sbuild as well now, if the `_apt` user is present), and setting up COW
-semantics in sbuild (can't just plug cowbuilder there, need to
-configure overlayfs or aufs, which is non-trivial in jessie with
-backports...). Ubuntu folks, again, have [more][] [documentation][]
-there. Debian also has [extensive documentation][], especially about
-[how to configure overlays][]. I was convinced by
-[stapelberg's post on the topic][] which shows how simpler sbuild
-really is...
-
-[stapelberg's post on the topic]: https://people.debian.org/~stapelberg/2016/11/25/build-tools.html
-[how to configure overlays]: https://wiki.debian.org/sbuild#sbuild_overlays_in_tmpfs
-[extensive documentation]: https://wiki.debian.org/sbuild
-[documentation]: https://wiki.ubuntu.com/SimpleSbuild
-[more]: https://wiki.ubuntu.com/SecurityTeam/BuildEnvironment
-[AskUbuntu.com has a good comparative between pbuilder and sbuild]: http://askubuntu.com/questions/53014/why-use-sbuild-over-pbuilder
-"""]]
-
 <a name="offloading-cowpoke-and-debomatic" />
 
 ## Build servers
@@ -974,4 +951,7 @@ duplicate of [this other guide][].
 [this other guide]: https://wiki.debian.org/BuildingTutorial
 [this guide]: https://wiki.debian.org/BuildingAPackage
 
+A [[blog post|blog/2022-04-27-sbuild-qemu]] goes into depth about the
+alternatives to `qemu` and `sbuild`.
+
 [[!tag debian-planet debian debian-lts blog python-planet software geek free]]

rip out more todo work of the tutorial, into the blog post
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index ecf7d6fb..548c5496 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -235,6 +235,43 @@ found [libguestfs][] to be useful to operate on virtual images in
 various ways. [Libvirt][] and [Vagrant][] are also useful wrappers on
 top of the above systems.
 
+There are particularly a lot of different tools which use Docker,
+Virtual machines or some sort of isolation stronger than chroot to
+build packages. Here are some of the alternatives I am aware of:
+
+ * [Whalebuilder][] - Docker builder
+ * [conbuilder][] - "container" builder
+ * [debspawn][] - systemd-nspawn builder
+ * [docker-buildpackage][] - Docker builder
+ * [qemubuilder][] - qemu builder
+ * [qemu-sbuild-utils][] - qemu + sbuild + autopkgtest
+
+Take, for example, [Whalebuilder][], which uses Docker to build
+packages instead of `pbuilder` or `sbuild`. Docker provides more
+isolation than a simple `chroot`: in `whalebuilder`, packages are
+built without network access and inside a virtualized
+environment. Keep in mind there are limitations to Docker's security
+and that `pbuilder` and `sbuild` *do* build under a different user
+which will limit the security issues with building untrusted
+packages.
+
+On the upside, some of those things are being fixed: `whalebuilder` is now
+an official Debian package ([[!debpkg whalebuilder]]) and has added
+the feature of [passing custom arguments to dpkg-buildpackage][].
+
+None of those solutions (except the `autopkgtest`/`qemu` backend) are
+implemented as a [sbuild plugin][], which would greatly reduce their
+complexity.
+
+[conbuilder]: https://salsa.debian.org/federico/conbuilder
+[debspawn]: https://github.com/lkorigin/debspawn
+[docker-buildpackage]: https://github.com/metux/docker-buildpackage
+[passing custom arguments to dpkg-buildpackage]: https://gitlab.com/uhoreg/whalebuilder/issues/4
+[qemubuilder]: https://wiki.debian.org/qemubuilder
+[sbuild plugin]: https://lists.debian.org/debian-devel/2018/08/msg00005.html
+[whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
+[qemu-sbuild-utils]: https://www.kvr.at/posts/qemu-sbuild-utils-01-sbuild-with-qemu/
+
 I was previously using [Qemu][] directly to run virtual machines, and
 had to create VMs by hand with various tools. This didn't work so well
 so I switched to using Vagrant as a de-facto standard to build
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 562f3eae..9535bd02 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -967,40 +967,6 @@ host your own Debian repository using [reprepro][] (Koumbit has some
 
 # Further work and remaining issues
 
-I am curious about other build environments which use Docker, Virtual
-machines or some sort of stronger isolation to build packages. Here
-are the alternatives I am aware of:
-
- * [Whalebuilder][] - Docker builder
- * [conbuilder][] - "container" builder
- * [debspawn][] - system-nspawn builder
- * [docker-buildpackage][] - Docker builder
- * [qemubuilder][] - qemu builder
- * [qemu-sbuild-utils][] - qemu + sbuild + autopkgtest
-
-Take, for example, [Whalebuilder][], which uses Docker to build
-packages instead of `pbuilder` or `sbuild`. Docker provides more
-isolation than a simple `chroot`: in `whalebuilder`, packages are
-built without network access and inside a virtualized
-environment. Keep in mind there are limitations to Docker's security
-and that `pbuilder` and `sbuild` *do* build under a different user
-which will limit the security issues with building untrusted
-packages. Furthermore, `whalebuilder` <del>is not currently packaged
-as an official Debian package</del> (it is now, see [[!debpkg
-whalebuilder]]) and lacks certain features, like [passing custom
-arguments to dpkg-buildpackage][] (update: fixed), so I don't feel it is quite ready
-yet. None of those solutions are implemented as a [sbuild plugin][],
-which would greatly reduce their complexity.
-
-[conbuilder]: https://salsa.debian.org/federico/conbuilder
-[debspawn]: https://github.com/lkorigin/debspawn
-[docker-buildpackage]: https://github.com/metux/docker-buildpackage
-[passing custom arguments to dpkg-buildpackage]: https://gitlab.com/uhoreg/whalebuilder/issues/4
-[qemubuilder]: https://wiki.debian.org/qemubuilder
-[sbuild plugin]: https://lists.debian.org/debian-devel/2018/08/msg00005.html
-[whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
-[qemu-sbuild-utils]: https://www.kvr.at/posts/qemu-sbuild-utils-01-sbuild-with-qemu/
-
 This guide should be integrated into the official documentation or the
 Debian wiki. It is eerily similar to [this guide][] which itself is a
 duplicate of [this other guide][].

fix tocs
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index a713bb79..ecf7d6fb 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -8,7 +8,7 @@ guide|software/debian-development]], I had a few pointers on how to
 configure sbuild with the normal `schroot` setup, but today I finished
 a [qemu](http://www.qemu.org/) based configuration.
 
-[[!toc]]
+[[!toc levels=3]]
 
 # Why
 
@@ -171,6 +171,8 @@ you feel like it.
 
 # Nitty-gritty details no one cares about
 
+## Fixing hang in sbuild cleanup
+
 I'm having a hard time making heads or tails of this, but please bear
 with me.
 
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 44cd1dda..562f3eae 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="Quick Debian development guide"]]
 
-[[!toc levels=2]]
+[[!toc levels=3]]
 
 [[!note "This guide is also available under the URL
 <https://deb.li/quickdev> and as a video presentation

reshuffle the quick debian devel guide
It was getting quite unwieldy, with
cowbuilder/pbuilder/sbuild/schroot/qemu instructions all over the
place. Now we clearly outline the schroot/qemu instructions
separately, and we've taken the VM disgression out into the blog post.
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index 3a8e2396..a713bb79 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -225,6 +225,52 @@ For some reason, before I added this line to my configuration:
 ... the "Cleanup" step would just completely hang. It was quite
 bizarre.
 
+## Disgression on the diversity of VM-like things
+
+There are a *lot* of different virtualization solutions one can use
+(e.g. [Xen][], [KVM][], [Docker][] or [Virtualbox][]). I have also
+found [libguestfs][] to be useful to operate on virtual images in
+various ways. [Libvirt][] and [Vagrant][] are also useful wrappers on
+top of the above systems.
+
+I was previously using [Qemu][] directly to run virtual machines, and
+had to create VMs by hand with various tools. This didn't work so well
+so I switched to using Vagrant as a de-facto standard to build
+development environment machines, but I'm returning to Qemu because it
+uses a similar backend as KVM and can be used to host longer-running
+virtual machines through libvirt.
+
+The great thing now is that `autopkgtest` has good support for `qemu`
+*and* `sbuild` has bridged the gap and can use it as a build
+backend. I originally had found those bugs in that setup, but *all* of
+them are now fixed:
+
+ * [#911977](https://bugs.debian.org/911977): sbuild: how do we correctly guess the VM name in autopkgtest?
+ * [#911979](https://bugs.debian.org/911979): sbuild: fails on chown in autopkgtest-qemu backend
+ * [#911963](https://bugs.debian.org/911963): autopkgtest qemu build fails with proxy_cmd: parameter not set
+ * [#911981](https://bugs.debian.org/911981): autopkgtest: qemu server warns about missing CPU features
+
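+As a rough illustration, a one-off build that selects the qemu backend
+entirely from the command line could look like this (the image path
+and package name are placeholders):
+
+    sbuild --chroot-mode=autopkgtest \
+      --autopkgtest-virt-server=qemu \
+      --autopkgtest-virt-server-opts="-- qemu /srv/sbuild/qemu/unstable-amd64.img" \
+      hello_2.10-2.dsc
+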
+So we have unification! It's possible to run your virtual machines
+*and* Debian builds using a single VM image storage backend, which is
+no small feat, in my humble opinion. See the [sbuild-qemu blog post
+for the announcement](https://www.kvr.at/posts/qemu-sbuild-utils-merged-into-sbuild/).
+
+Now I just need to figure out how to merge Vagrant, GNOME Boxes, and
+libvirt together, which should be a matter of placing images in the
+right place... right? See also [[services/hosting]].
+
+[Vagrant]: https://www.vagrantup.com/
+[Virtualbox]: https://en.wikipedia.org/wiki/Virtualbox
+[libguestfs]: https://en.wikipedia.org/wiki/Libguestfs
+[Libvirt]: https://en.wikipedia.org/wiki/Libvirt
+[Docker]: https://en.wikipedia.org/wiki/Docker_(software)
+[Xen]: https://en.wikipedia.org/wiki/Xen
+[HVM]: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
+[KVM]: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
+[QCOW]: https://en.wikipedia.org/wiki/Qcow
+[Qemu]: http://qemu.org/
+[chroot]: https://manpages.debian.org/chroot
+
 # Who
 
 Thanks lavamind for the introduction to the `sbuild-qemu` package.
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 8d5c7ce0..44cd1dda 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -433,6 +433,8 @@ a clean, temporary `chroot`. To create that `.dsc` file, you can use
 `dpkg-buildpackage -S` or simply call `sbuild` in the source directory,
 which will create it for you.
 
+### schroot instructions
+
 To use sbuild, you first need to configure an image:
 
     sudo sbuild-createchroot --include=eatmydata,gnupg unstable /srv/chroot/unstable-amd64-sbuild http://deb.debian.org/debian
@@ -458,9 +460,52 @@ This assumes that:
     this). to create a tarball image, use this:
     
         sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/unstable-amd64-sbuild.tar.gz unstable --chroot-prefix unstable-tar `mktemp -d` http://deb.debian.org/debian
+
+You can also use `qemu` instead; see below.
 """]]
 
-[[!note """
+The above will create chroots for all the main suites and two
+architectures, using [debootstrap][]. You may of course modify this to
+taste based on your requirements and available disk space. My build
+directories count for around 7GB (including ~3GB of cached `.deb`
+packages) and each chroot is between 500MB and 700MB.
+
+[debootstrap]: https://manpages.debian.org/debootstrap
+
+[[!tip """
+A few handy sbuild-related commands:
+
+ * `sbuild -c bookworm-amd64-sbuild` - build in the `bookworm` chroot even
+   though another suite is specified (e.g. `UNRELEASED`,
+   `bookworm-backports` or `bookworm-security`)
+
+ * `sbuild --build-dep-resolver=aptitude` - use another solver for
+   dependencies, required for backports, for example. See the manpage
+   for details of those solvers.
+
+ * `schroot -c bookworm-amd64-sbuild` - enter the `bookworm` chroot to make
+   tests; changes will be discarded
+
+ * `sbuild-shell bookworm` - enter the `bookworm` chroot to make
+   *permanent* changes, which will *not* be discarded
+
+ * `sbuild-destroychroot` - supposedly destroys schroots created by
+   sbuild for later rebuilding, but I have found that command to be
+   quite unreliable. Besides, all it does is:
+
+        rm -rf /srv/chroot/unstable-amd64-sbuild /etc/schroot/chroot.d/unstable-amd64-sbuild-*
+
+Also note that it is useful to add aliases to your `schroot`
+configuration files. This allows you, for example, to automatically
+build `bookworm-security` or `bookworm-backports` packages in the `bookworm`
+schroot. Just add this line to the relevant config in
+`/etc/schroot/chroot.d/`:
+
+    aliases=bookworm-security-amd64-sbuild,bookworm-backports-amd64-sbuild
+"""]]
+
+### Qemu configuration
+
 To use qemu, use this instead:
 
     sudo mkdir -p /srv/sbuild/qemu/
@@ -500,15 +545,51 @@ something like this:
 
 Also see a more in-depth discussion about this configuration in [[this
 blog post|blog/2022-04-27-sbuild-qemu]].
-"""]]
 
-The above will create chroots for all the main suites and two
-architectures, using [debootstrap][]. You may of course modify this to
-taste based on your requirements and available disk space. My build
-directories count for around 7GB (including ~3GB of cached `.deb`
-packages) and each chroot is between 500MB and 700MB.
+[[!tip """
+A few handy `qemu` related commands:
 
-[debootstrap]: https://manpages.debian.org/debootstrap
+ * enter the VM to make tests; changes will be discarded (thanks Nick
+   Brown for the `sbuild-qemu-boot` tip!):
+ 
+        sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
+
+   That program is shipped only with bookworm and later; an equivalent
+   command is:
+
+        qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+   The key argument here is `-snapshot`.
+
+ * enter the VM to make *permanent* changes, which will *not* be
+   discarded:
+
+        sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
+
+   Equivalent command:
+
+        sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+ * update the VM (thanks lavamind):
+ 
+        sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
+
+ * build in a specific VM regardless of the suite specified in the
+   changelog (e.g. `UNRELEASED`, `bookworm-backports`,
+   `bookworm-security`, etc):
+
+        sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   Note that you'd also need to pass `--autopkgtest-opts` if you want
+   `autopkgtest` to run in the correct VM as well:
+
+        sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   You might also need parameters like `--ram-size` if you customized
+   it above.
+"""]]
+
+### Building with sbuild
 
 Then I build packages in one of three ways.
 
@@ -543,71 +624,6 @@ you often need `-sa` to provide the source tarball with the upload),
 you should use `--debbuildopts -sa` in `sbuild`. For git-buildpackage,
 simply add `-sa` to the commandline.
 
-[[!tip """
-A few handy sbuild-related commands:
-
- * `sbuild -c wheezy-amd64-sbuild` - build in the `wheezy` chroot even
-   though another suite is specified (e.g. `UNRElEASED`,
-   `wheezy-backports` or `wheezy-security`)
-
- * `sbuild --build-dep-resolver=aptitude` - use another solver for
-   dependencies, required for backports, for example. see the manpage
-   for details of those solvers.
-
- * `schroot -c wheezy-amd64-sbuild` - enter the `wheezy` chroot to make
-   tests, changes will be discarded

(fichier de différences tronqué)
settext/atx
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 1af799b4..8d5c7ce0 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -62,8 +62,7 @@ diagram:
 [cdbs]: https://manpages.debian.org/cdbs
 [debhelper]: https://manpages.debian.org/debhelper
 
-Find the source
-===============
+# Find the source
 
 In the following, I take the example of building a backport of the
 Calibre package, which I [needed][]. It's a good example because it
@@ -223,8 +222,7 @@ code history of the package to figure out where things come from.
 
 [debcheckout]: https://manpages.debian.org/debcheckout
 
-Modifying the package
-=====================
+# Modifying the package
 
 At this point, we have a shiny source tree available in the
 `calibre-2.55.0+dfsg/` directory:
@@ -233,8 +231,7 @@ At this point, we have a shiny source tree available in the
 
 We can start looking around and make some changes.
 
-Changing version
-----------------
+## Changing version
 
 The first thing we want to make sure we do is to bump the version
 number so that we don't mistakenly build a new package with the *same*
@@ -291,8 +288,7 @@ in crafting those specific packages.
 [security uploads]: https://www.debian.org/doc/manuals/developers-reference/pkgs.html#bug-security
 [non-maintainer uploads]: https://www.debian.org/doc/manuals/developers-reference/pkgs.html#nmu
 
-Changing package metadata
--------------------------
+## Changing package metadata
 
 If I needed to modify dependencies, I would have edited
 `debian/control` directly. Other modifications to the Debian package
@@ -303,8 +299,7 @@ package is built, but a good starting point is
 
 [Debian policy §4: Source packages]: https://www.debian.org/doc/debian-policy/ch-source.html
 
-Modifying the source code
--------------------------
+## Modifying the source code
 
 If I needed to modify the source tree *outside* `debian/`, I can do
 the modifications directly, then use `dpkg-source --commit` to
@@ -316,8 +311,7 @@ template when creating a new patch.
 [patch tagging guidelines]: http://dep.debian.net/deps/dep3/
 [quilt]: https://manpages.debian.org/quilt
 
-Applying patches
-----------------
+## Applying patches
 
 If I already have a patch I want to apply to the source tree, then
 [quilt][] is even *more* important. The first step is to import the
@@ -364,8 +358,7 @@ often extract the patch from a Git source tree fetched with
 Again, it's useful to add metadata to the patch and follow the
 [patch tagging guidelines][].
 
-Building the package
-====================
+# Building the package
 
 Now that we are satisfied with our modified package, we need to build
 it. The generic command to build a Debian package is
@@ -428,8 +421,7 @@ environment.
 
 For this, we need more powerful tools.
 
-Building in a clean environment
--------------------------------
+## Building in a clean environment
 
 I am using [[!man sbuild]] to build packages in a dedicated clean
 build environment. This means I can build packages for arbitrary
@@ -641,8 +633,7 @@ really is...
 
 <a name="offloading-cowpoke-and-debomatic" />
 
-Build servers
--------------
+## Build servers
 
 Sometimes, your machine is too slow to build this stuff yourself. If
 you have a more powerful machine lying around, you can send a source
@@ -710,8 +701,7 @@ have to build the package locally to be able to compare the results...
 [debomatic]: http://debomatic.github.io/
 [cowpoke]: https://manpages.debian.org/cowpoke
 
-Testing packages
-================
+# Testing packages
 
 Some packages have a built-in test suite which you should make sure
 runs properly during the build. Sometimes, backporting that test suite
@@ -723,8 +713,7 @@ also be used to see if the package cleans up properly after itself.
 [autopkgtest]: http://anonscm.debian.org/cgit/autopkgtest/autopkgtest.git/plain/doc/README.package-tests.rst
 [DEP8]: http://dep.debian.net/deps/dep8/
 
-With autopkgtest
-----------------
+## With autopkgtest
 
 When a package has self-testing enabled, it will be run by [Debian CI](https://ci.debian.net/)
 at various times. While there can be build-time tests, CI runs more
@@ -969,8 +958,7 @@ different mountpoint. Otherwise changes in the filesystem affect the
 parent host, in which case you can just copy over the chroot.
 """]]
 
-Uploading packages
-==================
+# Uploading packages
 
 Uploading packages can be done on your own personal archive if you
 have a webserver, using the following `~/.dput.cf` configuration:
@@ -1007,8 +995,7 @@ host your own Debian repository using [reprepro][] (Koumbit has some
 [official Debian archives]: https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#upload
 [backports]: http://backports.debian.org/Contribute/
 
-Further work and remaining issues
-=================================
+# Further work and remaining issues
 
 I am curious about other build environments which use Docker, Virtual
 machines or some sort of stronger isolation to build packages. Here

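One hunk in that diff also touches the section on uploading to a
personal archive through `~/.dput.cf`, but the configuration itself
falls outside the context shown. A minimal sketch of what such a
stanza can look like (the target name, host name and paths here are
made up, not the ones from the actual page):

    # append a personal-archive target to dput's configuration
    cat >> ~/.dput.cf <<'EOF'
    [my-archive]
    fqdn = example.com
    method = scp
    incoming = /var/www/debian/incoming
    EOF

    # then upload a build result to it (the file name is just an example)
    dput my-archive ../calibre_*.changes
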
more tricks on autopkgtest VMs, thanks lazyweb!
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index a55e994b..3a8e2396 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -25,14 +25,16 @@ rely on qemu under the hood, certainly not chroots...
 
 I could also have decided to go with containers like LXC, LXD, Docker
 (with [conbuilder][], [whalebuilder][], [docker-buildpackage][]),
-systemd-nspawn (with [debspawn][]), or whatever: I didn't feel those
-offer the level of isolation that is provided by qemu.
+systemd-nspawn (with [debspawn][]), [unshare][] (with `schroot
+--chroot-mode=unshare`), or whatever: I didn't feel those offer the
+level of isolation that is provided by qemu.
 
 [conbuilder]: https://salsa.debian.org/federico/conbuilder
 [debspawn]: https://github.com/lkorigin/debspawn
 [docker-buildpackage]: https://github.com/metux/docker-buildpackage
 [qemubuilder]: https://wiki.debian.org/qemubuilder
 [whalebuilder]: https://www.uhoreg.ca/programming/debian/whalebuilder
+[unshare]: https://floss.social/@vagrantc/108207501382862868
 
 The main downside of this approach is that it is (obviously) slower
 than native builds. But on modern hardware, that cost should be
@@ -87,52 +89,85 @@ This configuration will:
     default)
  4. tell `autopkgtest` to use `qemu` for builds *and* for tests
 
-# Remaining work
+Note that the VM created by `sbuild-qemu-create` has an unlocked root
+account with an empty password.
 
-One thing I haven't quite figured out yet is the equivalent of those
-two `schroot`-specific commands from my [[quick Debian development
-guide|software/debian-development]]:
+## Other useful tasks
 
- * `sbuild -c unstable-amd64-sbuild` - build in the `unstable` chroot even
-   though another suite is specified (e.g. `UNRElEASED`,
-   `unstable-backports` or `unstable-security`)
+ * enter the VM to make tests; changes will be discarded (thanks Nick
+   Brown for the `sbuild-qemu-boot` tip!):
+ 
+        sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
 
- * `schroot -c unstable-amd64-sbuild` - enter the `unstable` chroot to make
-   tests, changes will be discarded
+   That program is only shipped with bookworm and later; an equivalent
+   command is:
 
- * `sbuild-shell unstable` - enter the `unstable` chroot to make
-   *permanent* changes, which will *not* be discarded
+        qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
 
-In other words: "just give me a shell in that VM". It seems to me
-`autopkgtest-virt-qemu` should have a magic flag that does that, but
-it doesn't look like that's a thing. When that program starts, it just
-says `ok` and sits there. When `autopkgtest` massages it just the
-right way, however, it will do this funky commandline:
+   The key argument here is `-snapshot`.
 
-    qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+ * enter the VM to make *permanent* changes, which will *not* be
+   discarded:
+
+        sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
+
+   Equivalent command:
+
+        sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+ * update the VM (thanks lavamind):
+ 
+        sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
+
+ * build in a specific VM regardless of the suite specified in the
+   changelog (e.g. `UNRELEASED`, `bookworm-backports`,
+   `bookworm-security`, etc):
+
+        sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   Note that you'd also need to pass `--autopkgtest-opts` if you want
+   `autopkgtest` to run in the correct VM as well:
 
-... which is a typical qemu commandline, I regret to announce. I
-managed to somehow boot a VM similar to the one `autopkgtest`
-provisions with this magic incantation:
+        sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
 
-    mkdir tmp
-    cd tmp
-    qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
-    mkdir shared
-    qemu-system-x86_64 -m 4096 -smp 2  -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:$PWD/monitor,server,nowait -serial unix:$PWD/ttyS0,server,nowait -serial unix:$PWD/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=$PWD/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=$PWD/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+   You might also need parameters like `--ram-size` if you customized
+   it above.
 
-That gives you a VM like `autopkgtest` which has those peculiarities:
+And yes, this is all quite complicated and could be streamlined a
+little, but that's what you get when you have years of legacy and just
+want to get stuff done. It seems to me `autopkgtest-virt-qemu` should
+have a magic flag that starts a shell for you, but it doesn't look
+like that's a thing. When that program starts, it just says `ok` and
+sits there.
 
- * the `shared` directory is, well, shared with the VM
+Maybe that's because the authors consider the above to be simple
+enough (see also [bug #911977](https://bugs.debian.org/911977) for a discussion of this problem).
+
+## Live access to a running test
+
+When `autopkgtest` starts a VM, it uses this funky `qemu` commandline:
+
+    qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
+
+... which is a [typical qemu commandline](https://lwn.net/Articles/872321/), I'm sorry to say. That
+gives us a VM with those settings (paths are relative to a temporary
+directory, `/tmp/autopkgtest-qemu.w1mlh54b/` in the above example):
+
+ * the `shared/` directory is, well, shared with the VM
 * port `10022` is forwarded to the VM's port `22`, presumably for SSH,
   but no SSH server is started by default
 * the `ttyS0` and `ttyS1` UNIX sockets are mapped to the first two
    serial ports (use `nc -U` to talk with those)
- * the `monitor` socket is a qemu control socket (see the [QEMU
-   monitor](https://people.redhat.com/pbonzini/qemu-test-doc/_build/html/topics/pcsys_005fmonitor.html) documentation)
+ * the `monitor` UNIX socket is a qemu control socket (see the [QEMU
+   monitor](https://people.redhat.com/pbonzini/qemu-test-doc/_build/html/topics/pcsys_005fmonitor.html) documentation, also `nc -U`)
+
+In other words, it's possible to access the VM with:
+
+    nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS0
 
-So I guess I could make a script out of this but for now this will
-have to be good enough.
+The `nc` socket interface is ... not great, but it works well
+enough. And you can probably fire up an SSHd to get a better shell if
+you feel like it.
 
 # Nitty-gritty details no one cares about
 
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 9de2a5dd..1af799b4 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -583,6 +583,39 @@ schroot. Just add this line to the relevant config in
     aliases=wheezy-security-amd64-sbuild,wheezy-backports-amd64-build
 """]]
 
+[[!tip """
+If you're using autopkgtest-qemu, the above is different and you
+should use those tips instead:
+
+ * enter the VM to make tests; changes will be discarded:
+ 
+        qemu-img create -f qcow2 -F qcow2 -b /srv/sbuild/qemu/unstable-amd64.img overlay.img
+        qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic overlay.img
+
+ * enter the VM to make *permanent* changes, which will *not* be
+   discarded:
+ 
+        sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
+
+ * update the VM:
+ 
+        sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
+
+ * build in a specific VM regardless of the suite specified in the
+   changelog (e.g. `UNRELEASED`, `bookworm-backports`,
+   `bookworm-security`, etc):
+
+        sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   Note that you'd also need to pass `--autopkgtest-opts` if you want
+   `autopkgtest` to run in the correct VM as well:
+
+        sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
+
+   You might also need parameters like `--ram-size` if you customized
+   it above.
+"""]]
+
 [[!note """
 I was previously using `pbuilder` and switched in 2017 to `sbuild`. [AskUbuntu.com has a good comparative between pbuilder and sbuild][]
 that shows they are pretty similar. The big advantage of sbuild is

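To tie the throwaway-VM commands from that diff together, here is a
tiny wrapper built from the qemu invocations quoted above; it is only
a sketch (the image path and RAM size are the same assumptions as in
the post, adjust to your setup):

    #!/bin/sh
    # boot a disposable copy of a sbuild-qemu image: -snapshot keeps all
    # writes in a temporary overlay, so changes are thrown away on shutdown
    set -eu
    IMAGE="${1:-/srv/sbuild/qemu/unstable-amd64.img}"
    exec qemu-system-x86_64 \
        -snapshot -enable-kvm \
        -object rng-random,filename=/dev/urandom,id=rng0 \
        -device virtio-rng-pci,rng=rng0,id=rng-device0 \
        -m 2048 -nographic \
        "$IMAGE"
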
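The "live access" part of that diff leaves you with the serial sockets
and the `10022` port forward, so a plausible way to turn those into a
proper shell is something like this (paths use the example temporary
directory above, and this assumes `openssh-server` is already
installed in the image):

    # attach to the first serial console socket created by qemu
    nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS0

    # log in as root (sbuild-qemu images ship an empty root password),
    # then, inside the VM, start an SSH daemon:
    #     systemctl start ssh

    # back on the host, use the 10022 -> 22 forward autopkgtest sets up
    # (you may need to allow root or empty-password logins in sshd_config,
    # or install an SSH key over the serial console first)
    ssh -p 10022 root@127.0.0.1
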
approve comment
diff --git a/blog/2022-04-27-sbuild-qemu/comment_1_71942d0d1465887d78e25e4f9ca91813._comment b/blog/2022-04-27-sbuild-qemu/comment_1_71942d0d1465887d78e25e4f9ca91813._comment
new file mode 100644
index 00000000..78723e2b
--- /dev/null
+++ b/blog/2022-04-27-sbuild-qemu/comment_1_71942d0d1465887d78e25e4f9ca91813._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="81.110.21.226"
+ claimedauthor="Nick Brown"
+ subject="sbuild-qemu-boot"
+ date="2022-04-28T11:11:01Z"
+ content="""
+I spotted that 'sbuild-qemu-boot' was [added in 0.83](https://salsa.debian.org/debian/sbuild/-/commit/2e426f2eac7a81771d3963a0737e1d8fa2b60a2e) that looks to provide console access to the vm, though I've not had a chance to experiment with it yet,  it might help with two of the task you mention in your \"Remaining work\" section.
+"""]]

merge notes about lsp between the two pages
diff --git a/blog/2022-03-20-20-years-emacs.md b/blog/2022-03-20-20-years-emacs.md
index f7bb2d10..37757d33 100644
--- a/blog/2022-03-20-20-years-emacs.md
+++ b/blog/2022-03-20-20-years-emacs.md
@@ -66,25 +66,17 @@ up.
    lsp-mode is uncool and I should really do eglot instead, and that
    doesn't help.
    
-   **UPDATE**: I finally got tired and switched to `lsp-mode`. The
-   main reason for choosing it over eglot is that it's in Debian (and
-   eglot is not). (Apparently, eglot has more chance of being
-   upstreamed, "when it's done", but I guess I'll cross that bridge
-   when I get there.) `lsp-mode` feels slower than `elpy` but I
-   haven't done *any* of the [performance tuning](https://emacs-lsp.github.io/lsp-mode/page/performance/) and this will
-   improve even more with native compilation (see below).
-   
-   I already had `lsp-mode` partially setup in Emacs so I only had to
-   do [this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
-   (because <kbd>s-l</kbd> or <kbd>mod</kbd> is used by my window
-   manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp) and
-   [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp).
+   **UPDATE**: I finally got tired and switched to `lsp-mode`. See
+   [[this post for details|blog/2022-04-27-lsp-in-debian]].
 
  * I am not using [projectile](https://projectile.mx/). It's on some of my numerous todo
    lists somewhere, surely. I suspect it's important to getting my
    projects organised, but I still live halfway between the terminal
    and Emacs, so it's not quite clear what I would gain.
 
+   **Update**: I *also* started using projectile, but I'm not sure I
+   like it.
+
  * I had to ask what [native compilation](https://www.emacswiki.org/emacs/GccEmacs) was or why it mattered
    the first time I heard of it. And when I saw it again in the
    article, I had to click through to remember.
diff --git a/blog/2022-04-27-lsp-in-debian.md b/blog/2022-04-27-lsp-in-debian.md
index 308eac3f..cef805cf 100644
--- a/blog/2022-04-27-lsp-in-debian.md
+++ b/blog/2022-04-27-lsp-in-debian.md
@@ -40,10 +40,21 @@ and this `.emacs` snippet:
       (define-key lsp-ui-mode-map [remap xref-find-definitions] #'lsp-ui-peek-find-definitions)
       (define-key lsp-ui-mode-map [remap xref-find-references] #'lsp-ui-peek-find-references))
 
-
 Note: this configuration might have changed since I wrote this, see
 [my init.el configuration for the most recent config](https://gitlab.com/anarcat/emacs-d/blob/master/init.el).
 
+The main reason for choosing `lsp-mode` over eglot is that it's in
+Debian (and eglot is not). (Apparently, eglot has more chance of being
+upstreamed, "when it's done", but I guess I'll cross that bridge when
+I get there.)
+   
+I already had `lsp-mode` partially setup in Emacs so I only had to do
+[this small tweak to switch](https://gitlab.com/anarcat/emacs-d/-/commit/753ac702b08850322e92c56c2bbcc9afc70d599f) and [change the prefix key](https://gitlab.com/anarcat/emacs-d/-/commit/68331e54bd43a28fc75b28efb4de7f491ab77b72)
+(because <kbd>s-l</kbd> or <kbd>mod</kbd> is used by my window
+manager). I also had to pin LSP packages to bookworm [here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/emacs.pp) so that
+it properly detects `pylsp` (the older version in Debian bullseye only
+supports `pyls`, not packaged in Debian).
+
 This won't do anything by itself: Emacs will need *something* to talk
 with to provide the magic. Those are called "servers" and are
 basically different programs, for each programming language, that
@@ -72,14 +83,16 @@ Server" in the description (which also found a few more `pyls` plugins,
 e.g. `black` support).
 
 Note that the Python packages, in particular, need to be upgraded to
-their bookworm releases to work properly. It seems like there's some
-interoperability problems there that I haven't quite figured out
-yet. See also my [Puppet configuration for LSP](https://gitlab.com/anarcat/puppet/-/blob/main/site-modules/profile/manifests/lsp.pp).
+their bookworm releases to work properly ([here](https://gitlab.com/anarcat/puppet/-/blob/976d911e7abfedd3e3d4dcae87912b351ab89a0b/site-modules/profile/manifests/lsp.pp)). It seems like
+there are some interoperability problems there that I haven't quite
+figured out yet. See also my [Puppet configuration for LSP](https://gitlab.com/anarcat/puppet/-/blob/main/site-modules/profile/manifests/lsp.pp).
 
 Finally, note that I have now completely switched away from [Elpy](https://elpy.readthedocs.io/)
-to pyls, and I'm quite happy with the results. It's slower, but it
-is much more powerful. I particularly like the "rename symbol"
-functionality, which ... mostly works.
+to pyls, and I'm quite happy with the results. `lsp-mode` feels slower
+than `elpy` but I haven't done *any* of the [performance tuning](https://emacs-lsp.github.io/lsp-mode/page/performance/)
+and this will improve even more with native compilation. And
+`lsp-mode` is much more powerful. I particularly like the "rename
+symbol" functionality, which ... mostly works.
 
 # Remaining work
 

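As a side note to that diff: the package search it alludes to (looking
for "Language Server" in package descriptions) can be done with apt
directly; the exact search string is an assumption, and package names
differ between bullseye and bookworm:

    # list Debian packages that describe themselves as language servers,
    # e.g. python3-pylsp and its plugins in bookworm
    apt search 'language server'
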
remove link(foo) test
This is a test to regenerate the index, which doesn't seem to update
correctly for the two new articles.
diff --git a/blog.mdwn b/blog.mdwn
index 36640dc4..4d4862b7 100644
--- a/blog.mdwn
+++ b/blog.mdwn
@@ -16,7 +16,6 @@
   or tagged(blog)
 )
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 show="3"
@@ -72,7 +71,6 @@ trail=yes
 )
 and creation_year(2022)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -88,7 +86,6 @@ quick=yes
 )
 and creation_year(2021)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -104,7 +101,6 @@ quick=yes
 )
 and creation_year(2020)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -120,7 +116,6 @@ quick=yes
 )
 and creation_year(2019)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -136,7 +131,6 @@ quick=yes
 )
 and creation_year(2018)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -152,7 +146,6 @@ quick=yes
 )
 and creation_year(2017)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -168,7 +161,6 @@ quick=yes
 )
 and creation_year(2016)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -184,7 +176,6 @@ quick=yes
 )
 and creation_year(2015)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -200,7 +191,6 @@ quick=yes
 )
 and creation_year(2014)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -216,7 +206,6 @@ quick=yes
 )
 and creation_year(2013)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -232,7 +221,6 @@ quick=yes
 )
 and creation_year(2012)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -248,7 +236,6 @@ quick=yes
 )
 and creation_year(2011)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -264,7 +251,6 @@ quick=yes
 )
 and creation_year(2010)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -280,7 +266,6 @@ quick=yes
 )
 and creation_year(2009)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -296,7 +281,6 @@ quick=yes
 )
 and creation_year(2008)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -312,7 +296,6 @@ quick=yes
 )
 and creation_year(2007)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -328,7 +311,6 @@ quick=yes
 )
 and creation_year(2006)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes
@@ -344,7 +326,6 @@ quick=yes
 )
 and creation_year(2005)
 and !blog/*/*
-and !link(foo)
 and !tagged(draft)
 and !tagged(redirection)"
 archive=yes

Archival link:

The above link creates a machine-readable RSS feed that can be used to easily archive new changes to the site. It is used by internal scripts to do sanity checks on new entries in the wiki.
