Recent changes to this wiki. Not to be confused with my history.

Complete source to the wiki is available on gitweb or by cloning this site.

approve comment
diff --git a/blog/2022-06-17-matrix-notes/comment_1_03aca7dcccca62c198039227f633d29c._comment b/blog/2022-06-17-matrix-notes/comment_1_03aca7dcccca62c198039227f633d29c._comment
new file mode 100644
index 00000000..927639f2
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_1_03aca7dcccca62c198039227f633d29c._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="77.221.43.197"
+ claimedauthor="grin"
+ subject="Slight correction"
+ date="2022-08-17T19:35:04Z"
+ content="""
+Bans are *not* related to *servers*: they are on *rooms*, and it does not matter which server you're on. Mjolnir puts bans in rooms, so if you're banned by the Matrix.Org people you don't go into _their_ rooms. 
+You can probably join most of the rooms (apart from Element, Inc. ones) even on the matrix.org server, if you choose it for whatever inexplicable reason.
+
+Servers can only disable your account or firewall you; everything else is up to the federation and the server(s) can do little about it.
+
+(If you're banned on a non-element inc. room you may try to contact the room admins not to use over-restrictive morg bans. Chances of success are low, though.)
+"""]]

Added a comment: Re: Mjolnir
diff --git a/blog/2022-06-17-matrix-notes/comment_10_486073933807bc12d3aa1ac669ebc1d6._comment b/blog/2022-06-17-matrix-notes/comment_10_486073933807bc12d3aa1ac669ebc1d6._comment
new file mode 100644
index 00000000..8657da73
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_10_486073933807bc12d3aa1ac669ebc1d6._comment
@@ -0,0 +1,26 @@
+[[!comment format=mdwn
+ username="im_austinhuang.me"
+ avatar="https://seccdn.libravatar.org/avatar/2ae380669c4a412a4b425ac4ca28b256"
+ subject="Re: Mjolnir"
+ date="2022-08-16T14:37:49Z"
+ content="""
+I did touch upon this in my previous comment.
+
+Slight correction: turns out it's possible to run Mjolnir without running a homeserver, and most features would still work.
+
+> And because not everyone can run their own instance
+
+People almost never use someone else's Mjolnir since they know that they'd have no control over how it behaves. That is, assuming they know Mjolnir in the first place...
+
+> (or, even harder, maintain their own block list)
+
+AFAIK Mjolnir requires you to write all manual (not automated) bans into at least one ban list, so in theory all Mjolnir users would be maintaining at least one ban list (regardless of its visibility).
+
+> everyone essentially runs the same block list
+
+Rooms that are associated with a public organization (e.g. Arch, Mozilla, etc.) that subscribe to Element Matrix Services generally use the `matrix-org-coc-bl` list (it is unclear whether they're obliged to do so, but in one case they do seem to agree with it; they also have their Mjolnir accounts on their domains), but otherwise its application is minimal (and voluntary). There also exist alternative ban lists [as intended](https://element.io/blog/moderation-needs-a-radical-change/), such as [Community Moderation Effort](https://matrix.to/#/#community-moderation-effort-bl:neko.dev), which is used for a few large privacy-related rooms.
+
+> I bet you tripped over some URL auto-ban
+
+I don't think `@abuse:matrix.org` uses the `WordList` protection (even so, it would not publish the ban to a ban list). IIUC there are certain keywords that ping online admins and bans are applied manually.
+"""]]

response
diff --git a/blog/2022-06-17-matrix-notes/comment_9_e5f93df79de8c8281067e75d7d6def31._comment b/blog/2022-06-17-matrix-notes/comment_9_e5f93df79de8c8281067e75d7d6def31._comment
new file mode 100644
index 00000000..d739f81f
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_9_e5f93df79de8c8281067e75d7d6def31._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""mjolnir bans"""
+ date="2022-08-16T13:26:18Z"
+ content="""
+It's true that mjolnir is a heck of a hammer: because Matrix's moderation systems are quite limited, they built that bot which does everything, everywhere. And because not everyone can run their own instance (or, even harder, maintain their own block list), everyone essentially runs the same block list and it makes it so situations like yours are really, really hard to get out of.
+
+Now as to the actual topic you are referring to, I am aware of the conspiracy theories surrounding the funding of the Matrix project. Currently working at Tor, I must say those kinds of discussions are rather pointless and fruitless. Yes, it does look like Matrix was originally funded by some Israeli corporation that might have to do with the Israeli secret services and/or military. But anyone using the Internet should probably be aware of its roots in ARPANET and the military. The concept of onion routing used in Tor was invented at the US Navy (basically), and Tor does get a significant amount of funding from the US government (still).
+
+That doesn't mean that Tor is backdoored. I think people overestimate the impact of funding on technical decisions. Or, more accurately, they misplace their concerns: Tor won't write a backdoor for the NSA because it's funded by the US state department. That doesn't even make sense. What does happen though is that funding directs Tor's work more towards anti-censorship (which is not really a problem in the US) instead of (say) resistance to surveillance from a global adversary (like the NSA, which is definitely a problem in the US).
+
+(It should also be noted that such a threat model has always been out of scope for Tor, and there is probably no anonymity system out there that effectively solves that problem anyways, but you know, facts, who cares about those...)
+
+So yeah, I haven't mentioned the Amdocs/Israel connection in this review because, quite frankly, I am really tired of reading about it. Every time it comes up, it's the same thing: in [the hackea review](https://hackea.org/notas/matrix.html), they literally say "[Disturbing history (or maybe just FUD)](https://hackea.org/notas/matrix.html#disturbing-history-or-maybe-just-fud)"... I mean why even mention it at this point?
+
+I am much more concerned about the [privacy violations](https://gitlab.com/libremonde-org/papers/research/privacy-matrix.org) that are fundamental to the current design. Those should be our primary critique of Matrix, not some weird references to some past funder...
+
+And yes, Matrix moderation kind of sucks. I bet you tripped over some URL auto-ban that triggered when you posted a link about that annoying topic. I bet that the Matrix people have had this hundreds of times, and debated it dozens of times, without getting anywhere. I would totally understand them just getting tired of this and automatically banning people based on that URL. It's just too bad it's that global... They certainly would need a more fine-grained moderation system.
+"""]]

approve comment
diff --git a/blog/2022-06-17-matrix-notes/comment_1_0831643ad9bd71bd92b12a1114c421d1._comment b/blog/2022-06-17-matrix-notes/comment_1_0831643ad9bd71bd92b12a1114c421d1._comment
new file mode 100644
index 00000000..599f9a99
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_1_0831643ad9bd71bd92b12a1114c421d1._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ ip="108.49.51.125"
+ claimedauthor="Kat"
+ url="https;//cambridgeport90.org"
+ subject="Agree With the Issues Concerning Reliance on Matrix.org's Code Of Conduct Violations"
+ date="2022-08-16T03:29:29Z"
+ content="""
+Earlier somebody mentioned how it's impossible to appeal a ban? I just got that issue last night; I made the mistake of posting an article which criticized the company that funded Matrix for the first three years of its life, and then furthered the opinion that the author might have had a problem with the fact that the company is from Israel (my comment was about the author, not myself), but Mjolnir still thought that I deserved to be banned for \"trolling, false information, spreading fud\". Either way, I'm going to have to rejoin via my alternate account on the Opensuse.org homeserver, and hope to God that it doesn't use the ban list held by Matrix.org, because chat.schncs.de does, causing me to lose nearly all of my presences in every room that aggregates bans from those lists.
+
+The other flaw with this is how admins of smaller services use the ban list from the larger ones; I was told by an admin that Mjolnir is not smart enough to be able to lift bans on different servers if the ban is still in effect for the server holding the ban list. I'm not whining or complaining here, for I'll obviously take my punishment like a woman, but having a nearly universal ban on nearly every server (I was just banned from even some bridges, for goodness' sake) does make things difficult without starting again. And if I start with another account, how then do I know that I won't get caught out and banned again arbitrarily by my alias alone? Not to mention the fact that my alias on Matrix has the same prefix as nearly everything else involving me on the internet? What could these sweeping ban hammers do for reputations in the future?
+"""]]

more articles
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index 96917763..361bb5ff 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -446,4 +446,10 @@ equity or reliability. It's time to try something new.
 TODO: spell check
 TODO: reread, edit
 
+TODO:
+https://www.ledevoir.com/societe/732681/remise-en-question-dans-le-monde-des-telecoms
+https://www.ledevoir.com/opinion/libre-opinion/732636/libre-opinion-a-quand-une-loi-antitrust-pour-les-telecoms
+https://ici.radio-canada.ca/nouvelle/1896908/panne-rogers-reseaux-prives-telecom-economie-canadienne
+https://ici.radio-canada.ca/nouvelle/1897295/telecoms-entraide-urgence-panne-rogers-champagne
+
 [[!tag draft]]

no, YOU are wrong
diff --git a/blog/2022-06-17-matrix-notes/comment_7_9d9c2db84d14dee8531280b1d671ba96._comment b/blog/2022-06-17-matrix-notes/comment_7_9d9c2db84d14dee8531280b1d671ba96._comment
new file mode 100644
index 00000000..6cd6fd46
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_7_9d9c2db84d14dee8531280b1d671ba96._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""talking in "you""""
+ date="2022-07-21T15:58:05Z"
+ content="""
+Wow, there's a lot in there for a "really short" comment. I'll just address this, which seems to be the gist of that last comment:
+
+> I could debate your "already taken over" statement about [...] but I see no point, apart from saying you're wrong (in my opinion, which is, you know, a thing everyone have)
+
+See, that's the thing with opinions. Everyone has them and can throw them around like a kid throws rocks in a park. But eventually, their parents show them that throwing rocks isn't great for the other kids and they stop.
+
+In this case, just telling me "I'm wrong" is not a productive opinion to express on this blog. I really hesitated in publishing your comment at all, because it really doesn't seem to bring anything new to the conversation (other than, you know, "you're wrong").
+
+I would have preferred recommendations on how to correct or rephrase what you feel is so unfair about my article. You can disagree with it all you like, but I'm looking for better ways to improve the article, not endless debates about peccadilloes.
+
+In the meantime, I won't be approving further "you're wrong" comments unless they bring constructive corrections to the article.
+
+I don't believe I have been "unjust" in my analysis or have made "false assumptions". If I did, I am happy to correct those, but I will point out that an 8000-word text is bound to be interpreted in many different ways, so assuming the intent of the writer is a mistake you should probably avoid.
+"""]]

matrix update: sharding is a thing
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index d2bdc1ab..2477b8a2 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -557,6 +557,15 @@ the meantime, I'll just point out this is a solution that's handled
 somewhat more gracefully in IRC, by having the possibility of
 delegating the authentication layer.
 
+Update: this was previously undocumented, but not only can you scale
+the frontend workers to multiple hosts, you can also *shard* the
+backend so that tables are distributed across multiple database
+hosts. This has been [documented only on 2022-07-11](https://github.com/matrix-org/synapse/pull/13212/), weeks after
+this article was written, so you will forgive me for that omission,
+hopefully. Obviously, this doesn't resolve the "high availability"
+scenario since you still have a central server for that data, but it
+might help resolve performance problems for very large instances.
+
 ## Delegations
 
 If you do not want to run a Matrix server yourself, it's possible to

dkim delivery changes
diff --git a/blog/2020-04-14-opendkim-debian.mdwn b/blog/2020-04-14-opendkim-debian.mdwn
index f3647c11..7299168a 100644
--- a/blog/2020-04-14-opendkim-debian.mdwn
+++ b/blog/2020-04-14-opendkim-debian.mdwn
@@ -106,4 +106,7 @@ using a wildcard key in the key table:
 This is a copy of a subset of my more complete [[email
 configuration|services/mail]].
 
+Update: Debian.org now provides an outgoing email submission service,
+see the [[following blog post|2022-07-20-debian-relay-no-dkim]].
+
 [[!tag tutorial debian-planet sysadmin debian email]]
diff --git a/blog/2022-07-20-debian-relay-no-dkim.md b/blog/2022-07-20-debian-relay-no-dkim.md
new file mode 100644
index 00000000..be8208db
--- /dev/null
+++ b/blog/2022-07-20-debian-relay-no-dkim.md
@@ -0,0 +1,73 @@
+[[!meta title="Relaying mail through debian.org"]]
+
+Back in 2020, I wrote [[this article about using DKIM to sign outgoing
+debian.org mail|blog/2020-04-14-opendkim-debian/]]. This worked well
+for me for a while: outgoing mail was signed with DKIM and somehow was
+delivered. Maybe. Who knows.
+
+But now [we have a relay server](https://lists.debian.org/debian-devel-announce/2022/07/msg00003.html) which makes this kind of moot. So
+I have changed my configuration to use that relay instead of sending
+email on my own. It seems more reliable for mail to come directly
+from a real `debian.org` machine, so I'm hoping this will give it a
+better reputation than my current setup.
+
+In general, you should follow the [DSA documentation which includes a
+Postfix configuration](https://dsa.debian.org/user/mail-submit/). In my case, it was basically this patch:
+
+    diff --git a/postfix/main.cf b/postfix/main.cf
+    index 7fe6dd9e..eabe714a 100644
+    --- a/postfix/main.cf
+    +++ b/postfix/main.cf
+    @@ -55,3 +55,4 @@ smtp_sasl_security_options =
+     smtp_sender_dependent_authentication = yes
+     sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
+     sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport
+    +smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
+    diff --git a/postfix/sender_relay b/postfix/sender_relay
+    index b486d687..997cce19 100644
+    --- /dev/null
+    +++ b/postfix/sender_relay
+    @@ -0,0 +1,2 @@
+    +# Per-sender provider; see also /etc/postfix/sasl_passwd.
+    +@debian.org    [mail-submit.debian.org]:submission
+    diff --git a/postfix/sender_transport b/postfix/sender_transport
+    index ca69bc7a..c506c1fc 100644
+    --- /dev/null
+    +++ b/postfix/sender_transport
+    @@ -0,0 +1,1 @@
+    +anarcat@debian.org     smtp:
+    diff --git a/postfix/tls_policy b/postfix/tls_policy
+    new file mode 100644
+    index 00000000..9347921a
+    --- /dev/null
+    +++ b/postfix/tls_policy
+    @@ -0,0 +1,1 @@
+    +submission.torproject.org:submission   verify ciphers=high
+
+This configuration differs from the one provided by DSA because I
+already had the following configured:
+
+    sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
+    smtp_sender_dependent_authentication = yes
+    smtp_sasl_auth_enable = yes
+    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
+    smtp_sasl_tls_security_options = noanonymous
+
+I also don't show the patch on `/etc/postfix/sasl_passwd` for obvious
+security reasons.
+
+I also had to set up a `tls_policy` map, because I couldn't use `dane`
+for all my remotes. You'll notice I also had to set up a
+`sender_transport` because I use a non-default `default_transport` as
+well.
+
+It also seems like you can keep the previous DKIM configuration in
+parallel with this one, as long as you don't double-sign outgoing
+mail. Since this configuration here is done on my mail *client*
+(i.e. not on the *server* where I am running OpenDKIM), I'm not
+double-signing so I left the DKIM configuration alone. But if I wanted
+to remove it, the magic command is:
+
+    echo "del dkimPubKey" | gpg --clearsign | mail changes@db.debian.org
+
+[[!tag tutorial debian-planet sysadmin debian email]]
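As an aside on the sender-dependent maps above: Postfix searches these tables by the full sender address first and then by the `@domain` form, which is why `sender_relay` can use a domain key while `sender_transport` uses a full address. A hypothetical sketch of that lookup order in plain Python (not Postfix code):

```python
# Illustrative model of Postfix's sender-dependent map lookups:
# try the full sender address, then fall back to "@domain".
# Map contents mirror the files shown in the patch above.
sender_relay = {"@debian.org": "[mail-submit.debian.org]:submission"}
sender_transport = {"anarcat@debian.org": "smtp:"}

def lookup(table: dict, sender: str):
    """Return the entry for a sender: exact address first, then @domain."""
    if sender in table:
        return table[sender]
    domain = "@" + sender.split("@", 1)[1]
    return table.get(domain)

print(lookup(sender_relay, "anarcat@debian.org"))      # [mail-submit.debian.org]:submission
print(lookup(sender_transport, "anarcat@debian.org"))  # smtp:
print(lookup(sender_relay, "someone@example.org"))     # None
```

In practice, remember that `hash:` tables also need to be recompiled with `postmap` after editing before Postfix will see the changes.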

approve comment
diff --git a/blog/2022-06-17-matrix-notes/comment_1_7e0304d6997c5f64e8f35d5771f6315d._comment b/blog/2022-06-17-matrix-notes/comment_1_7e0304d6997c5f64e8f35d5771f6315d._comment
new file mode 100644
index 00000000..27466b44
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_1_7e0304d6997c5f64e8f35d5771f6315d._comment
@@ -0,0 +1,31 @@
+[[!comment format=mdwn
+ ip="77.221.43.197"
+ claimedauthor="grin"
+ subject="I will be really short... still not enough"
+ date="2022-07-13T21:26:44Z"
+ content="""
+1) I **have** read the article, and also read again the sections I was responding to. What you see is that my phrasing was specific but not detailed enough - apologies; this commenting isn't really good for discussions. (You can reach me on matrix as @grin:grin.hu, I guess that isn't a shocking surprise for you.) [Just responding to your first sentence: I have seen you're an ircop; I was missing that you expressly stated that what you write is not a general, neutral review but a very specific ircop-centered (and skewed) one; \"why irc is the best, compared to matrix\"]
+
+2) You compare a 35-year-old (and very mature) technology to a betatesting and fairly young system, and you do it by comparing features that specifically require some maturity to be even comparable. You compare a reference non-performant betatest server to multiple production-ready irc servers. You think that's okay (so it is, really, since this is your article after all), and I think it's a mistake (and it is, probably, for a reader not realising that your view is strongly biased). I also have grown up on irc [mainly IrcNet], I am intimately familiar with it (or, rather, with what it was 15+ years ago), and I see its pros and cons. I am also way too familiar with matrix internals, both technically and politically, and it really has effects on my views.
+
+3) There are more than 200 open, stable matrix servers (and also some 2000+ used by fewer than a dozen users). Every time you say \"the matrix network equals to matrix.org, its server or its organisation\" you offend a lot of people (and not many of them have high opinions of the aforementioned bunch). You reject the view that one specific and pretty problematic server isn't The Network, and while this is your freedom (like picking an irc server with a mad admin and use it as an example for the whole network) it's probably very unfair to everyone involved, also makes any debate pointless. We cannot help whatever Element, Inc. does, and we do it differently. Whatever problem you prove by pointing at morg isn't necessarily valid elsewhere on the network.
+
+4) re 500ms is horrible: I laughed out loud. I still remember 42 SECONDS lag, that's, uh, 42000 ms. And continuous connection losses. Reconnects. Actually, plenty of disconnect/reconnect messages seeing that the network is unstable. And of course the netsplits. I am not sure you said that in all seriousness.
+(For the record: the minimal full [user-server1-server2-bot-server2-server1-user] round-trip is about 70ms [measured on Construct servers last year], anything longer is caused by dozens of factors I will not detail here, apart from Synapse.)
+
+5) re trolls: I am using both python antispam modules and database-backed scripts, and I have my fair share of trolls. But you do not seem to be very familiar with how it works, and I am not sure you really want to know, so let's just say guests cannot join random rooms.
+
+6) You do not seem to understand what \"homeserver\" means. It means that the server has one, or a few users, run by individuals at home. Show me a [large] irc network where the users run their own servers. (In fact, show me some irc networks with 2+ million users and full message history, and let us compare them.)
+
+7) I agree that \"don't see Matrix trying to address those problems\" of openness vs. idiots. I (and many others) have been told that from the beginning, and they ignored it. I am pretty sure it's not worse off than irc, and I can't decide whether it's better in this field.
+
+8) \"No one can take over IRC either\" - Freenode, friend. It has been taken over. If an organisation owns or controls all the servers, or controls the services they can do whatever they please to that network, and all the ircops can do is to create a new network, like Libera did. It is not Freenode - freenode was taken from them. They created a new network. 
+In contrast, morg/element can't do anything to even bring down the matrix network: they can switch off the morg server, or blank the git repo, but they cannot even affect the communication between servers and can't do squat about forks of the spec.
+
+I could debate your \"already taken over\" statement about email or XMPP but I see no point, apart from saying you're wrong (in my opinion, which is, you know, a thing everyone have). There are threats, as there've been for the whole internet, but not less or more than other continuous factors like dictatoric states with powers, local governments or large companies in general.
+
+Maybe the matrix network needs a fork, like irc had many times, to streamline protocol development, and honestly throw out a lot of unnecessary crap and bad hacks. Time will tell. Until then, it is __not__ irc, and I don't think they can simply be compared. You think \"message history\" is a feature and distributed servers are just an interesting addon, but these actually define matrix, which is an \"eventually consistent, distributed, directed acyclic graph of tagged and organised json messages\". Very different from irc, which is a store-and-forward text service with multiple interconnected servers.
+
+All that said - your review had many important and good insights and listed very real and important problems; my triggers were the unjust weights and false assumptions about the real reasons for those problems, and whether these are unsolvable (by being inherent to the system) or not.
+
+"""]]

i3: update from running config, switched back to py3status, mainly
diff --git a/software/desktop/i3.conf b/software/desktop/i3.conf
index 3dbd5d08..09bcc8c9 100644
--- a/software/desktop/i3.conf
+++ b/software/desktop/i3.conf
@@ -4,8 +4,6 @@
 
 # TODO:
 #
-# * use Xresources for colors (see below)
-#
 # * assign certain clients to certain windows, to bootstrap session
 #   properly: https://i3wm.org/docs/userguide.html#assign_workspace
 #
@@ -18,32 +16,17 @@
 #   configuration - way too geeky abstraction that is not really
 #   necessary...
 #
-# * initialize 9 empty layouts? wmii used to be uninitialized, maybe
-#   that's just a matter of habit...
-#
 # * i miss "alt-shift-enter" from xmonad, which would bring the
 #   focused window in the primary workspace/container. alt-shift-space
 #   (float this window) is a good alternative
 #
-# * volume keys can't be held to continuously adjust volume
-#
 # * sticky mode for writeroom-mode
 #
-# * consider xautolock + i3lock + nice image?
-#
 # * i3 doesn't respect $PATH set in ~/.shenv from .xsession? look for
 #   /home/anarcat/bin/ once this is fixed.
 #
 # * resize keybindings are weird: "up" should move the *boundary* up,
 #   not shrink/grow, because that's context-specific
-#
-# * use variables (or a theme?) for colors, that way they can be
-#   reused in (say) demenu or whatever
-#
-# * add nicer icons in the status bar, using font awesome, e.g.
-#   https://extendedreality.wordpress.com/2014/11/20/i3-tips-n-tricks/
-#   that's in taffybar now, but anyways...
-# 
 
 # left "windows" key
 set $mod Mod4
@@ -324,44 +307,51 @@ for_window [class="Pavucontrol"] floating enable
 
 #exec --no-startup-id /home/anarcat/bin/i3-load-layouts
 
-# those should be extracted from Xresources:
+# extract colors from Xresources:
 # https://i3wm.org/docs/userguide.html#xresources
+#
+# this actually doesn't work (!?) so we fallback to srcery colors
+# here, see: https://github.com/i3/i3/discussions/5051
+set_from_resource $black i3wm.color0 #1C1B19
+set_from_resource $bright_black i3wm.color8 #918175
 
-# cargo-culted from https://github.com/srcery-colors/srcery-gui/blob/master/i3wm/i3-config
-set $black #1C1B19
-set $bright_black #918175
-
-set $red #EF2F27
-set $bright_red #F75341
+set_from_resource $red i3wm.color1 #EF2F27
+set_from_resource $bright_red i3wm.color9 #F75341
 
-set $green #519F50
-set $bright_green #98BC37
+set_from_resource $green i3wm.color2 #519F50
+set_from_resource $bright_green i3wm.color10 #98BC37
 
-set $yellow #FBB829
-set $bright_yellow #FED06E
+set_from_resource $yellow i3wm.color3 #FBB829
+set_from_resource $bright_yellow i3wm.color11 #FED06E
 
-set $blue #2C78BF
-set $bright_blue #68A8E4
+set_from_resource $blue i3wm.color4 #2C78BF
+set_from_resource $bright_blue i3wm.color12 #68A8E4
 
-set $magenta #E02C6D
-set $bright_magenta #FF5C8F
+set_from_resource $magenta i3wm.color5 #E02C6D
+set_from_resource $bright_magenta i3wm.color13 #FF5C8F
 
-set $cyan #0AAEB3
-set $bright_cyan #53FDE9
+set_from_resource $cyan i3wm.color6 #0AAEB3
+set_from_resource $bright_cyan i3wm.color14 #53FDE9
 
-set $white #D0BFA1
-set $bright_white #FCE8C3
+set_from_resource $white i3wm.color7 #D0BFA1
+set_from_resource $bright_white i3wm.color15 #FCE8C3
 
-set $orange #D75F00
-set $bright_orange #FF8700
+# those are not defined in the srcery .Xresources as they fall outside
+# of the 16 colors set. but there are other 256 colors in xterm that
+# srcery just defaults to. we still give Xresources a chance to
+# override those. see also
+# https://github.com/srcery-colors/srcery-terminal/blob/ca6ac7406ced8b0b7c4e4b2188d3d5b48668f9c3/README.md
+# https://github.com/srcery-colors/srcery-terminal/issues/161
+set_from_resource $orange i3wm.color202 #D75F00
+set_from_resource $bright_orange i3wm.color208 #FF8700
 
-set $xgray1 #262626
-set $xgray2 #303030
-set $xgray3 #3A3A3A
-set $xgray4 #444444
-set $xgray5 #4E4E4E
+set_from_resource $xgray1 i3wm.color235 #262626
+set_from_resource $xgray2 i3wm.color236 #303030
+set_from_resource $xgray3 i3wm.color237 #3A3A3A
+set_from_resource $xgray4 i3wm.color238 #444444
+set_from_resource $xgray5 i3wm.color239 #4E4E4E
 
-set $hard_black #121212
+set_from_resource $hard_black i3wm.color233 #121212
 
 # Colors                border        background  text          indicator child_border
 client.focused          $bright_black $xgray3     $yellow       $yellow   $bright_black
@@ -376,9 +366,11 @@ client.background       $black
 bar {
         # pango-list can help finding fonts here
         font pango:FontAwesome, Fira mono 10
-        # window sound cpu memory load date
-        status_command bumblebee-status --iconset awesome-fonts
-        #status_command py3status
+        # a status bar with:
+        # window network cpu-health cpu load memory battery volume datetime date
+        status_command exec py3status
+        # i used bumblebee-status briefly but switched because it used too much CPU
+        # https://github.com/tobi-wan-kenobi/bumblebee-status/issues/891
         position top
         # obey Fitt's law, ie. reduce the empty space
         tray_padding 0

some contacts made
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index ba270bdc..96917763 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -445,9 +445,5 @@ equity or reliability. It's time to try something new.
 
 TODO: spell check
 TODO: reread, edit
-TODO: check with folks in stockholm
-TODO: ask mesh friends
-TODO: bounce by koumbit board
-TODO: pierre
 
 [[!tag draft]]

hydro has fiber already!
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index f3360048..ba270bdc 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -150,7 +150,13 @@ every house in the province, with high power lines running hundreds of
 kilometers very far north. The logistics of long distance maintenance
 are already partly solved by that institution, and I believe running
 fiber next to power lines shouldn't prove that much of a technical
-challenge.
+challenge. In fact, Hydro already [has fiber all over the
+province](https://www.lapresse.ca/affaires/2020-01-30/internet-en-region-hydro-quebec-prete-a-ceder-de-la-fibre-optique), but it is a private network, separate from the internet
+for security reasons (and that should probably remain so). But this
+only shows they already have the expertise to lay down fiber: we would
+"just" need to lay down a parallel network to the existing one.
+
+In that architecture, Hydro would be a "dark fiber" provider.
 
 ## International public internet
 

respond to FDN preemptively
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index 2a782387..f3360048 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -312,6 +312,14 @@ Canada, the [first site-blocking order](https://torrentfreak.com/canadas-supreme
 request of, you guessed it, major media companies including Rogers,
 and Bell Canada.
 
+Nevertheless, there are some strong arguments to be made against
+having a centralised, state-owned monopoly on internet service
+providers. [FDN makes a good point on this](https://blog.fdn.fr/?post/2011/06/21/Il-ne-faut-pas-nationaliser-les-FAI-%21). But this is not what I
+am suggesting: at the provincial level, the network would be purely
+physical, and regional entities (which could include private
+companies) would peer over that physical network, ensuring
+decentralization.
+
 ## What about Google and Facebook?
 
 This proposal does not yet propose to nationalise other service

another idea
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index 041196ed..2a782387 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -434,5 +434,6 @@ TODO: reread, edit
 TODO: check with folks in stockholm
 TODO: ask mesh friends
 TODO: bounce by koumbit board
+TODO: pierre
 
 [[!tag draft]]

response
diff --git a/blog/2022-06-17-matrix-notes/comment_4_92722f6ab2f040d45b23d9ab26289c69._comment b/blog/2022-06-17-matrix-notes/comment_4_92722f6ab2f040d45b23d9ab26289c69._comment
new file mode 100644
index 00000000..9a25cd37
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_4_92722f6ab2f040d45b23d9ab26289c69._comment
@@ -0,0 +1,102 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""response"""
+ date="2022-07-11T13:40:00Z"
+ content="""
+Well, there's a lot to unpack here, but let's start with this:
+
+> I believe the most important - and missing - statement is that it is az ircop point of view. 
+
+I don't understand how you can simultaneously come to the conclusion that I am an IRCop *and* claim that I did not state this in the article. Maybe you'd like to reread the second paragraph, which says:
+
+> I am the operator of an IRC network
+
+Now onto specifics.
+
+> It is just not proper to compare to a system with a fire-and-forget architecture.
+
+I think it's perfectly fair to compare the two, since it's two technologies I am considering. I will also note that IRCv3 has provisions for storing messages server-side as well, just not the way Matrix does it.
+
+> Also you mentioned that servers log even one-to-one rooms: that's right but there is no means to abuse it: all the messages are end-to-end encrypted by default
+
+Sure, but E2EE is not everything: it encrypts only the content, not the metadata, and there's lots of metadata flying around unencrypted. That is the whole point here.
+
+> 2.3 matrix.org policies, 2.5 url previews,
+> 3.1 mjolnir bot, 3.2 cmd tool,
+
+... which are basically:
+
+> Morg is just one server. [...] These are just tools.
+
+Yes, but those tools *matter*. Matrix.org is where people will first go and will greatly influence policy. It should set the right example. mjolnir is something that's *actively* proposed by Matrix people as a solution to moderation. Yes, it's just one tool, but there is no other. I think such tools should be built into clients more tightly, without people having to run their own moderation bot (and therefore home server). Anyone considering moderation problems in Matrix should first [read this admin's post](https://www.aminda.eu/blog/english/2021/12/05/matrix-community-abuse-security-by-obscurity.html) to see how this problem might be solved better.
+
+Just waving your hand saying "there's an API" does not help users in any useful way.
+
+> 3.4 Fundamental federation problems
+>
+> I don't think you're right here. [...]
+
+I don't understand the point you're trying to make here, at all. I'm trying to argue that moderating a federation is hard, including on IRC. You're responding that ... you have a guest mode? and you agree bridges are hell? Which part am I wrong about here specifically?
+
+> 3.5 hijacking room admins
+>
+> It is more secure than IRC, where a server admin or a service admin can hijack anything.
+
+That section of mine claims that Matrix admins can hijack users (and therefore rooms as well), how is that more secure than IRC? Maybe you can't hijack rooms directly, but that doesn't seem like a large enough improvement to me.
+
+> 4. Availability
+>
+> You are looking for a wrong solution. In Synapse, [...]
+>
+> 5. Performance
+>
+> This is all about Synapse. Synapse is a not-so-good server, to say it politely.[...] I've been used The Construct (an alternative server, not production ready) for many years[...]
+
+Okay let's pretend we're reviewing the web here. The main website is Gulag.com and it's the search engine. Its performance is mediocre and it disappears off the internet once in a while because it's not built for availability. Yet there's this other "alternative server" that is "not production ready" yet out there, somewhere. Will the web succeed?
+
+Maybe. Maybe everyone will start using that mysterious search engine instead of the Gulag. But that's not what is happening with Matrix right now. Every time this discussion comes up, people mention Conduit, dendrite, and what not: none of those servers have feature parity with Matrix yet, and they do not, as far as I know, provide a highly available setup. So maybe they fix some performance problems, but you lose features.
+
+Synapse is currently the flagship server: if it fails at something, Matrix fails. Claiming those problems are solved by some "not production ready" server does not help users or defeat my point about those limitations.
+
+> 5.3 Latency
+>
+> [500-1500ms latency]
+
+That sounds horrible.
+
+> 6.1 onboarding
+>
+> You're unfair. My server's onboarding is: 1. get on the login webpage, enter your data 2. click on the link in your verification email 3. you're in.
+
+No I'm not: the above is just your server (and not Matrix.org). And it's more onboarding than Signal, which I'm comparing against here.
+
+>  Also since my server has guests allowed you can even just pop in a room and start talking. Zero onboarding (but pretty inconvenient for everyone).
+
+Let me know when the trolls find out about that guest system of yours and how you handle it, then we'll talk about onboarding again.
+
+> 6.3 bots
+
+Did you *read* that section? It's one of those where I actually have basically only positive comments about Matrix...
+
+[...]
+
+> 7.2 Everyone's behind IRC
+
+You changed the title of this section, it is called "Where the federated services are in history".
+
+> IRC went the closed-circle, professionally-run server way, Matrix chose the homeserver way.
+
+It's really strange to make such a parallel because it seems to me that Matrix.org is way more engaged towards the business side of things (working with major European institutions, for example) than IRC ever was. I think that IRC servers are way closer to "home servers" (i.e. servers you run from your home) than Matrix is.
+
+> I do not believe that the cabal-operated networks are the best way, and you've seen what happened to freenode.
+
+What happened to freenode is actually quite interesting, and I'm kind of sorry I didn't talk more about it here. But what happened is somewhat of a good thing, in the end: they got rid of a critical flaw in their administrative structure and survived. It's now called libera.chat, and freenode is dead, long live libera.
+
+> Nobody can really take over matrix, xmpp or activitypub, since they are distributed systems, and not a tightly controlled, closed circle. Their weakness (from your POV: openness) is also their strength (resilience).
+
+I don't think it's black and white. Openness, in networks, brings weakness. It's a fundamental property of distributed systems, and I don't see Matrix trying to address those problems directly enough to believe they will fix the issues that are plaguing other protocols including email, IRC, XMPP, and Signal.
+
+Sure, no one can take over Matrix, XMPP, or ActivityPub. No one can take over IRC or email either, but someone is certainly trying anyway, and it seems they (Google and Microsoft) are succeeding with email. One could argue that XMPP has been taken over and scuttled by Google and Facebook already.
+
+Not taking those threats under consideration would be a serious mistake that Matrix should definitely avoid if it wants to survive in the long run.
+"""]]
diff --git a/blog/2022-06-17-matrix-notes/comment_5_7d9583c457af48188f3c0d3bae685184._comment b/blog/2022-06-17-matrix-notes/comment_5_7d9583c457af48188f3c0d3bae685184._comment
new file mode 100644
index 00000000..25a518a7
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_5_7d9583c457af48188f3c0d3bae685184._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""further comments"""
+ date="2022-07-11T14:07:32Z"
+ content="""
+Before anyone else piles on the comments here, I urge you to actually read the article before responding: don't just plunder the table of contents and assume what the content is. I approved the previous comment as an example, but I take a dim view of people who do not actually read the text they comment on, and will not approve further comments of that nature.
+
+Thank you in advance.
+"""]]

fix typo, thanks grin
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 2e082dcc..d2bdc1ab 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -868,7 +868,7 @@ something that *anyone* working on a federated system should study in
 detail, because they are *bound* to make the same mistakes if they are
 not familiar with it. The "short" version is:
 
- * 1988: Finish researcher publishes first IRC source code
+ * 1988: Finnish researcher publishes first IRC source code
  * 1989: 40 servers worldwide, mostly universities
  * 1990: EFnet ("eris-free network") fork which blocks the "open
    relay", named [Eris][] - followers of Eris form the A-net, which

approve comment
diff --git a/blog/2022-06-17-matrix-notes/comment_1_7ac99c665211b7bfdc60ec47f0d7011d._comment b/blog/2022-06-17-matrix-notes/comment_1_7ac99c665211b7bfdc60ec47f0d7011d._comment
new file mode 100644
index 00000000..6678c1e6
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_1_7ac99c665211b7bfdc60ec47f0d7011d._comment
@@ -0,0 +1,66 @@
+[[!comment format=mdwn
+ ip="77.221.43.197"
+ claimedauthor="grin"
+ subject="Lot of correct insights but from a very specific point of view"
+ date="2022-07-10T20:20:41Z"
+ content="""
+Thank you for this detailed writeup. 
+I believe the most important - and missing - statement is that it is an ircop point of view. Many of the mentioned \"problems\" are related to the huge difference between irc and matrix, which may seem to be small from an irc-user's point of view. I try not to get into details, just some samples.
+
+# comments
+
+## 2.1 data retention, 5.4 transport, 
+It is a fundamental function of Matrix to keep history, which is a fundamental shortcoming of IRC. There are a bazillion separate solutions on irc to keep history, in various forms, places, and processability. 
+Matrix history keeping results in a few attributes, like scrolling back, synchronising, out of order display of ordered messages etc. It is just not proper to compare it to a system with a fire-and-forget architecture.
+Also you mentioned that servers log even one-to-one rooms: that's right but there is no means to abuse it: all the messages are end-to-end encrypted by default.
+
+## 2.3 matrix.org policies, 2.5 url previews, 
+Morg is just _one_ server, and not a particularly loved one. There are 2000+ servers on the network and very few share settings and policies with it. Admins keep trying to create a system to help new users to find good servers. It's still in its infancy.
+
+## 3.1 mjolnir bot, 3.2 cmd tool, 
+These are just tools. It's like complaining about eggdrop in an irc server review. The protocol is pretty simple (REST), writing a bot is not really a hard task, only nobody has given one enough love yet.
+
+## 3.4 Fundamental federation problems
+I don't think you're right here. First, room moderation is not connected to homeserver policies at all (they are federated). If you mean room ops don't see IP addresses - indeed, similar to masked irc users.
+Second, my server has guest accounts and it works with more or less success, but indeed if it was abused I'd switch it off.
+However it is true that bridges are a hell to moderate, but the reason is that they use different restriction solutions which aren't simple to convert.
+
+## 3.5 hijacking room admins
+It is more secure than IRC, where a server admin or a service admin can hijack anything. 
+The signing may change, it's up for debate as of now, along with account decoupling from servers.
+
+## 4. Availability
+You are looking for the wrong solution. In Synapse (which is, remember, just one of the possible many servers) redundancy works at the call level (running parallel workers) and the db and redis can be easily replicated (my test replication was up in 10 minutes, it is much simpler today than 5 years ago). But you're comparing with a database-less irc.
+
+## 5. Performance
+This is all about Synapse. Synapse is a not-so-good server, to say it politely. It can now scale horizontally just okay (I use that) but it is still a resource hog vertically.
+I've been using The Construct (an alternative server, not production ready) for many years and it is extremely unlike Synapse, both in performance and resource use; it has also been designed to scale both ways. It's not ready for use (and maybe never will be) but it shows that your concerns are similar to those about the early irc servers, many decades ago. 
+
+## 5.3 Latency, 
+My internal latency is about 500ms, the gross round-trip is usually about 1500ms. First sync/join is braindead, mostly due to bad protocol design and it is being fixed AFAIR. Normal sync is not that bad now, especially since lazy sync was implemented. (a few secs)
+
+## 6.1 onboarding
+You're unfair. My server's onboarding is:
+1. get on the login webpage, enter your data
+2. click on the link in your verification email
+3. you're in.
+Also since my server has guests allowed you can even just pop in a room and start talking. Zero onboarding (but pretty inconvenient for everyone).
+
+## 6.2 clients, 6.3 bots
+You can use a bash script with a few curl calls as a client/bot; you cannot get simpler than that.
+Still, it's a young system, needs better clients.
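A minimal sketch of such a curl-based bot, assuming a hypothetical homeserver, room ID, and access token (all placeholders, not real credentials); the request is printed rather than executed so the sketch stays inspectable:

```shell
#!/bin/sh
# Hypothetical minimal Matrix "bot": send one m.text message to a room.
# HOMESERVER, ROOM_ID and TOKEN are placeholders, not real credentials.
HOMESERVER="https://matrix.example.org"
ROOM_ID='!roomid:example.org'
TOKEN="ACCESS_TOKEN"      # normally obtained via POST /_matrix/client/v3/login
TXNID=$(date +%s)         # transaction IDs must be unique per sent event

# Event body: an m.room.message event of msgtype m.text.
BODY='{"msgtype":"m.text","body":"hello from a shell bot"}'
URL="$HOMESERVER/_matrix/client/v3/rooms/$ROOM_ID/send/m.room.message/$TXNID"

# Print the curl invocation instead of running it:
echo curl -s -X PUT -H "Authorization: Bearer $TOKEN" -d "$BODY" "$URL"
```

The `PUT .../send/{eventType}/{txnId}` call is idempotent per transaction ID, which is why a timestamp is used above; a real bot would also need a `/sync` loop to read messages.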
+
+## 6.4 specification
+Exactly as you say, it's a mess.
+But you're not right in saying \"matrix people\", as MSCs are from everyone. There are useful ones and there are some which are not.
+
+## 7.1 History
+[\"Finnish\", not \"finish\" ;-)]
+But you see, 1988 was 34 years ago and matrix is a few years old.
+Also you see that irc has been split and broken 4-5 times in epic proportions and multiple times afterwards. Matrix can be split, and if it's run by Element, Inc. the way it is now it's bound to be split. There are a lot of people toying with the idea of redesigning the protocol, keeping the concept. We'll see.
+
+## 7.2 Everyone's behind IRC
+Nah, funny but.. not quite. IRC went the closed-circle, professionally-run server way, Matrix chose the homeserver way. I do not believe that the cabal-operated networks are the best way, and you've seen what happened to freenode. Nobody can really take over matrix, xmpp or activitypub, since they are _distributed_ systems, and not a tightly controlled, closed circle. Their weakness (from your POV: openness) is also their strength (resilience).
+
+
+"""]]

more tldr, another objection
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index 2759615c..041196ed 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -26,6 +26,12 @@ that should be reverted. The question is how. Opposition parties are
 quick to point out that we need more ISP diversity and competition,
 but I think that's missing the point.
 
+I believe the solution to the problem of large, private, centralised
+telcos and ISPs is to replace them with smaller, public, decentralised
+service providers. The only way to ensure that works is to make sure
+that public money ends up creating infrastructure controlled by the
+public, which means treating ISPs as a public utility.
+
 [[!toc levels=2]]
 
 # A modest proposal
@@ -37,10 +43,6 @@ managed is therefore inherently political, yet people don't seem to
 question the idea that only the market (ie. "competition") can solve
 this problem. I disagree.
 
-I believe the solution to the problem of large, private, centralised
-telcos and ISPs is to replace them with smaller, public, decentralised
-service providers.
-
 [10 years ago](https://anarc.at/blog/2012-06-20-pourquoi-un-monopole-sur-linternet-et-une-solution-reseau-quebec/) (in french), I suggested we, in Québec, should
 nationalize large telcos and internet service providers. Now i don't
 feel this is a realistic approach: even though most of those companies
@@ -310,6 +312,21 @@ Canada, the [first site-blocking order](https://torrentfreak.com/canadas-supreme
 request of, you guessed it, major media companies including Rogers,
 and Bell Canada.
 
+## What about Google and Facebook?
+
+This proposal does not yet propose to nationalise other service
+providers like Google and Facebook, although I do think those need to
+be broken up as well. I am not sure the state should get into the
+business of organising the web or providing content services however,
+but I will point out it already does do some of that through websites
+it publishes online. It should probably keep itself to this.
+
+(And I would also be ready to argue that Google and Facebook already
+act as extensions of the state: certainly if Facebook didn't exist,
+the CIA or the NSA would like to create it at this point. And Google
+has lucrative business with many nations, including the US Defense
+industry.)
+
 ## Isn't this like communism?
 
 Call it what you will. I prefer to call it not to be screwed over by

finish first draft, still lots to do
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
index 46bd4b8c..2759615c 100644
--- a/blog/rogers-monopoly.md
+++ b/blog/rogers-monopoly.md
@@ -1,17 +1,24 @@
-Rogers have seen a catastrophic failure of their infrastructure this
-week. This affected emergency services (as people couldn't call 911,
-despite this being a strong requirement of the CRTC), hospitals (which
-couldn't issue meds because of missing internet access), banks and
-payment systems, and regular users as well. The outage lasted almost a
-full day and its impact on the economy has yet to be measured, but it
-probably cost billions of dollars in wasted time and probably lead to
-more than one life-threatening situations.
+[Rogers](https://en.wikipedia.org/wiki/Rogers_Communications) had a catastrophic failure of their infrastructure this
+week. It affected emergency services (as people couldn't call 911),
+hospitals (which couldn't issue prescriptions because of missing
+internet access), banks and payment systems (as payment terminals
+simply stopped working), and regular users as well. The outage lasted
+almost a full day, and Rogers has yet to give a proper technical
+explanation of the outage. So far the only reliable account is from
+outside actors like [Cloudflare](https://blog.cloudflare.com/cloudflares-view-of-the-rogers-communications-outage-in-canada/), which seems to point at an internal
+BGP failure.
 
-TODO:
-https://ici.radio-canada.ca/nouvelle/1854653/bell-panne-television-internet-telephonie-froid
+Its impact on the economy has yet to be measured, but it probably cost
+billions of dollars in wasted time and probably led to more than one
+life-threatening situation. Apart from holding Rogers (criminally?)
+responsible for this, what should be done in the future to avoid such
+problems?
 
-Apart from holding Rogers (criminally?) responsible for this, what
-should be done in the future to avoid such problems?
+It's not the first time something like this has happened: it happened to
+[Bell Canada](https://ici.radio-canada.ca/nouvelle/1854653/bell-panne-television-internet-telephonie-froid) as well. The Rogers outage is also strangely similar
+to the [Facebook outage](https://blog.cloudflare.com/during-the-facebook-outage/) last year, but, to its credit, Facebook
+did post a [fairly detailed explanation only a day later](https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/). We are
+still waiting for an explanation from Rogers.
 
 It's obvious: the internet is designed to be decentralised, and having
 large companies like Rogers hold so much power is a crucial mistake
@@ -19,6 +26,10 @@ that should be reverted. The question is how. Opposition parties are
 quick to point out that we need more ISP diversity and competition,
 but I think that's missing the point.
 
+[[!toc levels=2]]
+
+# A modest proposal
+
 Global wireless services (like phone services) and home internet
 inevitably grow into monopolies. They are public utilities, just like
 water, power, railways, and roads. The question of how they should be
@@ -28,10 +39,293 @@ this problem. I disagree.
 
 I believe the solution to the problem of large, private, centralised
 telcos and ISPs is to replace them with smaller, public, decentralised
-service providers. There are many possible ways of accomplishing
-this. 
+service providers.
+
+[10 years ago](https://anarc.at/blog/2012-06-20-pourquoi-un-monopole-sur-linternet-et-une-solution-reseau-quebec/) (in French), I suggested we, in Québec, should
+nationalize large telcos and internet service providers. Now I don't
+feel this is a realistic approach: even though most of those companies
+have basically crap copper-based networks (at least for the last
+mile), a hodge-podge mix of POTS and cable and whatnot, they are
+valued into billions of dollars and would be prohibitive to buy
+out. (And I am also aware of how ridiculous the proposal to
+nationalize *anything* sounds these days, even though it's been a key
+economic driver in our history.)
+
+Back then, I called this idea "Réseau-Québec", a reference to the
+already nationalized power company, Hydro-Québec. (This idea,
+incidentally, made it into [the plan of one of the provincial political
+parties](https://quebecsolidaire.net/nouvelle/internet-quebec-solidaire-veut-couper-les-prix-et-garantir-lacces-partout-sur-le-territoire), at least in name.)
+
+Now, I think we should instead build our own, public internet. Start
+setting up municipal internet services, fiber to the home in all
+cities, progressively. Then interconnect cities with fiber, and build
+peering agreements with other providers. This also includes a bid on
+wireless spectrum to start competing with phone providers as well.
+
+And while that sounds really ambitious, I think it's possible to take
+this one step at a time.
+
+## Municipal broadband
+
+In many parts of the world, [municipal broadband](https://en.wikipedia.org/wiki/Municipal_broadband) is an elegant
+solution to this problem, with solutions ranging from [Stockholm's
+city-owned fiber network](https://stokab.se/en/stokab) (dark fiber, basically [layer 1](https://en.wikipedia.org/wiki/Physical_layer)) to
+[Utah's UTOPIA network](https://en.wikipedia.org/wiki/Utah_Telecommunication_Open_Infrastructure_Agency) ([fiber to the premises](https://en.wikipedia.org/wiki/Fiber_to_the_premises), [layer 2](https://en.wikipedia.org/wiki/Data_link_layer))
+and [municipal wireless networks](https://en.wikipedia.org/wiki/Municipal_wireless_network) like [Guifi.net](https://en.wikipedia.org/wiki/Guifi.net) which
+connects about 40,000 nodes in Catalonia.
+
+A good first step would be for cities to start providing broadband
+services to their residents, directly. Cities normally own sewage
+and water systems that interconnect most residences and therefore have
+direct physical access everywhere. In Montreal, in particular, there
+is an ongoing project to replace a *lot* of old lead-based plumbing
+(for obvious health reasons) which would give an excellent opportunity
+to lay down a wired fiber network across the city.
+
+This is obviously a wild guess, but I suspect this would be much less
+expensive than one would think. [Some people agree with me and quote
+this as low as 1000$ per household](https://news.ycombinator.com/item?id=32036208). There are about [800,000
+households in the city of Montreal](https://ville.montreal.qc.ca/pls/portal/docs/PAGE/MTL_STATS_FR/MEDIA/DOCUMENTS/PROFIL_MENAGES_LOGEMENTS_2016-VILLE_MONTR%C9AL.PDF), so we're talking about an 800
+million dollar investment here, to *connect every household with
+fiber*. And this is not an up-front cost: this can be built
+progressively, with expenses amortized over many years.
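Spelled out, that back-of-the-envelope estimate (using the optimistic 1000$/household figure quoted above):

```shell
#!/bin/sh
# Back-of-the-envelope: fiber-to-the-home cost for Montreal,
# using the (optimistic) 1000$-per-household figure quoted above.
households=800000
cost_per_household=1000
total=$((households * cost_per_household))
echo "$total"   # 800000000, i.e. 800 million dollars
```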
+
+Obviously, such a network should be built with redundancy in mind,
+with a redundant topology. I leave it as an open question whether we
+should adopt Stockholm's more minimalist approach or provide direct IP
+connectivity. I would tend to favor the latter, because then you can
+immediately start to offer the service to households and generate
+revenues to compensate for the capital expenditures.
+
+Given the ridiculous profit margins telcos currently have — 8 billion
+$CAD net income for BCE ([2019](https://en.wikipedia.org/wiki/BCE_Inc.)), 2 billion $CAD for Rogers
+([2020](https://investors.rogers.com/2020-annual-report/)) — I also believe this would actually turn into a
+profitable revenue stream for the city, the same way Hydro-Québec is
+more and more considered as a revenue stream for the state. (I
+personally believe that's actually wrong and we should treat those
+resources as human rights and not cash cows, but I digress. The
+point is: this is not a cost point, it's a revenue.)
+
+The other major challenge here is that the city will need competent
+engineers to drive this project forward. But this is not very
+different from the way other public utilities are run: we have
+electrical engineers at Hydro, sewer and water engineers at the city,
+this is just another profession. If anything, the computer science
+sector might be more at fault than the city here in its failure to
+[provide competent and accountable engineers to society](https://cacm.acm.org/magazines/2022/6/261171-the-software-industry-is-still-the-problem/fulltext)...
+
+## Provincial public internet
+
+As part of building a municipal network, the question of getting
+access to "the internet" will immediately come up. Naturally, this
+will first be solved by using already existing commercial providers to
+hook up residents to the rest of the global network.
+
+But eventually, networks will reach each other: Montreal will cross-connect
+with Laval, and then Trois-Rivières, then Québec City. This will
+require long haul fiber runs, but those links are not actually that
+expensive, and *many* of those already exist as a public resource in
+the form of the [RISQ](https://en.wikipedia.org/wiki/R%C3%A9seau_d%27informations_scientifiques_du_Qu%C3%A9bec), which cross-connects universities and
+colleges across the province. Obviously, the RISQ network is not big
+enough to cover the needs of the entire province right now, but it
+could probably be upgraded progressively as the demand grows.
+
+There are two crucial mistakes to avoid at this point. First, the
+network needs to remain decentralised. Long haul links should be IP
+links with BGP sessions, and each city (or [MRC](https://en.wikipedia.org/wiki/Regional_county_municipality)) should have its
+own independent network, to avoid Rogers-class catastrophic failures.
+
+Second, skill needs to remain in-house: RISQ has already made that
+mistake, to a certain extent, by selling out its neutral datacenter
+(the [QIX](https://en.wikipedia.org/wiki/Montreal_Internet_Exchange)), and by not expanding faster into commercial
+markets. Tellingly, [MetroOptic](https://metrooptic.com/en), probably the largest commercial
+dark fiber provider in the province, now operates the QIX, the second
+largest "public" internet exchange in Canada.
+
+Still, we have a lot of infrastructure we can leverage here. If RISQ
+cannot be up to the task, Hydro-Québec has power lines running into
+every house in the province, with high power lines running hundreds of
+kilometers very far north. The logistics of long distance maintenance
+are already partly solved by that institution, and I believe running
+fiber next to power lines shouldn't prove that much of a technical
+challenge.
+
+## International public internet
+
+It should also be noted that none of the above solves the problem for
+the entire population of Québec, which is notoriously dispersed, with
+an area three times the size of France, but with only an eighth of its
+population (8 million vs 67). More specifically, Québec is a European
+colony that was violently stolen from native people who have lived
+here for generations. Many of those people now live in reservations,
+sometimes [far from urban centers](https://en.wikipedia.org/wiki/Obedjiwan) (but definitely [not
+always](https://en.wikipedia.org/wiki/Kanesatake)). So the idea of leveraging the Hydro-Québec infrastructure
+doesn't always work to solve this.
+
+Here, I think we step into another problem space which is
+international connectivity. (How else should we consider those
+communities than as peer nations, anyway?) Québec has basically zero
+international connectivity. Even in Montréal, which likes to style
+itself a major player in gaming, AI, and technology, most peering goes
+through either Toronto or New York. Depending on your politics,
+basically any long haul in Québec is international.
+
+So that's clearly a problem that should be fixed, and I believe it's
+one that should be fixed *regardless* of the other problems stated in
+this article. We should have international peering landing in Québec
+(and Canada, for that matter). Looking at the [submarine cable
+map](https://www.submarinecablemap.com/), we see very few international links actually landing in
+Canada. There is the [Greenland connect](https://www.submarinecablemap.com/submarine-cable/greenland-connect) which connects
+Newfoundland to Iceland through Greenland. There's the [EXA](https://www.submarinecablemap.com/submarine-cable/exa-north-and-south) which
+lands in Ireland, the UK and the US, and Google has the [Topaz
+link](https://www.submarinecablemap.com/submarine-cable/topaz) on the west coast. That's about it, and none of those land
+anywhere near any major urban center in Québec.
+

(Diff truncated)
found a hack for xterm and one of the reasons i switched back
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index 12eea0b9..891fed70 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -45,7 +45,12 @@ In the previous article, I touched on those projects:
 [Alacritty]: https://github.com/jwilm/alacritty
 
 At this point I'm still using urxvt, bizarrely. I briefly played
-around with Konsole and xterm, but somehow reverted back to it.
+around with Konsole and xterm, but somehow reverted back to it. When I
+started drafting that article, I switched to xterm as my default
+`x-terminal-emulator` "alternative" in my Debian system and quickly
+noticed why I had stopped using it: I miss my clickable links.
+
+TODO: talk about the xterm clickable link hack
 
 I would really, really like to like Alacritty, but it's still not
 packaged in Debian, and they [haven't fully addressed the latency

start draft about public internetz
diff --git a/blog/rogers-monopoly.md b/blog/rogers-monopoly.md
new file mode 100644
index 00000000..46bd4b8c
--- /dev/null
+++ b/blog/rogers-monopoly.md
@@ -0,0 +1,117 @@
+Rogers have seen a catastrophic failure of their infrastructure this
+week. This affected emergency services (as people couldn't call 911,
+despite this being a strong requirement of the CRTC), hospitals (which
+couldn't issue meds because of missing internet access), banks and
+payment systems, and regular users as well. The outage lasted almost a
+full day and its impact on the economy has yet to be measured, but it
+probably cost billions of dollars in wasted time and probably lead to
+more than one life-threatening situations.
+
+TODO:
+https://ici.radio-canada.ca/nouvelle/1854653/bell-panne-television-internet-telephonie-froid
+
+Apart from holding Rogers (criminally?) responsible for this, what
+should be done in the future to avoid such problems?
+
+It's obvious: the internet is designed to be decentralised, and having
+large companies like Rogers hold so much power is a crucial mistake
+that should be reverted. The question is how. Opposition parties are
+quick to point out that we need more ISP diversity and competition,
+but I think that's missing the point.
+
+Global wireless services (like phone services) and home internet
+inevitably grow into monopolies. They are public utilities, just like
+water, power, railways, and roads. The question of how they should be
+managed is therefore inherently political, yet people don't seem to
+question the idea that only the market (ie. "competition") can solve
+this problem. I disagree.
+
+I believe the solution to the problem of large, private, centralised
+telcos and ISPs is to replace them with smaller, public, decentralised
+service providers. There are many possible ways of accomplishing
+this. 
+
+# Legal
+
+In 1984 (of all years), the US Department of Justice finally
+[broke up AT&T](https://en.wikipedia.org/wiki/Breakup_of_the_Bell_System) into about half a dozen corporations, after a 10-year
+legal battle. Yet a few decades later, we're back to only *three*
+large providers doing essentially what AT&T was doing back then, and
+those are basically regional monopolies: AT&T, Verizon, and Lumen. So
+the legal approach really didn't work that well, especially
+considering the political landscape changed in the US, and the FCC
+seems [perfectly happy to let those major mergers continue](https://techcrunch.com/2019/11/05/fcc-approves-t-mobile-sprint-merger-despite-serious-concerns/).
+
+In Canada, we never even pretended we would solve this problem at all:
+Bell Canada (a literal "cousin" of AT&T) didn't quite reach the level
+of power AT&T did, but we're basically in the same situation. We have
+either a regional monopoly (e.g. Videotron for cable in Québec) or an
+oligopoly (Bell, Rogers, and Telus controlling more than 90% of the
+market). Telus does have *one* competitor in the west of Canada,
+[Shaw](https://en.wikipedia.org/wiki/Shaw_Communications), but Rogers has been [trying to buy it out](https://en.wikipedia.org/wiki/Shaw_Communications#Acquisition_by_Rogers). That merger
+seems to have been blocked by the [Competition Bureau](https://en.wikipedia.org/wiki/Competition_Bureau) for now, but
+it didn't stop other recent mergers like [Bell's acquisition of one of
+its main competitors in Québec, eBox](https://www.ledevoir.com/economie/685926/telecommunications-exode-des-clients-d-ebox-apres-son-acquisition-par-bell).
+
+In effect, it doesn't seem like states have the will to break up those
+monopolies, with, for example, the CRTC currently arguing that this
+state of affairs is fine, because there is competition.
+
+Regulation also doesn't seem capable of ensuring those extremely
+profitable corporations (8 billion $CAD in 2019 for BCE) provide us
+with decent pricing, which makes Canada one of the most expensive
+countries on Earth for internet access. The recent failure of the CRTC
+to properly protect smaller providers has even led to [price hikes](https://www.lapresse.ca/affaires/entreprises/2022-01-13/internet/des-hausses-minimales-des-prix-en-attendant-ottawa.php)
+for those. Meanwhile the oligopoly is actually [agreeing on price
+hikes](https://plus.lapresse.ca/screens/a83efcd5-196b-4868-96a8-1cdb73d5eb9c__7C___0.html?utm_content=ulink&utm_source=lpp&utm_medium=referral&utm_campaign=internal+share) in what seems to be becoming a real cartel, complete with
+price fixing.
+
+TODO: move the 911 thing here, see #debian-quebec links
+
+# Subsidies
+
+Those absurd prices do not actually mean everyone gets high speed
+internet at home. Large swathes of the Québec countryside don't get
+broadband at all, and haven't had it for ages, despite the provincial
+and federal governments pouring hundreds of millions of dollars in
+subsidies to stimulate that development. Recently, those companies
+have received half a billion in subsidies to give broadband access to
+all of Québec, an effort that is supposed to be completed in September
+2022, obviously right on time for the next election. 
+
+But Québec is a big area to cover, and you can guess what happens
+next: the telcos threw up their hands and said some areas just can't be
+connected. The story then takes the obvious twist of [giving more
+money out to the private sector, now subsidizing Musk's Starlink
+system](https://ici.radio-canada.ca/nouvelle/1881709/internet-haute-vitesse-branchement-regions-quebec-couts) to connect those remote areas.
+
+TODO:
+https://www.cbc.ca/news/canada/ottawa/bell-ceo-cottage-pemichangan-lake-1.5925882?cmp=rss
+
+We will have spent billions of dollars for the private sector to build
+us a private internet, over decades, without any assurance of quality,
+equity or reliability.
+
+# Public ownership
+
+[10 years ago](https://anarc.at/blog/2012-06-20-pourquoi-un-monopole-sur-linternet-et-une-solution-reseau-quebec/), I suggested we, in Québec, should nationalize this
+infrastructure. Now I don't feel this is a realistic approach: even
+though most of those companies have basically crap copper-based
+networks (at least for the last mile), a hodge-podge mix of POTS and
+cable and whatnot, they are valued into billions of dollars and would
+be prohibitive to buy.
+
+TODO:
+https://quebecsolidaire.net/nouvelle/internet-quebec-solidaire-veut-couper-les-prix-et-garantir-lacces-partout-sur-le-territoire
+
+We should instead build our own, public internet. Start by setting up
+municipal internet services, then eventually bring fiber to every home
+we currently bring power to. We have a public electricity utility;
+let's make it bring internet everywhere as well.
+
+TODO: https://news.ycombinator.com/item?id=32036208 
+
+TODO: guesstimates on costs, talk about Îles de la Madeleine, rez,
+north coast, Gaspésie, etc
+
+[[!tag draft]]

fix year on réseau-québec article
diff --git a/blog.mdwn b/blog.mdwn
index 4d4862b7..19ad4aee 100644
--- a/blog.mdwn
+++ b/blog.mdwn
@@ -46,7 +46,7 @@ trail=yes
  * [Internet in Cuba](https://anarc.at/blog/2016-01-24-internet-in-cuba/) (2016)
  * [The Downloadable Internet](https://anarc.at/blog/2016-01-24-internet-in-cuba/) (2016)
  * [Comment l'oligopole a pris le contrôle de l'internet et une
-   solution: Réseau-Québec](https://anarc.at/blog/2012-06-20-pourquoi-un-monopole-sur-linternet-et-une-solution-reseau-quebec/) (2015, français)
+   solution: Réseau-Québec](https://anarc.at/blog/2012-06-20-pourquoi-un-monopole-sur-linternet-et-une-solution-reseau-quebec/) (2012, français)
  * [La censure sur internet: notre antidote](https://anarc.at/blog/2012-03-03-la-censure-sur-internet-notre-antidote/) (2012, français)
  * [Comment contourner la censure en Tunisie](https://anarc.at/blog/2011-01-05-comment-contourner-la-censure-en-tunisie/) (2011, français)
  * [Comment la Tunisie censure l'internet](https://anarc.at/blog/2005-11-23-comment-la-tunisie-censure-linternet/) (2005, français)

another emulator
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index 68a6b8b3..12eea0b9 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -89,6 +89,7 @@ Those are the projects I am considering.
  * [foot](https://codeberg.org/dnkl/foot) - Wayland only, daemon-mode, sixel images, scrollback
    search, true color, font resize, URLs not clickable, but
    keyboard-driven selection, proper clipboard support
+ * [havoc](https://github.com/ii8/havoc)
  * [kitty](https://github.com/kovidgoyal/kitty)
  * [sakura](https://www.pleyades.net/david/projects/sakura) - libvte, wayland support, tabs, no menu bar, original
    libvte gangster, dynamic font size

add hovercraft, link to my tor presentations
diff --git a/blog/2020-09-30-presentation-tools.mdwn b/blog/2020-09-30-presentation-tools.mdwn
index a59ab6d1..d560218b 100644
--- a/blog/2020-09-30-presentation-tools.mdwn
+++ b/blog/2020-09-30-presentation-tools.mdwn
@@ -38,6 +38,7 @@ Some of my presentations are available [in my GitLab.com account](https://gitlab
  * [Ethics in computing](https://gitlab.com/anarcat/presentation-ethics), based on [this blog post](https://anarc.at/blog/2018-05-26-kubecon-rant/)
  * [Presentation about the Maple Spring, at OHM2013](https://gitlab.com/anarcat/ohm2013/)
  * [First presentation at Tor](https://gitlab.torproject.org/anarcat/onion-tex/-/tree/main/src/pandoc/anarcat-demo-2020)
+ * [Tor presentations](https://gitlab.torproject.org/anarcat/presentations)
 
 See also my [list of talks and presentations](/communication) which I can't seem to
 keep up to date.
@@ -59,6 +60,12 @@ keep up to date.
    code samples, auto-reload
  * [Home page](https://github.com/ionelmc/python-darkslide), [demo](https://ionelmc.github.io/python-darkslide/#slide:1)
 
+## Hovercraft
+
+ * reStructuredText, impress.js, written in Python
+ * presenter notes, HTML output, needs Javascript
+ * [Source code](https://github.com/regebro/hovercraft), [demo](https://regebro.github.io/hovercraft/)
+
 ## Impress.js
 
  * Javascript
@@ -118,6 +125,7 @@ supports annotations (in a separate screen), caching, timing, and
 embedded videos. [dspdfviewer](https://dspdfviewer.danny-edel.de/) is another such viewer.
 
 Others just [use their IDE directly](https://staltz.com/your-ide-as-a-presentation-tool.html).
+
 ## Pinpoint
 
  * Native GNOME app

mnt reform launched a new product
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index e024a55d..51a9053b 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -224,6 +224,18 @@ with a quad-core ARM CPU.
 There was a possibility for an e-ink screen and hot-swappable
 keyboard, but that was scraped during production.
 
+Update: I haven't bought a MNT reform, on two grounds:
+
+ * it's not very powerful, for the price
+ * it's bulky, so not ideal for a travel laptop (which is why I own a
+   laptop in the first place)
+
+That said, MNT has now launched a new product called the [MNT Pocket
+Reform](https://mntre.com/media/reform_md/2022-06-20-introducing-mnt-pocket-reform.html). Now it all makes sense: the MNT Reform parts can be
+reused in the Pocket, and it seems the original Reform was a good
+prototype for the end goal, a pocketable computer. That, then, becomes
+really interesting as a travel laptop. Maybe. :)
+
 Wootbook
 --------
 

zutty crash
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index e6769956..68a6b8b3 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -98,7 +98,7 @@ Those are the projects I am considering.
    port support, Sixel, Kitty, iTerm graphics, built-in SSH client (!?)
  * xterm
  * [zutty](https://github.com/tomszilagyi/zutty): OpenGL rendering, true color, clipboard support, small
-   codebase, no wayland support
+   codebase, no wayland support, [crashes on bremner's](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014080)
 
 Stay tuned for more happy days.
 

vagrant likes sakura, worth investigating
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index 3eab1cbb..e6769956 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -90,6 +90,8 @@ Those are the projects I am considering.
    search, true color, font resize, URLs not clickable, but
    keyboard-driven selection, proper clipboard support
  * [kitty](https://github.com/kovidgoyal/kitty)
+ * [sakura](https://www.pleyades.net/david/projects/sakura) - libvte, wayland support, tabs, no menu bar, original
+   libvte gangster, dynamic font size
  * [termonad](https://github.com/cdepillabout/termonad) - Haskell?
  * [wez](https://wezfurlong.org/wezterm/) - Rust, Wayland, multiplexer, ligatures, scrollback
    search, clipboard support, bracketed paste, panes, tabs, serial

link to an example of bridge hell
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index f588635c..2e082dcc 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -384,7 +384,7 @@ IRC works: by default, anyone can join an IRC network even without
 authentication. Some channels require registration, but in general you
 are free to join and look around (until you get blocked, of course).
 
-I have heard anecdotal evidence that "moderating bridges is hell", and
+I have [seen anecdotal evidence](https://twitter.com/matrixdotorg/status/1542065092048654337) (CW: Twitter, [nitter link](https://nitter.it/matrixdotorg/status/1542065092048654337)) that "moderating bridges is hell", and
 I can imagine why. Moderation is already hard enough on one
 federation, when you bridge a room with another network, you inherit
 all the problems from *that* network but without the entire abuse

fix typo, thanks jvoisin
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 9a40edf0..f588635c 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -375,7 +375,7 @@ register their own homeserver, which makes this limited.
 
 Server admins can block IP addresses and home servers, but those tools
 are not easily available to room admins. There is an API
-(`m.room.server_acl` in `/devtools`) but the it is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928)
+(`m.room.server_acl` in `/devtools`) but it is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928)
 (thanks Austin Huang for the clarification).
 
 Matrix has the concept of guest accounts, but it is not used very

add TODOs
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index 769bf2f6..3eab1cbb 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -100,4 +100,7 @@ Those are the projects I am considering.
 
 Stay tuned for more happy days.
 
+TODO: update https://gitlab.com/anarcat/terms-benchmarks
+TODO: cross-ref from previous articles
+
 [[!tag draft]]

more details
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
index a5e16fe6..769bf2f6 100644
--- a/blog/wayland-terminal-emulators.md
+++ b/blog/wayland-terminal-emulators.md
@@ -18,18 +18,18 @@ https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)#Adoption
 
 In the previous article, I touched on those projects:
 
-| Terminal           | Changes since review                                                                                                                                                                                          |
-|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Alacritty][]      | releases! scrollback, better latency, URL launcher, clipboard support, still [not in Debian](http://bugs.debian.org/851639), but close                                                                        |
-| [GNOME Terminal][] | not much? couldn't find a changelog                                                                                                                                                                           |
-| [Konsole][]        | [not much](https://konsole.kde.org/changelog.html)                                                                                                                                                            |
-| [mlterm][]         | [long changelog](https://raw.githubusercontent.com/arakiken/mlterm/3.9.2/doc/en/ReleaseNote) but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!) |
-| [pterm][]          | [changes](https://www.chiark.greenend.org.uk/~sgtatham/putty/changes.html): Wayland support                                                                                                                   |
-| [st][]             | [unparseable changelog](https://git.suckless.org/st/), might include scrollback support through a third-party `scroll(1)` command I couldn't find                                                             |
-| [Terminator][]     | moved to GitHub, Python 3 support, not being dead                                                                                                                                                             |
-| [urxvt][]          | main rxvt fork, also known as rxvt-unicode                                                                                                                                                                    |
-| [Xfce Terminal][]  | uses GTK3, VTE                                                                                                                                                                                                |
-| [xterm][]          | the original X terminal                                                                                                                                                                                       |
+| Terminal           | Changes since review                                                                                                                                                                                                                           |
+|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Alacritty][]      | releases! scrollback, better latency, URL launcher, clipboard support, still [not in Debian](http://bugs.debian.org/851639), but close                                                                                                         |
+| [GNOME Terminal][] | not much? couldn't find a changelog                                                                                                                                                                                                            |
+| [Konsole][]        | [not much](https://konsole.kde.org/changelog.html)                                                                                                                                                                                             |
+| [mlterm][]         | [long changelog](https://raw.githubusercontent.com/arakiken/mlterm/3.9.2/doc/en/ReleaseNote) but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!)                                  |
+| [pterm][]          | [changes](https://www.chiark.greenend.org.uk/~sgtatham/putty/changes.html): Wayland support                                                                                                                                                    |
+| [st][]             | [unparseable changelog](https://git.suckless.org/st/), might include scrollback support through a third-party `scroll(1)` command I couldn't find                                                                                              |
+| [Terminator][]     | moved to GitHub, Python 3 support, not being dead                                                                                                                                                                                              |
+| [urxvt][]          | no significant changes, a single release, still in CVS!                                                                                                                                                                                        |
+| [Xfce Terminal][]  | [hard to parse changelog](https://gitlab.xfce.org/apps/xfce4-terminal/-/blob/master/NEWS), presumably some improvements to paste safety?                                                                                                       |
+| [xterm][]          | notoriously [hard to parse changelog](https://invisible-island.net/xterm/xterm.log.html), improvements to paste safety (`disallowedPasteControls`), fonts, [clipboard improvements](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=901249)? |
 
 [xterm]: http://invisible-island.net/xterm/
 [Xfce Terminal]: https://docs.xfce.org/apps/terminal/start
@@ -44,4 +44,60 @@ In the previous article, I touched on those projects:
 [GNOME Terminal]: https://wiki.gnome.org/Apps/Terminal
 [Alacritty]: https://github.com/jwilm/alacritty
 
+At this point I'm still using urxvt, bizarrely. I briefly played
+around with Konsole and xterm, but somehow reverted back to it.
+
+I would really, really like to like Alacritty, but it's still not
+packaged in Debian, and they [haven't fully addressed the latency
+issues](https://github.com/alacritty/alacritty/issues/673#issuecomment-658784144) although, to be fair, maybe it's just an impossible
+task. Once it's in Debian, maybe I'll reconsider.
+
+# Requirements
+
+Figuring out my requirements is actually a pretty hard thing to do. In
+my last reviews, I just tried a bunch of stuff and collected
+*everything*, but a lot of things (like tab support) I don't actually
+care about. So here's a set of things I actually do care about:
+
+ * latency
+ * resource usage
+ * proper clipboard support, that is:
+   * mouse selection and middle button uses PRIMARY
+   * <kbd>control-shift-c</kbd> and <kbd>control-shift-v</kbd> for
+     CLIPBOARD
+ * true color support
+ * no known security issues
+ * active project
+ * paste protection
+ * clickable URLs
+ * scrollback
+ * font resize
+ * non-destructive text-wrapping (ie. resizing a window doesn't drop
+   scrollback history)
+ * proper unicode support (at least latin-1, ideally "everything")
+ * good emoji support (at least showing them, ideally "nicely"), which
+   involves font fallbacks
+
+# Candidates
+
+Those are the projects I am considering.
+
+ * [alacritty][]
+ * [darktile](https://github.com/liamg/darktile) - GPU rendering, Unicode support, themable, ligatures
+   (optional), Sixel, window transparency, clickable URLs, true color
+   support
+ * [foot](https://codeberg.org/dnkl/foot) - Wayland only, daemon-mode, sixel images, scrollback
+   search, true color, font resize, URLs not clickable, but
+   keyboard-driven selection, proper clipboard support
+ * [kitty](https://github.com/kovidgoyal/kitty)
+ * [termonad](https://github.com/cdepillabout/termonad) - Haskell?
+ * [wez](https://wezfurlong.org/wezterm/) - Rust, Wayland, multiplexer, ligatures, scrollback
+   search, clipboard support, bracketed paste, panes, tabs, serial
+   port support, Sixel, Kitty, iTerm graphics, built-in SSH client (!?)
+ * xterm
+ * [zutty](https://github.com/tomszilagyi/zutty): OpenGL rendering, true color, clipboard support, small
+   codebase, no wayland support
+
+Stay tuned for more happy days.
+
 [[!tag draft]]

start documenting current state of terminal emulators
diff --git a/blog/wayland-terminal-emulators.md b/blog/wayland-terminal-emulators.md
new file mode 100644
index 00000000..a5e16fe6
--- /dev/null
+++ b/blog/wayland-terminal-emulators.md
@@ -0,0 +1,47 @@
+Back in 2018, I made a [two part series](https://anarc.at/blog/2018-04-12-terminal-emulators-1/) about terminal emulators
+that was actually pretty painful to write. So I'm not going to retry
+this here, at all, especially since I'm not submitting this to the
+excellent [LWN editors](https://lwn.net/) so I can get away with not being very good
+at writing. Phew.
+
+Still, it seems my future self will thank me for collecting my
+thoughts on the terminal emulators I have found out about since I
+wrote that article. Back then, Wayland was not quite at the level
+where it is now, being the default in Fedora (2016), Debian (2019),
+RedHat (2019), and Ubuntu (2021). Also, a bunch of folks thought they
+would solve everything by using OpenGL for rendering. Let's see how
+things stack up.
+
+https://en.wikipedia.org/wiki/Wayland_(display_server_protocol)#Adoption
+
+# Recap
+
+In the previous article, I touched on those projects:
+
+| Terminal           | Changes since review                                                                                                                                                                                          |
+|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Alacritty][]      | releases! scrollback, better latency, URL launcher, clipboard support, still [not in Debian](http://bugs.debian.org/851639), but close                                                                        |
+| [GNOME Terminal][] | not much? couldn't find a changelog                                                                                                                                                                           |
+| [Konsole][]        | [not much](https://konsole.kde.org/changelog.html)                                                                                                                                                            |
+| [mlterm][]         | [long changelog](https://raw.githubusercontent.com/arakiken/mlterm/3.9.2/doc/en/ReleaseNote) but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!) |
+| [pterm][]          | [changes](https://www.chiark.greenend.org.uk/~sgtatham/putty/changes.html): Wayland support                                                                                                                   |
+| [st][]             | [unparseable changelog](https://git.suckless.org/st/), might include scrollback support through a third-party `scroll(1)` command I couldn't find                                                             |
+| [Terminator][]     | moved to GitHub, Python 3 support, not being dead                                                                                                                                                             |
+| [urxvt][]          | main rxvt fork, also known as rxvt-unicode                                                                                                                                                                    |
+| [Xfce Terminal][]  | uses GTK3, VTE                                                                                                                                                                                                |
+| [xterm][]          | the original X terminal                                                                                                                                                                                       |
+
+[xterm]: http://invisible-island.net/xterm/
+[Xfce Terminal]: https://docs.xfce.org/apps/terminal/start
+[urxvt]: http://software.schmorp.de/pkg/rxvt-unicode.html
+[Terminator]: https://github.com/gnome-terminator/terminator/
+[st]: https://st.suckless.org/
+[PuTTY]: https://www.chiark.greenend.org.uk/%7Esgtatham/putty/
+[pterm]: https://manpages.debian.org/pterm
+[mlterm]: http://mlterm.sourceforge.net/
+[Konsole]: https://konsole.kde.org/
+[VTE]: https://github.com/GNOME/vte
+[GNOME Terminal]: https://wiki.gnome.org/Apps/Terminal
+[Alacritty]: https://github.com/jwilm/alacritty
+
+[[!tag draft]]

fix some typos, thanks marvil!
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index b40b3fba..9a40edf0 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -443,7 +443,7 @@ On IRC, it's quite easy to setup redundant nodes. All you need is:
 That's it: the node will join the network and people can connect to it
 as usual and share the same user/namespace as the rest of the
 network. The servers take care of synchronizing state: you do not need
-about replicating a database server.
+to worry about replicating a database server.
 
 (Now, experienced IRC people will know there's a catch here: IRC
 doesn't have authentication built in, and relies on "services" which
@@ -591,7 +591,7 @@ Matrix itself, but let's now dig into that.
 There were serious scalability issues of the main Matrix server,
 [Synapse](https://github.com/matrix-org/synapse/), in the past. So the Matrix team has been working hard to
 improve its design. Since Synapse 1.22 the home server can
-horizontally to multiple workers (see [this blog post for details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability))
+horizontally scale to multiple workers (see [this blog post for details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability))
 which can make it easier to scale large servers.
 
 ## Other implementations

note some GDPR research
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 9b372ab1..b40b3fba 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -112,6 +112,10 @@ Also keep in mind that, in the brave new [peer-to-peer](https://github.com/matri
 Matrix is heading towards, the boundary between server and client is
 likely to be fuzzier, which would make applying the GDPR even more difficult.
 
+Update: [this comment](https://lobste.rs/s/ixa4vr/matrix_notes#c_nzqaqb) links to [this post (in german)](https://www.cr-online.de/blog/2022/06/02/ein-fehler-in-der-matrix/) which
+apparently studied the question and concluded that Matrix is not
+GDPR-compliant.
+
 In fact, maybe Synapse should be designed so that there's no
 configurable flag to turn off data retention. A bit like how most
 system loggers in UNIX (e.g. syslog) come with a log retention system
@@ -970,6 +974,8 @@ dead, just like IRC is "dead" now.
 I wonder which path Matrix will take. Could it liberate us from these
 vicious cycles?
 
+Update: this generated some discussions on [lobste.rs](https://lobste.rs/s/ixa4vr/matrix_notes).
+
 [[!tag matrix irc history debian-planet python-planet review internet]]
 
 [^1]: [According to Wikipedia](https://en.wikipedia.org/wiki/Internet_Relay_Chat#Modern_IRC), there are currently about 500

another interesting build
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index b0f57245..2b0901e5 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -437,4 +437,8 @@ slots but strangely it's somewhat rare in user-level
 hardware. Most SATA controllers and disks support hot-swapping, but it
 needs to be double-checked.
 
+## Other builds
+
+See also <https://mtlynch.io/budget-nas/>.
+
 [[!tag node]]

document some shops
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index dd8eff36..693611fe 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -969,6 +969,15 @@ Conclusion:
 Looks like Fuji is targeting a more high-end market, Sony is all over
 the place, and Olympus is aiming at a lower-range.
 
+# Shops
+
+ * <https://royalphoto.com/>
+ * <https://photoservice.ca/>
+ * <https://www.camtecphoto.com/>
+ * <https://lozeau.com/>, acheté par Henry's
+ * <https://www.henrys.com/>
+ * <https://www.bhphotovideo.com/>
+
 Autres pages
 ============
 

mention the surface
diff --git a/hardware/tablet.mdwn b/hardware/tablet.mdwn
index 40bac923..70dd2f58 100644
--- a/hardware/tablet.mdwn
+++ b/hardware/tablet.mdwn
@@ -507,6 +507,18 @@ magnetic keyboard), it looks real promising.
 
 https://en.jingos.com/jingpad-a1/
 
+## Microsoft
+
+I feel really odd suggesting people buy *anything* from Microsoft, but
+there you have it, some fellow Debian Developer did, so I can't help
+but add it to the pile:
+
+https://changelog.complete.org/archives/10396-i-finally-found-a-solid-debian-tablet-the-surface-go-2
+
+Pretty bad iFixit score (3/10):
+
+https://www.ifixit.com/Device/Surface_Go_2
+
 Phones
 ======
 

note that matrix seem aware they cannot expire messages
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index 0abd60bd..9b372ab1 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -212,7 +212,8 @@ and from where is not well protected. Compared to a tool like Signal,
 which goes through great lengths to anonymize that data with features
 like [private contact discovery][], [disappearing messages](https://signal.org/blog/disappearing-messages/),
 [sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
-behind.
+behind. (Note: there is an [issue open about message lifetimes in
+Element](https://github.com/vector-im/element-meta/issues/364) since 2020, but it's not at even at the MSC stage yet.)
 
 [private contact discovery]: https://signal.org/blog/private-contact-discovery/
 

obviously rust has its own filesystem monitoring thing
diff --git a/blog/2019-11-20-file-monitoring-tools.mdwn b/blog/2019-11-20-file-monitoring-tools.mdwn
index d8807abb..f4c924fe 100644
--- a/blog/2019-11-20-file-monitoring-tools.mdwn
+++ b/blog/2019-11-20-file-monitoring-tools.mdwn
@@ -135,6 +135,18 @@ https://github.com/tinkershack/fluffy
  * somewhat [difficult commandline interface](https://manpages.debian.org/buster/inotify-tools/inotifywait.1.en.html)
  * no event deduplication
 
+## notify-rs
+
+<https://github.com/notify-rs/notify>
+
+ * 2016-2022
+ * Rust
+ * CC0 / Artistic
+ * [Debian package](https://tracker.debian.org/pkg/rust-notify) since 2022
+ * cross-platform library, not a commandline tool
+ * used by `cargo watch`, [watchexec](https://github.com/watchexec/watchexec) ([RFP](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=946546)), and Python's
+   [watchfiles](https://watchfiles.helpmanual.io/) which features a [CLI tool](https://watchfiles.helpmanual.io/cli/)
+
 ## systemd .path units
 
 <https://www.freedesktop.org/software/systemd/man/systemd.path.html>
@@ -402,4 +414,15 @@ old inotify wrappers because I don't find them as interesting as the
 newer ones - they're really hard to use! - but I guess it's worth
 mentioning them even if just to criticise them. ;)
 
+## timetrack
+
+<https://github.com/joshmcguigan/timetrack>
+
+ * 2018-2019
+ * Rust
+ * Apache-2.0, MIT
+ * No Debian package
+ * tracks filesystem changes to report time spent on different things,
+   see also [this discussion on selfspy for other alternatives](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=873955#53)
+
 [[!tag debian debian-planet software review programming]]

will not use selfspy
diff --git a/blog/2017-10-02-free-software-activities-september-2017.mdwn b/blog/2017-10-02-free-software-activities-september-2017.mdwn
index ee64491c..e4769250 100644
--- a/blog/2017-10-02-free-software-activities-september-2017.mdwn
+++ b/blog/2017-10-02-free-software-activities-september-2017.mdwn
@@ -171,6 +171,9 @@ for people that encrypt their database.
 Next step is to [package selfspy in Debian](https://bugs.debian.org/873955) which should hopefully
 be simple enough...
 
+Update, 2022: I decided not to use Selfspy, too much of a security
+liability. See instead [this discussion on alternatives](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=873955#53).
+
 Restic documentation security
 -----------------------------
 

respond to Austin Huang
diff --git a/blog/2022-06-17-matrix-notes/comment_2_3e1ab50e7ef7fd9e4e451bd51f8f1cc0._comment b/blog/2022-06-17-matrix-notes/comment_2_3e1ab50e7ef7fd9e4e451bd51f8f1cc0._comment
new file mode 100644
index 00000000..926e4b2d
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_2_3e1ab50e7ef7fd9e4e451bd51f8f1cc0._comment
@@ -0,0 +1,65 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""some responses"""
+ date="2022-06-18T16:23:06Z"
+ content="""
+Allo!
+
+So first off, I should note that I actually put some poor Matrix.org person through the pain of reading a draft of this article before publishing it, so I consider it somewhat accurate, or at least as much as reasonably possible considering the sheer size of the thing.
+
+Now, as to your specific comments...
+
+> Writing all the third parties together is quite misleading and you should definitely separate them.
+
+My reviewer also had that objection, but I will point out that the distinction is *not* made in the privacy policy. So I think it would actually be unfair to split it out here: it's not like you can actually pick and choose which part of the privacy policy you accept when you start using Matrix services.
+
+In fact I'd even say it's a problem that all of those are indiscriminately present in the policy.
+
+> > As an aside, I also appreciate that Matrix.org has a fairly decent code of conduct
+
+> [response summary: they ban people too easily]
+
+I err on the side of banning more people than less, to be honest, so I'm perfectly fine with this. Maybe it would be better to have two rooms, one for explicit code violations and one, more "liberal" room where anything goes?
+
+You also made comments regarding Mastodon and moderation policies which I don't quite grasp the point of.
+
+[...]
+
+Thanks for the clarifications on Mjolnir, room blocking, and guest accounts, I updated the article accordingly.
+
+> > tombstone event
+>
+> It does have a GUI in Element:
+>
+> * `/upgraderoom`
+
+That's literally not a GUI: it's a text command. 
+
+> > but Matrix.org people are trying to deprecate it in favor of \"Spaces\"
+>
+> Citation required. Also Spaces are rooms and so they can also be included in room directories.
+
+The reviewer didn't wish to be cited, so I can't actually provide a quote here. Happy to stand corrected, but it does feel like a large feature overlap, no?
+
+> > New users can be added to a space or room automatically in Synapse
+>
+> In public homeservers, this may leak account age.
+
+Considering the entire history of everything is available forever on home servers, that seems like a minor compromise to get higher room availability in a distributed cluster (which is one of my use cases, that, granted, I did not make very clear).
+
+> Only public aliases (local aliases are unrestricted afaik). Also they're not required for listing on room directories.
+
+Now I'm even more confused than I was before. Local addresses, aliases, public aliases, wtf?
+
+> Given that [this](https://arewep2pyet.com/) is a thing, it is likely to be the goal.
+
+added a link.
+
+> > register (with a password! aargh!)
+> 
+> Don't you also have a password on Signal?
+
+Not really. They really want you to set a PIN so that you can do account recovery when you lose your phone, but I am not sure it's mandatory. Even if it was, it's not something you get prompted for all the time, only when you lose a device. In Element, I frequently had to login again, including in the Android app. I never had to use my Signal PIN so far (although for a while they used to prompt for it so that you wouldn't forget it, but they thankfully stopped that annoying practice).
+
+Thanks for the review!
+"""]]

add some corrections from Austin Huang, thanks!
diff --git a/blog/2022-06-17-matrix-notes.md b/blog/2022-06-17-matrix-notes.md
index bd645ed0..0abd60bd 100644
--- a/blog/2022-06-17-matrix-notes.md
+++ b/blog/2022-06-17-matrix-notes.md
@@ -369,12 +369,12 @@ registered users can join") by default, except that anyone can
 register their own homeserver, which makes this limited.
 
 Server admins can block IP addresses and home servers, but those tools
-are not currently available to room admins. So it would be nice to
-have room admins have that capability, just like IRC channel admins
-can block users based on their IP address.
+are not easily available to room admins. There is an API
+(`m.room.server_acl` in `/devtools`) but it is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928)
+(thanks Austin Huang for the clarification).
 
 Matrix has the concept of guest accounts, but it is not used very
-much, and virtually no client supports it. This contrasts with the way
+much, and virtually no client or homeserver supports it. This contrasts with the way
 IRC works: by default, anyone can join an IRC network even without
 authentication. Some channels require registration, but in general you
 are free to join and look around (until you get blocked, of course).
@@ -597,7 +597,7 @@ performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), Gola
 are feature-complete so there's a trade-off to be made there.  Synapse
 is also adding a lot of feature fast, so it's an open question whether
 the others will ever catch up. (I have heard that Dendrite might
-actually surpass Synapse in features within a few years, which would
+actually [surpass Synapse in features within a few years](https://arewep2pyet.com/), which would
 put Synapse in a more "LTS" situation.)
 
 ## Latency

approve comment
diff --git a/blog/2022-06-17-matrix-notes/comment_1_38498fa7053818f0dd619d58aed2b92c._comment b/blog/2022-06-17-matrix-notes/comment_1_38498fa7053818f0dd619d58aed2b92c._comment
new file mode 100644
index 00000000..c655669c
--- /dev/null
+++ b/blog/2022-06-17-matrix-notes/comment_1_38498fa7053818f0dd619d58aed2b92c._comment
@@ -0,0 +1,122 @@
+[[!comment format=mdwn
+ ip="206.180.245.232"
+ claimedauthor="Austin Huang"
+ url="https://austinhuang.me"
+ subject="Some comments and whatnot from another Montréaler"
+ date="2022-06-18T04:29:02Z"
+ content="""
+Hello there. Some comments from an active Matrix user (aussi un Montréalais, but I don't work for matrix.org or New Vector), from top to bottom. (For other readers: they're AFAIK so please correct me if necessary.)
+
+> Element 2.2.1: mentions many more third parties
+
+Writing all the third parties together is quite misleading and you should definitely separate them. Specifically, according to my understanding of the text:
+
+* Twilio, if you register with a phone number on an EMS-operated homeserver
+* Stripe and Quaderno if you're a paying customer
+* LinkedIn, Twitter, Google, Outplay, Salesforce, and Pipedrive if you click on an ad of Element from a third-party platform (presumably they're not used if you land on Element directly, so maybe they shouldn't be included)
+* Hubspot, Matomo (selfhosted) and Posthog for website analytics
+
+So I don't think they're applicable to the *client*.
+
+Whether privacy policies are actually followed is a different thing, but assumptions need to be clearly indicated, in my opinion.
+
+> As an aside, I also appreciate that Matrix.org has a fairly decent code of conduct
+
+The enforcement of it (bans in #matrix-org-coc-bl:matrix.org whose reason is `code of conduct violations`) is commonly believed to be dubious at times. Specifically, they ban people (including those not exhibiting bad faith) too easily (often without warning) and there is no real appeal process (AFAIK abuse@matrix.org doesn't manually respond to most emails). Since #matrix-org-coc-bl:matrix.org is also used outside of official Matrix rooms (it applies to public communities directly related to EMS-operated homeservers as well, eg. FOSDEM, Arch Linux, etc.), there definitely should be some constraints w/r/t the use of bans.
+
+Third-party homeservers seem to be much better moderated.
+
+> The mjolnir bot
+
+Unfortunately, *in practice*, Mjolnir requires a homeserver to be run (given that the bot expects to be free of ratelimit, which requires an exception in the homeserver, which is unlikely to be granted to someone who's not related to the homeserver's administration). This makes it inaccessible to people who cannot run a homeserver (and cannot trust one who can to handle moderation). See also [here](https://www.aminda.eu/blog/english/2021/12/05/matrix-community-abuse-security-by-obscurity.html).
+
+> Server admins can block IP addresses and home servers, but those tools are not currently available to room admins.
+
+Room admins can block homeservers through ACL, it's not intuitive (`/devtools` => Explore Room State => `m.room.server_acl`) but it is indeed *available*. But ACL itself is [not reliable](https://github.com/matrix-org/matrix-spec/issues/928).
+
+Also, Mjolnir user bans support wildcard (`@*:example.com`).
+
+> Matrix has the concept of guest accounts, but it is not used very much, and virtually no client supports it. 
+
+Element does, but virtually no *homeserver* supports it (since it is easily abusable).
+
+> as servers could refuse parts of the updates
+
+pretty sure you can't do that without affecting all future updates
+
+> I have heard anecdotal evidence that \"moderating bridges is hell\"
+
+Since bridge puppets are still users, you could ban them or redact their messages. If the platform can be accessed through multiple bridges running the same bridge software, you could use Mjolnir to ban `@bridge_username:*` with the server name as wildcard. Of course, that doesn't prevent the situation where multiple public bridges exist that run different software and do not require explicit approval (eg. Bifrost for XMPP), but that's *rare*.
+
+> and then that room will be available on your `example.com` homeserver as `#foo:example.com`
+
+But the room is already available as, say, `#foo:matrix.org`, and if that is the selected main alias then that's what's gonna be shown even when you add the room to `example.com`'s room directory. So local aliases are only for someone on *your* homeserver to link this room, and non-main public aliases are only for someone on *any* homeserver to link this room, which means neither have any actual uses other than 1. vanity, and 2. to act as a redundancy in case all other public aliases fail (due to homeserver outage).
+
+> tombstone event
+
+It does have a GUI in Element:
+
+* `/upgraderoom`
+* For rooms before version 9, room settings => Security & Privacy => \"Space members\" has an \"upgrade required\" which is clickable if you have the permission to upgrade it (IIRC)
+
+> but Matrix.org people are trying to deprecate it in favor of \"Spaces\"
+
+Citation required. Also Spaces are rooms and so they can also be included in room directories.
+
+> New users can be added to a space or room automatically in Synapse
+
+In public homeservers, this may leak account age.
+
+> It's possible to restrict who can add aliases
+
+Only public aliases (local aliases are unrestricted afaik). Also they're not required for listing on room directories.
+
+> I have heard that Dendrite might actually surpass Synapse in features within a few years
+
+Given that [this](https://arewep2pyet.com/) is a thing, it is likely to be the goal.
+
+> In Matrix, you need to learn about home servers, pick one,
+
+How Element markets itself effectively forces everyone onto matrix.org (or EMS), and that's a problem
+
+> register (with a password! aargh!)
+
+Don't you also have a password on Signal?
+
+> but I don't feel confident sharing my phone number there
+
+You could share email addresses, but yes, I get your point.
+
+Use of identity servers became opt-in in late? 2020 amid concerns that it makes Riot.im (then) a spyware by forcing a call-home. In fact a selling point of Matrix is the non-requirement of email or phone number (if applicable; but it is obvious that this will lead to abuse).
+
+> It does not support large multimedia rooms
+
+Rooms with more than 2 users don't have native VoIP [yet](https://element.io/blog/introducing-native-matrix-voip-with-element-call/).
+
+> Working on Matrix
+
+That's what happens when the same people effectively own both the protocol and the first client... But then you said of IRC, that
+
+> If I were to venture a guess, I'd say that infighting, lack of a standardization body, and a somewhat annoying protocol meant the network could not grow.
+
+So it could also be an advantage that Matrix has a standardization body from the get-go (whether the body itself is good is a different question).
+
+> I just want secure, simple messaging. Possibly with good file transfers, and video calls.
+
+I don't think that has been Matrix's goal from the get-go, though you could say they're *now* working towards that.
+
+For me, Matrix is still only for two purposes:
+
+1. For individuals, to run a community (replacing Telegram and Discord), and
+2. For organizational communication (replacing Slack and MS Teams).
+
+Sure, interoperability (which Matrix has probably lobbied for), but the cost of bridging is still quite high.
+
+> Mastodon has started working on a global block list of fascist servers
+
+Some (certainly not mainstream) consider \"not blocking abusive servers\" as grounds for blocking so it's already happening on the fediverse.
+
+> but matrix.org publishes a (federated) block list of hostile servers
+
+[Element effectively encourages the use of blocklists](https://element.io/blog/moderation-needs-a-radical-change/), so abuse is bound to happen. Look, of course I support having people choose their own moderation policy, but that only works *in theory*: in practice greater power does not lead to greater responsibility.
+"""]]

creating tag page tag/matrix
diff --git a/tag/matrix.mdwn b/tag/matrix.mdwn
new file mode 100644
index 00000000..5e0e8bbd
--- /dev/null
+++ b/tag/matrix.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged matrix"]]
+
+[[!inline pages="tagged(matrix)" actions="no" archive="yes"
+feedshow=10]]

publish
diff --git a/blog/matrix-notes.md b/blog/2022-06-17-matrix-notes.md
similarity index 100%
rename from blog/matrix-notes.md
rename to blog/2022-06-17-matrix-notes.md

one last review
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 8b3faca6..bd645ed0 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -1,22 +1,25 @@
+[[!meta title="Matrix notes"]]
+
 I have some concerns about Matrix (the protocol, not the movie that
 came out recently, although I do have concerns about that as
 well). I've been watching the project for a long time, and it seems
 more a promising alternative to many protocols like IRC, XMPP, and
 Signal.
 
-This review will sound a bit negative, because it focuses on the
-concerns I have. I am the operator of an IRC network and people keep
-asking me to bridge it with Matrix. I have myself considered a few
-times the idea of just giving up and converting to Matrix. This space
-is a living document exploring my research of that problem space.
+This review may sound a bit negative, because it focuses on those
+concerns. I am the operator of an IRC network and people keep asking
+me to bridge it with Matrix. I have myself considered just giving up
+on IRC and converting to Matrix. This space is a living document
+exploring my research of that problem space. The TL;DR: is that no,
+I'm not setting up a bridge just yet, and I'm still on IRC.
 
 This article was written over the course of the last three months, but
 I have been watching the Matrix project for years (my logs seem to say
-2016 at least), and is rather long. It will likely take you half an
-hour to read, so [copy this over to your ebook reader](https://gitlab.com/anarcat/wallabako), your
-tablet, dead trees, and sit back comfortably. Or, alternatively, just
-jump to a section that interest you or, more likely, the
-[conclusion](#conclusion).
+2016 at least). The article is rather long. It will likely take you
+half an hour to read, so [copy this over to your ebook reader](https://gitlab.com/anarcat/wallabako),
+your tablet, or dead trees, and lean back and relax as I show you
+around the Matrix. Or, alternatively, just jump to a section that
+interests you, most likely the [conclusion](#conclusion).
 
 [[!toc levels=2]]
 
@@ -60,7 +63,8 @@ normally tightly controlled. So, if you trust your IRC operators, you
 should be fairly safe. Obviously, clients *can* (and often do, even if
 [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all messages, but this is generally not
 the default. [Irssi](https://irssi.org/), for example, does [not log by
-default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412). Some IRC bouncers *do* log do disk however...
+default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412). IRC bouncers are more likely to log to disk, of course,
+to be able to do what they do.
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
@@ -68,8 +72,8 @@ database. Then it will transmit that message to all clients connected
 to that server and room, and to all other servers that have clients
 connected to that room. Those remote servers, in turn, will keep a
 copy of that message and all its metadata in their own database, by
-default forever. On encrypted rooms, thankfully, those messages are
-encrypted, but not their metadata.
+default forever. On encrypted rooms those messages are encrypted, but
+not their metadata.
 
 There is a mechanism to expire entries in Synapse, but it is [not
 enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one should generally assume that a message
@@ -84,7 +88,7 @@ will log all content and metadata from that room. That includes
 private, one-on-one conversations, since those are essentially rooms
 as well.
 
-In the context of the GDPR, this is really tricky: who's the
+In the context of the GDPR, this is really tricky: who is the
 responsible party (known as the "data controller") here? It's
 basically any yahoo who fires up a home server and joins a room.
 
@@ -100,7 +104,9 @@ enforce your right to be forgotten in a given room, you would have to:
 I recognize this is a hard problem to solve while still keeping an
 open ecosystem. But I believe that Matrix should have much stricter
 defaults towards data retention than right now. Message expiry should
-be enforced *by default*, for example.
+be enforced *by default*, for example. (Note that there are also
+redaction policies that could be used to implement part of the GDPR
+automatically, see the privacy policy discussion below on that.)
 
 Also keep in mind that, in the brave new [peer-to-peer](https://github.com/matrix-org/pinecone) world that
 Matrix is heading towards, the boundary between server and client is
@@ -134,7 +140,7 @@ When I first looked at Matrix, five years ago, Element.io was called
 
 When I asked Matrix people about why they were using Google Analytics,
 they explained this was for development purposes and they were aiming
-for velocity at the time, not privacy.
+for velocity at the time, not privacy (paraphrasing here).
 
 They also included a "free to snitch" clause:
 
@@ -143,16 +149,17 @@ They also included a "free to snitch" clause:
 > obligation, the instructions or requests of a governmental authority
 > or regulator, including those outside of the UK.
 
-Those are really *broad* terms.
+Those are really *broad* terms, above and beyond what is typically
+expected legally.
 
 Like the current retention policies, such user tracking and
-... "liberal" collaboration practices with the state sets a bad
+... "liberal" collaboration practices with the state set a bad
 precedent for other home servers.
 
 Thankfully, since the above policy was published (2017), the GDPR was
 "implemented" ([2018](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation)) and it seems like both the [Element.io
 privacy policy](https://element.io/privacy) and the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been
-somewhat improved since then.
+somewhat improved since.
 
 Notable points of the new privacy policies:
 
@@ -209,22 +216,22 @@ behind.
 
 [private contact discovery]: https://signal.org/blog/private-contact-discovery/
 
-This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (open in 2019) in Synapse, but this is not
-just an implementation issue, it's a flaw in the protocol itself. Home
-servers keep join/leave of all rooms, which gives clear information
-about who is talking to. Synapse logs are also quite verbose and may
+This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (opened in 2019) in Synapse, but this is
+not just an implementation issue, it's a flaw in the protocol
+itself. Home servers keep join/leave records for all rooms, which gives clear
+text information about who is talking to whom. Synapse logs may also
 contain privately identifiable information that home server admins
-might not be aware of in the first place. Those logs rotation are also
-separate from the server-level retention policy, which may be
+might not be aware of in the first place. Those log rotation policies
+are separate from the server-level retention policy, which may be
 confusing for a novice sysadmin.
 
 Combine this with the federation: even if you trust your home server
 to do the right thing, the second you join a public room with
 third-party home servers, those ideas kind of get thrown out because
 those servers can do whatever they want with that information. Again,
-a problem that is hard to solve in a federation.
+a problem that is hard to solve in any federation.
 
-To be fair, IRC doesn't have a great story here either: any client
+To be fair, IRC doesn't have a great story here either: any *client*
 knows not only who's talking to who in a room, but also typically
 their client IP address. Servers *can* (and often do) *obfuscate*
 this, but often that obfuscation is trivial to reverse. Some servers
@@ -246,10 +253,10 @@ servers because some people connect to IRC using Matrix. This, in
 turn, means that Matrix will connect to that URL to generate a link
 preview.
 
-I feel this is a security issue, especially because those sockets
-would be kept open seemingly *forever*. I tried to warn the Matrix
-security team but somehow, I don't think this issue was taken very
-seriously. Here's the disclosure timeline:
+I feel this outlines a security issue, especially because those
+sockets would be kept open seemingly *forever*. I tried to warn the
+Matrix security team but somehow, I don't think this issue was taken
+very seriously. Here's the disclosure timeline:
 
  * January 18: contacted Matrix security
  * January 19: response: already [reported as a bug](https://github.com/matrix-org/synapse/issues/8302)
@@ -269,14 +276,18 @@ There are a couple of problems here:
  1. the bug was publicly disclosed in September 2020, and not
     considered a security issue until I notified them, and even then,
     I had to insist
+
  2. no clear disclosure policy timeline was proposed or seems
     established in the project (there is a [security disclosure
     policy](https://matrix.org/security-disclosure-policy/) but it doesn't include any predefined timeline)
+
  3. I wasn't informed of the disclosure
+
  4. the actual solution is a size limit (10MB, already implemented), a
     time limit (30 seconds, implemented in [PR 11784][]), and a
     content type allow list (HTML, "media" or JSON, implemented in [PR
     11936][]), and I'm not sure it's adequate
+
  5. (pure vanity:) I did not make it to their [Hall of fame](https://matrix.org/security-disclosure-policy/)
 
 [PR 11784]: https://github.com/matrix-org/synapse/pull/11784
@@ -285,12 +296,12 @@ There are a couple of problems here:
 I'm not sure those solutions are adequate because they all seem to
 assume a single home server will pull that one URL for a little while
 then stop. But in a federated network, *many* (possibly thousands)
-home servers may connected to a single room at once. If an attacker
-would drop a link into such a room, *all* those servers would connect
-to that link *all at once*. This is basically an amplification attack:
-a small packet will generate a lot of traffic to a single target. It
-doesn't matter there are size or time limits: the amplification is
-what matters here.
+home servers may be connected in a single room at once. If an attacker
+drops a link into such a room, *all* those servers would connect to
+that link *all at once*. This is an amplification attack: a small
+amount of traffic will generate a lot more traffic to a single
+target. It doesn't matter that there are size or time limits: the
+amplification is what matters here.
 
 It should also be noted that *clients* that generate link previews
 have more amplification because they are more numerous than
@@ -300,18 +311,16 @@ generate link previews as well.
 That said, this is possibly not a problem specific to Matrix: any
 federated service that generates link previews may suffer from this.
 
-I'm honestly not sure what the solution is here.

(Diff truncated)
details about message signing
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 856ed39a..8b3faca6 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -386,12 +386,22 @@ themselves.
 
 That said, if server B administrator hijack user `joe` on server B,
 they will hijack that room *on that specific server*. This will not
-(necessarily) affect users on the other servers. It does seem like a
-major flaw that room credentials are bound to Matrix identifiers, as
-opposed to the E2E encryption credentials. This means that even in an
-encrypted room with fully verified members, a compromised or hostile
-home server could take over the room, inject an hostile party and
-start injecting hostile content or listen in on the conversations.
+(necessarily) affect users on the other servers, as servers could
+refuse parts of the updates. In practice, it's not clear to me at the
+moment how to block such an attack.
+
+It does seem like a major flaw that room credentials are bound to
+Matrix identifiers, as opposed to the E2E encryption credentials. So
+in an encrypted room even with fully verified members, a compromised
+or hostile home server can still take over the room by impersonating
+an admin and injecting a hostile user. That user can then send events
+or listen in on the conversations.
+
+This is even more frustrating when you consider that Matrix events are
+actually [signed](https://spec.matrix.org/latest/#architecture) and therefore have *some* authentication attached to
+them. That signature, however, is made from the homeserver PKI keys,
+*not* the client's E2E keys, which makes E2E feel like it has been
+"bolted on" later.
 
 # Availability
 

spell check, one last TODO left (architecture)
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index b5c5f1ad..856ed39a 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -132,7 +132,7 @@ When I first looked at Matrix, five years ago, Element.io was called
 > browse our Website and use our Service and also allows us to improve
 > our Website and our Service. 
 
-When I asked Matrix people about why they were using Google analytics,
+When I asked Matrix people about why they were using Google Analytics,
 they explained this was for development purposes and they were aiming
 for velocity at the time, not privacy.
 
@@ -202,7 +202,7 @@ As an aside, I also appreciate that Matrix.org has a fairly decent
 Overall, privacy protections in Matrix mostly concern message
 contents, not metadata. In other words, who's talking with who, when
 and from where is not well protected. Compared to a tool like Signal,
-which goes through great lengths to anonymise that data with features
+which goes through great lengths to anonymize that data with features
 like [private contact discovery][], [disappearing messages](https://signal.org/blog/disappearing-messages/),
 [sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
 behind.
@@ -559,9 +559,9 @@ blog post for details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-
 servers.
 
 There are other promising home servers implementations from a
-performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), golang, [entered beta in late
+performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), Golang, [entered beta in late
 2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit), Rust, beta; [others](https://matrix.org/faq/#can-i-write-a-matrix-homeserver%3F)), but none of those
-are feature-complete so there's a tradeoff to be made there.  Synapse
+are feature-complete so there's a trade-off to be made there.  Synapse
 is also adding a lot of feature fast, so it's an open question whether
 the others will ever catch up.
 
@@ -590,7 +590,7 @@ time each message has to take.
 
 (I assume, here, that each Matrix message is delivered through at
 least two new HTTP sessions, which therefore require up to 8 packet
-roundtrips whereas in IRC, the existing socket is reused so it's
+round-trips whereas in IRC, the existing socket is reused so it's
 basically 2 round-trips.)
 
 Some [courageous person](https://blog.lewman.com/) actually made some [tests of various
@@ -649,7 +649,7 @@ pretty cool that it works, and they actually did it [pretty well][private contac
 
 Registration is also less obvious: in Signal, the app just needs to
 confirm your phone number and it's generally automated. It's
-frictionless and quick. In Matrix, you need to learn about home
+friction-less and quick. In Matrix, you need to learn about home
 servers, pick one, register (with a password! aargh!), and then setup
 encryption keys (not default), etc. It's really a lot more friction.
 
@@ -823,7 +823,7 @@ something that *anyone* working on a federated system should study in
 detail, because they are *bound* to make the same mistakes if they are
 not familiar with it. The short version is:
 
- * 1988: Finish researcher publishes first IRC codebase publicly
+ * 1988: Finnish researcher publishes first IRC source code
  * 1989: 40 servers worldwide, mostly universities
  * 1990: EFnet ("eris-free network") fork which blocks the "open
    relay", named [Eris][] - followers of Eris form the A-net, which
@@ -832,7 +832,7 @@ not familiar with it. The short version is:
    routing improvements and timestamp-based channel synchronisation
  * 1994: DALnet fork, from Undernet, again on a technical disagreement
  * 1995: Freenode founded
- * 1996: IRCnet forks from EFnet, following a flamewar of historical
+ * 1996: IRCnet forks from EFnet, following a flame war of historical
    proportion, splitting the network between Europe and the Americas
  * 1997: Quakenet founded
  * 1999: (XMPP founded)
@@ -892,7 +892,7 @@ more machine-learning tools to sort through email and those systems
 are, fundamentally, unknowable.
 
 HTTP has somehow managed to live in a parallel universe, as it's
-technically still completely federated: anyone can start a webserver
+technically still completely federated: anyone can start a web server
 if they have a public IP address and anyone can connect to it. The
 catch, of course, is how you find the darn thing. Which is how Google
 became one of the most powerful corporations on earth, and how they
@@ -928,8 +928,6 @@ dead. Just like IRC is dead now.
 I wonder which path Matrix will take. Could it liberate us from those
 vicious cycles?
 
-TODO: spellcheck
-
 [[!tag draft]]
 
 [^1]: [According to Wikipedia](https://en.wikipedia.org/wiki/Internet_Relay_Chat#Modern_IRC), there are currently about 500
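The round-trip estimate in that patch is easy to sanity-check. A back-of-the-envelope sketch, assuming an illustrative 50 ms network round-trip time (the 2-vs-8 counts come from the text; the RTT does not):

```shell
# Rough latency comparison at an assumed 50 ms round-trip time;
# the round-trip counts are the estimates from the text above
awk 'BEGIN {
  rtt = 50                                      # ms, assumed
  printf "IRC:    ~%d ms (2 round-trips)\n", 2 * rtt
  printf "Matrix: ~%d ms (up to 8 round-trips)\n", 8 * rtt
}'
```

At those numbers, the difference would indeed be noticeable in conversation.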

fix more TODOs, minor tweaks
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 2a6eedf1..b5c5f1ad 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -32,9 +32,9 @@ It's also (when [compared with XMPP](https://matrix.org/faq/#what-is-the-differe
 global JSON database with an HTTP API and pubsub semantics - whilst
 XMPP can be thought of as a message passing protocol."
 
-TODO: expand with
-https://matrix.org/faq/#what-is-the-current-project-status
-TODO: watch out for dupes with the numbers in conclusion
+[According to their FAQ](https://matrix.org/faq/#what-is-the-current-project-status), the project started in 2014, has about
+20,000 servers, and millions of users. Matrix works over HTTPS, but
+on a [special port](https://matrix.org/faq/#what-ports-do-i-have-to-open-up-to-join-the-global-matrix-federation%3F): 8448.
 
 # Security and privacy
 
@@ -386,10 +386,12 @@ themselves.
 
 That said, if server B's administrator hijacks user `joe` on server B,
 they will hijack that room *on that specific server*. This will not
-(necessarily) affect users on the other servers.
-
-TODO: so what happens here, a fork? how does Matrix resolve this?
-TODO: why isn't this bound to E2E credentials?
+(necessarily) affect users on the other servers. It does seem like a
+major flaw that room credentials are bound to Matrix identifiers, as
+opposed to the E2E encryption credentials. This means that even in an
+encrypted room with fully verified members, a compromised or hostile
+home server could take over the room, inject a hostile party and
+start injecting hostile content or listen in on the conversations.
 
 # Availability
 
@@ -544,7 +546,7 @@ on the other hand, use the [client-server discovery API](https://spec.matrix.org
 what allows a given client to find your home server when you type your
 Matrix ID on login.
 
-TODO: review FAQ
+TODO: review architecture: https://spec.matrix.org/latest/#architecture
 
 # Performance
 
@@ -948,7 +950,8 @@ TODO: spellcheck
        * Pinterest: 480M
        * Twitter: 397M
 
-      Notable omission: Youtube, with 2.6B users...
+      Notable omission from that list: Youtube, with its mind-boggling
+      2.6 billion users...
 
       Those are not the kind of numbers you just "need to convince a
       brother or sister" to grow the network...

forgot about bots and voip
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f6015277..2a6eedf1 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -716,6 +716,39 @@ actually fit well in there. Going with `gomuks`, on the other hand,
 would mean running it in parallel with Irssi or ... ditching IRC,
 which is a leap I'm not quite ready to take just yet.
 
+Oh, and basically none of those clients (except Nheko and Element)
+support VoIP, which is still kind of a second-class citizen in
+Matrix. It does not support large rooms, for example: [Jitsi was used
+for FOSDEM](https://matrix.org/blog/2022/02/07/hosting-fosdem-2022-on-matrix/) instead of the native videoconferencing system.
+
+## Bots
+
+This falls a little outside the "usability" section, but I didn't know
+where to put this... There are a few Matrix bots out there, and you are
+likely going to be able to replace your existing bots with Matrix
+bots. It's true that IRC has a long and impressive history with lots
+of various bots doing various things, but given how young Matrix is,
+there's still a good variety:
+
+ * [maubot](https://github.com/maubot/maubot): generic bot with tons of usual plugins like sed, dice,
+   karma, xkcd, echo, rss, reminder, translate, react, exec,
+   gitlab/github webhook receivers, weather, etc
+ * [opsdroid](https://github.com/opsdroid/opsdroid): framework to implement "chat ops" in Matrix,
+   connects with Matrix, GitHub, GitLab, Shell commands, Slack, etc
+ * [matrix-nio](https://github.com/poljar/matrix-nio): another framework, used to build [lots more
+   bots](https://matrix-nio.readthedocs.io/en/latest/examples.html) like:
+   * [hemppa](https://github.com/vranki/hemppa): generic bot with various functionality like weather,
+     RSS feeds, calendars, cron jobs, OpenStreetmaps lookups, URL
+     title snarfing, wolfram alpha, astronomy pic of the day, Mastodon
+     bridge, room bridging, oh dear
+   * [devops](https://github.com/rdagnelie/devops-bot): ping, curl, etc
+   * [podbot](https://github.com/interfect/podbot): play podcast episodes from AntennaPod
+   * [cody](https://gitlab.com/carlbordum/matrix-cody): Python, Ruby, Javascript REPL
+   * [eno](https://github.com/8go/matrix-eno-bot): generic bot, "personal assistant"
+ * [mjolnir](https://github.com/matrix-org/mjolnir): moderation bot
+ * [hookshot](https://github.com/Half-Shot/matrix-hookshot): bridge with GitLab/GitHub
+ * [matrix-monitor-bot](https://github.com/turt2live/matrix-monitor-bot): latency monitor
+
 ## Working on Matrix
 
 As a developer, I find Matrix kind of intimidating. The specification

forgot youtube
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 82f30502..f6015277 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -915,5 +915,7 @@ TODO: spellcheck
        * Pinterest: 480M
        * Twitter: 397M
 
+      Notable omission: Youtube, with 2.6B users...
+
       Those are not the kind of numbers you just "need to convince a
       brother or sister" to grow the network...

note about sbuild-qemu-boot: not in bullseye, and typo in argument
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index 89fc0ccf..4b6ebebf 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -94,6 +94,10 @@ account with an empty password.
 
 ## Other useful tasks
 
+Note that some of the commands below (namely the ones depending on
+`sbuild-qemu-boot`) assume you are running Debian 12 (bookworm) or
+later.
+
  * enter the VM to make test, changes will be discarded  (thanks Nick
    Brown for the `sbuild-qemu-boot` tip!):
  
@@ -109,7 +113,7 @@ account with an empty password.
  * enter the VM to make *permanent* changes, which will *not* be
    discarded:
 
-        sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
+        sudo sbuild-qemu-boot --read-write /srv/sbuild/qemu/unstable-amd64.img
 
    Equivalent command:
 

lvm copy
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 3a518f8a..82627eac 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -871,6 +871,22 @@ Once that's done, export the pools to disconnect the drive:
     zpool export bpool-tubman
     zpool export rpool-tubman
 
+## LVM benchmark
+
+Copied the 512GB SSD/M.2 device to *another* 1024GB NVMe/M.2 device:
+
+    anarcat@curie:~$ sudo dd if=/dev/sdb of=/dev/sdc bs=4M status=progress conv=fdatasync
+    499944259584 octets (500 GB, 466 GiB) copiés, 1713 s, 292 MB/s
+    119235+1 enregistrements lus
+    119235+1 enregistrements écrits
+    500107862016 octets (500 GB, 466 GiB) copiés, 1719,93 s, 291 MB/s
+
+... while both were over USB, whoohoo, 300MB/s!
+
+TODO: Next step is to make a benchmark of LVM vs ZFS, since we have
+(in theory) the same hardware for both now (although the LVM copy is
+lagging behind the ZFS one, naturally).
+
 # Remaining issues
 
 TODO: move send/receive backups to offsite host
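As a sanity check on the numbers in that patch, dd's reported rate can be recomputed from the byte count and elapsed time it printed (dd counts decimal megabytes):

```shell
# 500107862016 bytes over 1719.93 s, in decimal MB/s as dd reports it
awk 'BEGIN { printf "%.0f MB/s\n", 500107862016 / 1719.93 / 1000000 }'
```

That matches the 291 MB/s figure in the (French-locale) dd output above.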

i3.conf: switch to fira, bumblebee, fix tray output
diff --git a/software/desktop/i3.conf b/software/desktop/i3.conf
index 71d66751..3dbd5d08 100644
--- a/software/desktop/i3.conf
+++ b/software/desktop/i3.conf
@@ -52,7 +52,7 @@ set $mod Mod4
 # is used in the bar {} block below.
 # This font is widely installed, provides lots of unicode glyphs, right-to-left
 # text rendering and scalability on retina/hidpi displays (thanks to pango).
-font pango:DejaVu Sans Mono 10
+font pango:Fira mono 10
 # Before i3 v4.8, we used to recommend this one as the default:
 # font -misc-fixed-medium-r-normal--13-120-75-75-C-70-iso10646-1
 # The font above is very space-efficient, that is, it looks good, sharp and
@@ -374,13 +374,20 @@ client.background       $black
 # Start i3bar to display a workspace bar (plus the system information i3status
 # finds out, if available)
 bar {
-        status_command py3status
+        # pango-list can help finding fonts here
+        font pango:FontAwesome, Fira mono 10
+        # window sound cpu memory load date
+        status_command bumblebee-status --iconset awesome-fonts
+        #status_command py3status
         position top
         # obey Fitt's law, ie. reduce the empty space
         tray_padding 0
+        # show tray, and on primary workspace
+        tray_output primary
         # colors are documented here:
         # https://i3wm.org/docs/userguide.html#_colors
-        # there's also some colors in ~/.config/i3status/config
+        # that reuses the colors set above, and might not actually
+        # affect the status bar for anything other than i3status
         colors {
               background $black
               statusline $white

another note
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index be2a5a2e..89fc0ccf 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -182,11 +182,14 @@ autopkgtest:
       sudo qemu-img create -f qcow2 -o backing_file=/srv/sbuild/qemu/unstable-autopkgtest-amd64.img,backing_fmt=qcow2  /var/lib/libvirt/images/unstable-autopkgtest-amd64.img 10G
       sudo chown qemu-libvirt '/var/lib/libvirt/images/unstable-autopkgtest-amd64.img'
 
-Then this VM can be adopted fairly normally in virt-manager. One twist
-I found is that the "normal" networking doesn't seem to work anymore,
-possibly because I messed it up with vagrant. Using the bridge doesn't
-work either out of the box, but that can be fixed with the following
-`sysctl` changes:
+Then this VM can be adopted fairly normally in virt-manager. Note that
+it's possible that you can set that up through the libvirt XML as
+well, but I haven't quite figured it out.
+
+One twist I found is that the "normal" networking doesn't seem to work
+anymore, possibly because I messed it up with vagrant. Using the
+bridge doesn't work either out of the box, but that can be fixed with
+the following `sysctl` changes:
 
     net.bridge.bridge-nf-call-ip6tables=0
     net.bridge.bridge-nf-call-iptables=0

figured out unification
diff --git a/blog/2022-04-27-sbuild-qemu.md b/blog/2022-04-27-sbuild-qemu.md
index b5cd6290..be2a5a2e 100644
--- a/blog/2022-04-27-sbuild-qemu.md
+++ b/blog/2022-04-27-sbuild-qemu.md
@@ -169,6 +169,56 @@ The `nc` socket interface is ... not great, but it works well
 enough. And you can probably fire up an SSHd to get a better shell if
 you feel like it.
 
+## Unification with libvirt
+
+Those images created by autopkgtest can actually be used by libvirt to
+boot real, [fully operational battle stations](https://www.youtube.com/watch?v=v5lDKjA_7I0), sorry, virtual
+machines. But it needs some tweaking.
+
+First, we need a snapshot image to work with, because we don't want
+libvirt to work directly on the pristine images created by
+autopkgtest:
+
+      sudo qemu-img create -f qcow2 -o backing_file=/srv/sbuild/qemu/unstable-autopkgtest-amd64.img,backing_fmt=qcow2  /var/lib/libvirt/images/unstable-autopkgtest-amd64.img 10G
+      sudo chown qemu-libvirt '/var/lib/libvirt/images/unstable-autopkgtest-amd64.img'
+
+Then this VM can be adopted fairly normally in virt-manager. One twist
+I found is that the "normal" networking doesn't seem to work anymore,
+possibly because I messed it up with vagrant. Using the bridge doesn't
+work either out of the box, but that can be fixed with the following
+`sysctl` changes:
+
+    net.bridge.bridge-nf-call-ip6tables=0
+    net.bridge.bridge-nf-call-iptables=0
+    net.bridge.bridge-nf-call-arptables=0
+
+That trick was found in [this good libvirt networking guide](https://jamielinux.com/docs/libvirt-networking-handbook/bridged-network.html#initial-steps).
+
+Finally, networking should work transparently inside the VM now. To
+share files, autopkgtest expects a 9p filesystem called
+`sbuild-qemu`. It might be difficult to get it just right in
+virt-manager, so here's the XML:
+
+    <filesystem type="mount" accessmode="passthrough">
+      <source dir="/home/anarcat/dist"/>
+      <target dir="sbuild-qemu"/>
+      <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
+    </filesystem>
+
+The above shares the `/home/anarcat/dist` folder with the VM. Inside
+the VM, it will be mounted because there's this `/etc/fstab` line:
+
+    sbuild-qemu /shared 9p trans=virtio,version=9p2000.L,auto,nofail 0 0
+
+By hand, that would be:
+
+    mount -t 9p -o trans=virtio,version=9p2000.L sbuild-qemu /shared
+
+I probably forgot something else important here, but surely I will
+remember to put it back here when I do.
+
+Note that this at least partially overlaps with [[services/hosting]].
+
 # Nitty-gritty details no one cares about
 
 ## Fixing hang in sbuild cleanup
diff --git a/services/hosting.mdwn b/services/hosting.mdwn
index 9aa93349..cd0b367c 100644
--- a/services/hosting.mdwn
+++ b/services/hosting.mdwn
@@ -220,6 +220,15 @@ will show the right MAC:
 And obviously, connecting to the console and running `ip a` will show
 the right IP address, see below for console usage.
 
+Note that netfilter might be firewalling the bridge. To disable, use:
+
+    sysctl net.bridge.bridge-nf-call-ip6tables=0
+    sysctl net.bridge.bridge-nf-call-iptables=0
+    sysctl net.bridge.bridge-nf-call-arptables=0
+
+See also [[the sbuild / qemu blog post|blog/2022-04-27-sbuild-qemu]]
+for details on how to integrate sbuild images with libvirt.
+
 Maintenance
 -----------
 
@@ -268,6 +277,8 @@ References
 
  * [libvirt handbook bridge configuration](https://jamielinux.com/docs/libvirt-networking-handbook/bridged-network.html)
  * [libvirt wiki networking configuration](https://wiki.libvirt.org/page/Networking#Creating_network_initscripts)
+ * a good [libvirt networking handbook](https://jamielinux.com/docs/libvirt-networking-handbook/)
+ * [Arch Linux wiki page](https://wiki.archlinux.org/title/Libvirt)
  * [Debian wiki KVM reference](https://wiki.debian.org/KVM) - also includes tuning options for
    disks, CPU, I/O
  * [nixCraft guide](https://www.cyberciti.biz/faq/install-kvm-server-debian-linux-9-headless-server/) - which gave me the `virt-builder` shortcut
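A side note on the `sysctl` changes in that patch: they do not survive a reboot on their own. A sketch of making them persistent, writing to the current directory for illustration (the real file would go in `/etc/sysctl.d/` and be picked up by `sysctl --system`; the `90-` prefix and file name are arbitrary choices):

```shell
# Generate a sysctl drop-in with the three bridge-netfilter overrides
cat > 90-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
EOF
```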

more matrix edits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 58bf5154..82f30502 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -20,6 +20,22 @@ jump to a section that interest you or, more likely, the
 
 [[!toc levels=2]]
 
+# Introduction to Matrix
+
+Matrix is an "open standard for interoperable, decentralised,
+real-time communication over IP. It can be used to power Instant
+Messaging, VoIP/WebRTC signalling, Internet of Things communication -
+or anywhere you need a standard HTTP API for publishing and
+subscribing to data whilst tracking the conversation history".
+
+It's also (when [compared with XMPP](https://matrix.org/faq/#what-is-the-difference-between-matrix-and-xmpp%3F)) "an eventually consistent
+global JSON database with an HTTP API and pubsub semantics - whilst
+XMPP can be thought of as a message passing protocol."
+
+TODO: expand with
+https://matrix.org/faq/#what-is-the-current-project-status
+TODO: watch out for dupes with the numbers in conclusion
+
 # Security and privacy
 
 I have some concerns about the security promises of Matrix. It's
@@ -302,13 +318,19 @@ rejoin the room from another server. This is why spam is such a
 problem in Email, and why IRC networks have stopped federating ages
 ago (see [the IRC history](https://en.wikipedia.org/wiki/Internet_Relay_Chat#History) for that fascinating story).
 
+## The mjolnir bot
+
 The [mjolnir moderation bot](https://github.com/matrix-org/mjolnir) is designed to help with some of those
 things. It can kick and ban users, redact all of a user's message (as
 opposed to one by one), all of this across multiple rooms. It can also
 subscribe to a federated block list published by `matrix.org` to block
-known abusers (users or servers). It's suggested by Matrix people to
-make the bot admin of your channels, because you can't take back admin
-from a user once given.
+known abusers (users or servers). Bans are [pretty flexible](https://github.com/matrix-org/mjolnir/blob/main/docs/moderators.md#bans) and
+can operate at the user, room, or server level.
+
+Matrix people suggest making the bot admin of your channels, because
+you can't take back admin from a user once given.
+
+## The command-line tool
 
 This is based on an [admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) built into Synapse. There's also a
 [new command line tool](https://git.fout.re/pi/matrixadminhelpers) designed to do things like:
@@ -320,6 +342,15 @@ This is based on an [admin API](https://matrix-org.github.io/synapse/latest/usag
 > * purge history of theses rooms
 > * shutdown rooms
 
+## Rate limiting
+
+Synapse has pretty good [built-in rate-limiting](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L879-L996) which blocks
+repeated login, registration, joining, or messaging attempts. It may
+also end up throttling servers on the federation based on those
+settings.
+
+## Fundamental federation problems
+
 Because users joining a room may come from another server, room
 moderators are at the mercy of the registration and moderation
 policies of those servers. Matrix is like IRC's `+R` mode ("only
@@ -343,13 +374,22 @@ federation, when you bridge a room with another network, you inherit
 all the problems from *that* network and the bridge is unlikely to
 have as many tools as the original network's API to control abuse...
 
-Synapse has pretty good [built-in rate-limiting](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L879-L996) which blocks
-repeated login, registration, joining, or messaging attempts. It may
-also end up throttling servers on the federation based on those
-settings.
+## Room admins
+
+Matrix, in particular, has the problem that room administrators (who
+have the power to redact messages, ban users, and promote other users)
+are bound to their Matrix ID which is, in turn, bound to their home
+servers. This implies that a home server administrator could (1)
+impersonate a given user and (2) use that to hijack the room. So in
+practice, the home server is the trust anchor for rooms, not the user
+themselves.
+
+That said, if server B's administrator hijacks user `joe` on server B,
+they will hijack that room *on that specific server*. This will not
+(necessarily) affect users on the other servers.
 
-TODO: you can use the admin API to impersonate a room admin? YES! see also
-the other TODO below
+TODO: so what happens here, a fork? how does Matrix resolve this?
+TODO: why isn't this bound to E2E credentials?
 
 # Availability
 
@@ -423,13 +463,6 @@ user from server B joins, the room will be replicated on server B as
 well. If server A fails, server B will keep relaying traffic to
 connected users and servers.
 
-TODO: how does admin work again? can server B hijack a room on server
-A? I had noted "admin from serverB join and belong as admin. each
-server needs to have admin." answer: no. server B would need to
-impersonate a room admin on server B to hijack the room. they can
-basically fork the room to modify it, but that only affects users on
-that server.
-
 a room is therefore not fundamentally addressed with the above alias,
 instead, it has an internal Matrix ID, which is basically a random
 string. It has a server name attached to it, but that was made just to
@@ -458,10 +491,12 @@ notice.) The point here is to have a way to pre-populate a list of
 rooms on the server, even if they are not necessarily present on that
 server directly, in case another server that has connected users hosts it.
 
-Rooms, by default, live forever, even after the last user quits. There
-is a [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) but it doesn't have a GUI for it yet. That is
-part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows a room
-admin to close a room, with a message and a pointer to another room.
+Rooms, by default, live forever, even after the last user
+quits. There's an [admin API to delete rooms](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version) and a [tombstone
+event](https://spec.matrix.org/v1.2/client-server-api/#events-17) to redirect to another one, but neither has a GUI yet. The
+latter is part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows
+a room admin to close a room, with a message and a pointer to another
+room.
 
 ## Home server
 
@@ -501,15 +536,14 @@ explicitly configured for your domain. You can't just put:
 `@you:example.com` as a Matrix ID. That's because Matrix doesn't
 support "virtual hosting" and you'd still be connecting to rooms and
 people with your `matrix.org` identity, not `example.com` as you would
-normally expect.
+normally expect. This is also why you cannot [rename your home
+server](https://matrix.org/faq/#why-can't-i-rename-my-homeserver%3F) after the fact.
 
 That specification is what allows servers to find each other. Clients,
 on the other hand, use the [client-server discovery API](https://spec.matrix.org/v1.2/client-server-api/#server-discovery): this is
 what allows a given client to find your home server when you type your
 Matrix ID on login.
 
-TODO: https://matrix.org/faq/#why-can't-i-rename-my-homeserver%3F
-
 TODO: review FAQ
 
 # Performance
@@ -524,10 +558,10 @@ servers.
 
 There are other promising home servers implementations from a
 performance standpoint ([dendrite](https://github.com/matrix-org/dendrite), golang, [entered beta in late
-2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit), Rust, beta), but none of those are
-feature-complete so there's a tradeoff to be made there.  Synapse is
-also adding a lot of feature fast, so it's unlikely those other
-servers will ever catch up.
+2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit), Rust, beta; [others](https://matrix.org/faq/#can-i-write-a-matrix-homeserver%3F)), but none of those
+are feature-complete so there's a tradeoff to be made there.  Synapse
+is also adding a lot of features fast, so it's an open question whether
+the others will ever catch up.
 
 Matrix can feel slow sometimes. For example, joining the "Matrix HQ"
 room in Element (from matrix.debian.social) takes a few *minutes* and
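For reference, the rate-limiting settings mentioned in that patch live in Synapse's `homeserver.yaml`. A hedged sketch (the option names are real; the values are merely the sample-config defaults as I recall them, not recommendations):

```yaml
# Synapse rate-limiting knobs; per_second is a sustained rate,
# burst_count the bucket size before throttling kicks in
rc_message:
  per_second: 0.2
  burst_count: 10
rc_login:
  address:
    per_second: 0.003
    burst_count: 5
rc_joins:
  local:
    per_second: 0.1
    burst_count: 10
  remote:
    per_second: 0.01
    burst_count: 10
```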

edits...
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 291228bd..3a518f8a 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -760,32 +760,16 @@ Main pool creation is:
 
 ## first sync
 
-sanoid... syncoid... probably better to do it by hand, but this is
-easier. slooow everything feels like it has ~30-50ms latency extra:
+I used syncoid to copy all pools over to the external device. syncoid
+is a thing that's part of the [sanoid project](https://github.com/jimsalterjrs/sanoid) which is
+specifically designed to sync snapshots between pools, typically over
+SSH links, but it can also operate locally.
 
-    anarcat@curie:sanoid$ LANG=C top -b  -n 1 | head -20
-    top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
-    Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
-    %Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
-    MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
-    MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
+The `sanoid` command had a `--readonly` argument to simulate changes,
+but `syncoid` didn't so I [tried to fix that with an upstream PR](https://github.com/jimsalterjrs/sanoid/pull/748).
 
-        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
-         70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
-    4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
-    3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
-    3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
-    3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
-        350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
-        351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
-    3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
-    4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
-    4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
-        352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
-        353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
-        354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
-
-The full first sync was:
+It seems it would be better to do this by hand, but this was much 
+easier. The full first sync was:
 
     root@curie:/home/anarcat# ./bin/syncoid -r  bpool bpool-tubman
 
@@ -850,8 +834,37 @@ The full first sync was:
 
 Funny how the `CRITICAL ERROR` doesn't actually stop `syncoid` and it
 just carries on merrily doing when it's telling you it's "cowardly
-refusing to destroy your existing target"... Maybe that's [my pull
-request that broke something though](https://github.com/jimsalterjrs/sanoid/pull/748).
+refusing to destroy your existing target"... Maybe that's because my pull
+request broke something though...
+
+During the transfer, the computer was very sluggish: everything feels
+like it has ~30-50ms latency extra:
+
+    anarcat@curie:sanoid$ LANG=C top -b  -n 1 | head -20
+    top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
+    Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
+    %Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
+    MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
+    MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
+
+        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
+         70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
+    4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
+    3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
+    3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
+    3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
+        350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
+        351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
+    3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
+    4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
+    4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
+        352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
+
+I wonder how much of that is due to syncoid, particularly because I
+often saw `mbuffer` and `pv` in there, which are not strictly necessary
+for that kind of operation, as far as I understand.
 
 Once that's done, export the pools to disconnect the drive:
 

fix some typos, seems like i need a spellcheck
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 5789fbb1..58bf5154 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -459,7 +459,7 @@ rooms on the server, even if they are not necessarily present on that
 server directly, in case another server that has connected users hosts it.
 
 Rooms, by default, live forever, even after the last user quits. There
-is a[tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) but it doesn't have a GUI for it yet. That is
+is a [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) but it doesn't have a GUI for it yet. That is
 part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows a room
 admin to close a room, with a message and a pointer to another room.
 
@@ -548,7 +548,7 @@ that's a first problem.
 But even in conversations, I "feel" people don't immediately respond
 as fast. In fact, an interesting double-blind experiment that could be
 made would be to have people guess whether the person they are talking
-to is on IRC or Matrix. My theory would be thatq people could notice
+to is on IRC or Matrix. My theory would be that people could notice
 that Matrix users are slower, if only because of the TCP round-trip
 time each message has to take.
 
@@ -703,7 +703,7 @@ Just taking the [latest weekly Matrix report](https://matrix.org/blog/2022/05/27
 *three* new MSCs proposed, just last week! There's even a graph that
 shows the number of MSCs is progressing steadily, at 600+ proposals
 total, with the majority (300+) "new". I would guess the "merged" ones
-are at about 150. That's a lot of of text. 
+are at about 150. That's a lot of text.
 
 That includes kind of useless stuff like [3D worlds](https://github.com/matrix-org/matrix-spec-proposals/pull/3815) which,
 frankly, I don't think you should be working on when you have such
@@ -859,6 +859,8 @@ dead. Just like IRC is dead now.
 I wonder which path Matrix will take. Could it liberate us from those
 vicious cycles?
 
+TODO: spellcheck
+
 [[!tag draft]]
 
 [^1]: [According to Wikipedia](https://en.wikipedia.org/wiki/Internet_Relay_Chat#Modern_IRC), there are currently about 500

incorporate comments from Thib, thanks!
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 81d46acf..5789fbb1 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -43,7 +43,8 @@ is a separate, if valid, concern.)  Obviously, an hostile server
 normally tightly controlled. So, if you trust your IRC operators, you
 should be fairly safe. Obviously, clients *can* (and often do, even if
 [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all messages, but this is generally not
-the default. [Irssi](https://irssi.org/), for example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
+the default. [Irssi](https://irssi.org/), for example, does [not log by
+default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412). Some IRC bouncers *do* log to disk, however...
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
@@ -51,7 +52,8 @@ database. Then it will transmit that message to all clients connected
 to that server and room, and to all other servers that have clients
 connected to that room. Those remote servers, in turn, will keep a
 copy of that message and all its metadata in their own database, by
-default forever.
+default forever. In encrypted rooms, thankfully, the messages
+themselves are encrypted, but not their metadata.
 
 There is a mechanism to expire entries in Synapse, but it is [not
 enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one should generally assume that a message
@@ -101,7 +103,7 @@ log retention policies well defined for installed packages, and those
 
 ## Matrix.org privacy policy
 
-When I first looked at Matrix, a long time ago, Element.io was called
+When I first looked at Matrix, five years ago, Element.io was called
 [Riot.im](https://riot.im/) and had a [rather dubious privacy policy](https://web.archive.org/web/20170317115535/https://riot.im/privacy):
 
 > We currently use cookies to support our use of Google Analytics on
@@ -116,7 +118,7 @@ When I first looked at Matrix, a long time ago, Element.io was called
 
 When I asked Matrix people about why they were using Google analytics,
 they explained this was for development purposes and they were aiming
-for velocity at this point, not privacy.
+for velocity at the time, not privacy.
 
 They also included a "free to snitch" clause:
 
@@ -153,7 +155,7 @@ Notable points of the new privacy policies:
  * [Element 2.2.1](https://element.io/privacy): mentions many more third parties (Twilio,
    Stripe, [Quaderno](https://www.quaderno.io/), LinkedIn, Twitter, Google, [Outplay](https://www.outplayhq.com/),
    [PipeDrive](https://www.pipedrive.com/), [HubSpot](https://www.hubspot.com/), [Posthog](https://posthog.com/), Sentry, and [Matomo](https://matomo.org/)
-   (phew!)
+   (phew!), used when you are paying Matrix.org for hosting
 
 I'm not super happy with all the trackers they have on the Element
 platform, but then again you don't have to use that service. Your
@@ -318,23 +320,16 @@ This is based on an [admin API](https://matrix-org.github.io/synapse/latest/usag
 > * purge history of theses rooms
 > * shutdown rooms
 
-Matrix doesn't have IP-specific moderation mechanisms to block users
-from (say) Tor or known VPNs to limit abuse. Furthermore, because
-users joining a room may come from another server, room moderators are
-at the mercy of the registration and moderation policies of those
-servers. Matrix is like IRC's `+R` mode ("only registered users can
-join") by default, except that anyone can register their own
-homeserver, which makes this limited. There's no API to block a
-specific homeserver, so this must be done at the system
-(e.g. netfilter / firewall) level, which, again, might not be obvious
-to a novice.
-
-Furthermore, it can be tricky to block a hostile homeserver if they
-are ready to move around the IP space. It would be better to be *also*
-able to block server *names* altogether, and have those tools
-available to room admins, just like IRC channel admins can block users
-based on their "netmask" (basically their reverse IP address lookup)
-or IP address.
+Because users joining a room may come from another server, room
+moderators are at the mercy of the registration and moderation
+policies of those servers. Matrix is like IRC's `+R` mode ("only
+registered users can join") by default, except that anyone can
+register their own homeserver, which makes this limited.
+
+Server admins can block IP addresses and home servers, but those
+tools are not currently available to room admins. It would be nice to
+give room admins that capability, just like IRC channel admins can
+block users based on their IP address.
 
 Matrix has the concept of guest accounts, but it is not used very
 much, and virtually no client supports it. This contrasts with the way
@@ -353,7 +348,7 @@ repeated login, registration, joining, or messaging attempts. It may
 also end up throttling servers on the federation based on those
 settings.
 
-TODO: you can use the admin API to impersonate a room admin? see also
+TODO: you can use the admin API to impersonate a room admin? YES! see also
 the other TODO below
 
 # Availability
@@ -430,7 +425,10 @@ connected users and servers.
 
 TODO: how does admin work again? can server B hijack a room on server
 A? I had noted "admin from serverB join and belong as admin. each
-server needs to have admin."
+server needs to have admin." answer: no. server B would need to
+impersonate a room admin on server B to hijack the room. they can
+basically fork the room to modify it, but that only affects users on
+that server.
 
 a room is therefore not fundamentally addressed with the above alias,
 instead ,it has a internal Matrix ID, which basically a random
@@ -440,17 +438,19 @@ avoid collisions.
 This can get a little confusing. For example, the `#fractal:gnome.org`
 room is an alias on the `gnome.org` server, but the room ID is
 `!hwiGbsdSTZIwSRfybq:matrix.org`. That's because the room was created
-on `matrix.org`, but admins are on `gnome.org` now.
+on `matrix.org`, but the preferred branding is `gnome.org` now.
 
 Discovering rooms can therefore be tricky: there *is* a per-server room
 directory, but Matrix.org people are trying to deprecate it in favor
 of "Spaces". Room directories were ripe for abuse: anyone can create a
-room, so anyone can show up in there. In contrast, a "Space" is
-basically a room that's an index of other rooms (including other
-spaces), so existing moderation and administration mechanism that work
-in rooms can (somewhat) work in spaces as well. This also allows rooms
-to work across federation, regardless on which server they were
-originally created.
+room, so anyone can show up in there. It's possible to restrict who
+can add aliases, but directories were still seen as too limited.
+
+In contrast, a "Space" is basically a room that's an index of other
+rooms (including other spaces), so existing moderation and
+administration mechanisms that work in rooms can (somewhat) work in
+spaces as well. This also allows rooms to work across federation,
+regardless of which server they were originally created on.
 
 New users can be added to a space or room [automatically](https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1378-L1388) in
 Synapse. (Existing users can be told about the space with a server
@@ -580,6 +580,10 @@ seem stalled at the time of writing. The Matrix people have also
 solving large, internet-scale routing problems Matrix is coming
 to. See also [this talk at FOSDEM 2022](https://www.youtube.com/watch?v=diwzQtGgxU8&list=PLl5dnxRMP1hW7HxlJiHSox02MK9_KluLH&index=19).
 
+Room join performance improvements are also coming down the pipeline,
+with [sliding sync](https://github.com/matrix-org/matrix-spec-proposals/blob/kegan/sync-v3/proposals/3575-sync.md), [lazy loading over federation](https://github.com/matrix-org/matrix-spec-proposals/pull/2775), and [fast
+room joins](https://github.com/matrix-org/synapse/milestone/6). So there's hope there as well.
+
 # Usability
 
 ## Onboarding and workflow
@@ -594,9 +598,11 @@ great:
     <https://app.element.io/#/room%2F%23matrix-dev%3Amatrix.org> and
     then you need to register, aaargh
 
-As you might have guessed by now, there is a [proposed
-specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to solve this, but web browsers need to adopt it
-as well, so that's far from actually being solved.
+As you might have guessed by now, there is a [specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to
+solve this, but web browsers need to adopt it as well, so that's far
+from actually being solved. At least browsers generally know about the
+`matrix:` scheme; it's just not exactly clear what they should do with
+it.
 
 In general, when compared with tools like Signal or Whatsapp, Matrix
 doesn't fare as well in terms of user discovery. I probably have some
@@ -688,6 +694,11 @@ literally *hundreds* of MSCs that are flying around. It's hard to tell
 what's been adopted and what hasn't, and even harder to figure out if
 *your* specific client has implemented it.
 
+As with a lot of projects, one answer is "rewrite it in Rust": the
+Matrix people are working to implement a lot of those specifications
+in a [matrix-rust-sdk](https://github.com/matrix-org/matrix-rust-sdk) library that's designed to abstract the
+implementation details away from users. But it's a lot of work!
+
 Just taking the [latest weekly Matrix report](https://matrix.org/blog/2022/05/27/this-week-in-matrix-2022-05-27#dept-of-spec-), you find that
 *three* new MSCs proposed, just last week! There's even a graph that
 shows the number of MSCs is progressing steadily, at 600+ proposals

more todos
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index fa35eb3c..291228bd 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -858,14 +858,14 @@ Once that's done, export the pools to disconnect the drive:
     zpool export bpool-tubman
     zpool export rpool-tubman
 
-TODO: move to offsite host
-TODO: setup cron job (or timer?)
+# Remaining issues
+
+TODO: move send/receive backups to offsite host
+TODO: setup backup cron job (or timer?)
 
 TODO: consider alternatives to syncoid, considering the code issues
 (large functions, lots of `system` calls without arrays...)
 
-# Remaining issues
-
 TODO: swap. how do we do it?
 
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
@@ -873,11 +873,18 @@ TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 TODO: ship my on .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
 here.
 
-TODO: send/recv, automated snapshots
-
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+TODO: debugging tools:
+
+    tail -f /proc/spl/kstat/zfs/dbgmsg
+    zpool iostat -l 1   # request latencies, every second; other flags:
+    # -q: queue statistics
+    # -r: request size histograms, per vdev
+    # -w: total latency histograms
+    # -v: verbose, include individual vdevs
+
 TODO: review this blog post
 https://github.com/djacu/nixos-on-zfs/blob/main/blog/2022-03-24.md
 which seems to explain a bit the layout behind the installer

incorporate lots of edits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index 32d555fd..81d46acf 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -1,35 +1,49 @@
-I have some concerns about Matrix (the protocol, not the apparently
-horrible movie that came out recently, although I do have concerns
-about that as well). I've been watching the project for a long time,
-and it seems more and more like a truly promising alternative to many
-protocols like IRC, XMPP, and Signal.
+I have some concerns about Matrix (the protocol, not the movie that
+came out recently, although I do have concerns about that as
+well). I've been watching the project for a long time, and it seems
+more and more like a promising alternative to many protocols like
+IRC, XMPP, and Signal.
 
 This review will sound a bit negative, because it focuses on the
 concerns I have. I am the operator of an IRC network and people keep
 asking me to bridge it with Matrix. I have myself considered a few
-times the idea of just giving up and converting it. This space
-documents why neither of those have happened yet.
+times the idea of just giving up and converting to Matrix. This space
+is a living document exploring my research into that problem.
+
+This article was written over the course of the last three months (but
+I have been watching the Matrix project for years; my logs seem to say
+2016 at least), and it is rather long. It will likely take you half an
+hour to read, so [copy this over to your ebook reader](https://gitlab.com/anarcat/wallabako), your
+tablet, dead trees, and sit back comfortably. Or, alternatively, just
+jump to a section that interests you or, more likely, the
+[conclusion](#conclusion).
 
 [[!toc levels=2]]
 
 # Security and privacy
 
+I have some concerns about the security promises of Matrix. It's
+advertised as "secure" with "E2E [end-to-end] encryption", but how
+does it actually work?
+
 ## Data retention defaults
 
-One of my main concerns with Matrix is data retention.
-
-In IRC, servers don't actually keep messages all that long: they pass
-them along to other servers and clients basically as fast as they can,
-only keep them from memory, and move on to the next message. There are
-no concerns about data retention on messages (and their metadata)
-other than the network layer. (I'm ignoring the issues with user
-registration here, which is a separate, if valid, concern.)
-Obviously, an hostile server *could* start everything it gets of
-course, but typically IRC federations are tightly controlled and, if
-you trust your IRC server, you should be fairly safe. Obviously,
-clients *can* (and often do, even if [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all
-messages, but this is typically not the default. [Irssi](https://irssi.org/), for
-example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
+One of my main concerns with Matrix is data retention, which is a key
+part of security in a threat model where (for example) a hostile
+state actor wants to surveil your communications and can seize your
+devices.
+
+On IRC, servers don't actually keep messages all that long: they pass
+them along to other servers and clients as fast as they can, only keep
+them in memory, and move on to the next message. There are no concerns
+about data retention on messages (and their metadata) other than the
+network layer. (I'm ignoring the issues with user registration, which
+is a separate, if valid, concern.)  Obviously, a hostile server
+*could* log everything passing through it, but IRC federations are
+normally tightly controlled. So, if you trust your IRC operators, you
+should be fairly safe. Obviously, clients *can* (and often do, even if
+[OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all messages, but this is generally not
+the default. [Irssi](https://irssi.org/), for example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
@@ -37,30 +51,28 @@ database. Then it will transmit that message to all clients connected
 to that server and room, and to all other servers that have clients
 connected to that room. Those remote servers, in turn, will keep a
 copy of that message and all its metadata in their own database, by
-default basically forever.
+default forever.
 
-Indeed, there is a mechanism to expire entries in Synapse, but it is
-[not enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one can safely assume that a message
+There is a mechanism to expire entries in Synapse, but it is [not
+enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one should generally assume that a message
 sent on Matrix is never expired.
 
 ## GDPR in the federation
 
 But even if that setting was enabled by default, how do you control
-it? This is a fundamental problem of the federation: if anyone is
-allowed to join a given room (which is basically the default
-configuration of any room), anyone will log (deliberately or
-inadvertently) all content and metadata in that room.
+it? This is a fundamental problem of the federation: if any user is
+allowed to join a room (which is the default), those users' servers
+will log all content and metadata from that room. That includes
+private, one-on-one conversations, since those are essentially rooms
+as well.
 
 In the context of the GDPR, this is really tricky: who's the
-responsible party (know as the "data controller") here? It's basically
-any yahoo who fires up a home server and joins a room. Good luck
-enforcing the GDPR on those folks. In the brave new "peer-to-peer"
-world that Matrix is heading towards, it's, also, basically any client
-whatsoever, which also brings its own set of problems. 
+responsible party (known as the "data controller") here? It's
+basically any yahoo who fires up a home server and joins a room.
 
 In a federated network, one has to wonder whether GDPR enforcement is
-even possible at all. Assuming you want to enforce your right to be
-forgotten in a given room, you would have to:
+even possible at all. But in Matrix in particular, if you want to
+enforce your right to be forgotten in a given room, you would have to:
 
  1. enumerate all the users that ever joined the room while you were
     there
@@ -70,7 +82,11 @@ forgotten in a given room, you would have to:
 I recognize this is a hard problem to solve while still keeping an
 open ecosystem. But I believe that Matrix should have much stricter
 defaults towards data retention than right now. Message expiry should
-be enforce *by default*. 
+be enforced *by default*, for example.
+
+Also keep in mind that, in the brave new [peer-to-peer](https://github.com/matrix-org/pinecone) world that
+Matrix is heading towards, the boundary between server and client is
+likely to be fuzzier, which would make applying the GDPR even more difficult.
 
 In fact, maybe Synapse should be designed so that there's no
 configurable flag to turn off data retention. A bit like how most
@@ -80,27 +96,47 @@ this was designed to keep hard drives from filling up, but it also has
 the added benefit of limiting the amount of personal information kept
 on disk in this modern day. (Arguably, syslog doesn't rotate logs on
 its own, but, say, Debian GNU/Linux, as an installed system, does have
-log retention policies well defined for installed packages. And "no
-expiry" is basically a bug.)
+log retention policies well defined for installed packages, and those
+[can be discussed](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=759382). And "no expiry" is definitely a bug.
 
 ## Matrix.org privacy policy
 
 When I first looked at Matrix, a long time ago, Element.io was called
-[Vector.im](https://vector.im/) and had a rather dubious privacy policy. I
-unfortunately cannot find a copy of it now on the internet archive,
-but it openly announced it was collecting (Google!) analytics on its
-users. When I asked Matrix people about this, they explained this was
-for development purposes and they were aiming for velocity at this
-point, not privacy. I am paraphrasing: I am sorry I lost track of that
-conversation that happened so long ago, you will have just to trust me
-on this.
-
-I think that, like the current retention policies, this set a bad
-precedent. Thankfully, since that policy was drafted, the GDPR
-happened and it seems like both the [Element.io privacy policy](https://element.io/privacy) and
-the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been somewhat improved.
-
-Notable points of the privacy policies:
+[Riot.im](https://riot.im/) and had a [rather dubious privacy policy](https://web.archive.org/web/20170317115535/https://riot.im/privacy):
+
+> We currently use cookies to support our use of Google Analytics on
+> the Website and Service. Google Analytics collects information about
+> how you use the Website and Service.
+>
+> [...]
+>
+> This helps us to provide you with a good experience when you
+> browse our Website and use our Service and also allows us to improve
+> our Website and our Service. 
+
+When I asked Matrix people about why they were using Google analytics,
+they explained this was for development purposes and they were aiming
+for velocity at this point, not privacy.
+
+They also included a "free to snitch" clause:
+
+> If we are or believe that we are under a duty to disclose or share
+> your personal data, we will do so in order to comply with any legal
+> obligation, the instructions or requests of a governmental authority
+> or regulator, including those outside of the UK.
+
+Those are really *broad* terms.
+
+Like the current retention policies, such user tracking and
+... "liberal" collaboration practices with the state set a bad
+precedent for other home servers.
+
+Thankfully, since the above policy was published (2017), the GDPR was
+"implemented" ([2018](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation)) and it seems like both the [Element.io
+privacy policy](https://element.io/privacy) and the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been
+somewhat improved since then.
+
+Notable points of the new privacy policies:
 
  * [2.3.1.1](https://matrix.org/legal/privacy-notice#2311-federation): the "federation" section actually outlines that
    "*Federated homeservers and Matrix clients which respect the Matrix
@@ -112,17 +148,17 @@ Notable points of the privacy policies:
    `matrix.org` service
  * [2.10](https://matrix.org/legal/privacy-notice#210-who-else-has-access-to-my-data): Upcloud, Mythic Beast, Amazon, and CloudFlare possibly
    have access to your data (it's nice to at least mention this in the

(Diff truncated)
monitoring
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 6f3e2b19..fa35eb3c 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -886,6 +886,11 @@ TODO: bpool and rpool are both pools and datasets. that's pretty
 confusing, but also very useful because it allows for pool-wide
 recursive snapshots, which are used for the backup system
 
+TODO: ZFS monitoring?
+https://pieterbakker.com/monitoring-zfs-with-zed/ mentions
+email... something deployed on tubman, probably needs deploy or at
+least testing on curie as well.
+
 ## fio improvements
 
 I really want to improve my experience with `fio`. Right now, I'm just

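(A side note on the ZED monitoring TODO in the diff above: the email hookup it refers to boils down to a small configuration fragment. The keys below are from the stock `zed.rc` shipped with OpenZFS; the address and thresholds are placeholders for a sketch, not a tested setup.)

```shell
# /etc/zfs/zed.d/zed.rc: have ZED send mail on pool events
ZED_EMAIL_ADDR="root"            # where to send notifications
ZED_EMAIL_PROG="mail"            # any mail(1)-compatible program works
ZED_NOTIFY_INTERVAL_SECS=3600    # rate-limit repeated notifications
ZED_NOTIFY_VERBOSE=1             # also report successful scrubs/resilvers
```

ZED picks this up on restart (`systemctl restart zfs-zed`); the last knob is what makes "everything is fine" mails show up, which is useful to confirm the pipeline actually works before an error happens.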
some more TODOs
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 4185290e..6f3e2b19 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -878,6 +878,14 @@ TODO: send/recv, automated snapshots
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+TODO: review this blog post
+https://github.com/djacu/nixos-on-zfs/blob/main/blog/2022-03-24.md
+which seems to explain a bit the layout behind the installer
+
+TODO: bpool and rpool are both pools and datasets. that's pretty
+confusing, but also very useful because it allows for pool-wide
+recursive snapshots, which are used for the backup system
+
 ## fio improvements
 
 I really want to improve my experience with `fio`. Right now, I'm just

exported the pools, technically ready to move offsite
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 13c005a8..4185290e 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -853,7 +853,13 @@ just carries on merrily doing when it's telling you it's "cowardly
 refusing to destroy your existing target"... Maybe that's [my pull
 request that broke something though](https://github.com/jimsalterjrs/sanoid/pull/748).
 
-TODO: move to offsite host, setup cron job / timer?
+Once that's done, export the pools to disconnect the drive:
+
+    zpool export bpool-tubman
+    zpool export rpool-tubman
+
+TODO: move to offsite host
+TODO: setup cron job (or timer?)
 
 TODO: consider alternatives to syncoid, considering the code issues
 (large functions, lots of `system` calls without arrays...)

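(Another side note: the two TODOs in the diff above — moving the send/receive backups offsite, and a cron job or timer — could eventually be wired together along these lines. The `offsite_sync` name and the `DRY_RUN` knob are made up for this sketch, and the exact syncoid flags would need testing:)

```shell
# offsite_sync: a sketch (not the actual script) of the backup job the
# TODOs above call for. Pool names match the post; syncoid comes from
# the sanoid package. With DRY_RUN set, commands are printed instead of
# executed, so the logic can be previewed safely.
offsite_sync() {
    ${DRY_RUN:+echo} zpool import bpool-tubman &&
    ${DRY_RUN:+echo} zpool import rpool-tubman &&
    # replicate both pools recursively to the offsite copies
    ${DRY_RUN:+echo} syncoid --recursive bpool bpool-tubman &&
    ${DRY_RUN:+echo} syncoid --recursive rpool rpool-tubman &&
    # export so the drive can be unplugged and carried offsite
    ${DRY_RUN:+echo} zpool export bpool-tubman &&
    ${DRY_RUN:+echo} zpool export rpool-tubman
}
```

A cron entry or systemd timer would then just run `offsite_sync` whenever the offsite drive is plugged in.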
some more "benchmarks"
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 1d0427c8..13c005a8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -596,6 +596,87 @@ Another test was performed while in "rescue" mode but was ultimately
 lost. It's actually still in the old M.2 drive, but I cannot mount
 that device with the external USB controller I have right now.
 
+## Real world experience
+
+This section documents not synthetic benchmarks, but actual real-world
+workloads, comparing before and after I switched my workstation to
+ZFS.
+
+### Docker performance
+
+I had the feeling that running some git hook (which was firing a
+Docker container) was "slower" somehow. It seems that, at runtime, ZFS
+backends are significantly slower than their overlayfs/ext4 equivalents:
+
+    May 16 14:42:52 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87\x2dinit-merged.mount: Succeeded.
+    May 16 14:42:52 curie systemd[5161]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87\x2dinit-merged.mount: Succeeded.
+    May 16 14:42:52 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+    May 16 14:42:53 curie dockerd[1723]: time="2022-05-16T14:42:53.087219426-04:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10 pid=151170
+    May 16 14:42:53 curie systemd[1]: Started libcontainer container af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10.
+    May 16 14:42:54 curie systemd[1]: docker-af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10.scope: Succeeded.
+    May 16 14:42:54 curie dockerd[1723]: time="2022-05-16T14:42:54.047297800-04:00" level=info msg="shim disconnected" id=af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10
+    May 16 14:42:54 curie dockerd[998]: time="2022-05-16T14:42:54.051365015-04:00" level=info msg="ignoring event" container=af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
+    May 16 14:42:54 curie systemd[2444]: run-docker-netns-f5453c87c879.mount: Succeeded.
+    May 16 14:42:54 curie systemd[5161]: run-docker-netns-f5453c87c879.mount: Succeeded.
+    May 16 14:42:54 curie systemd[2444]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+    May 16 14:42:54 curie systemd[5161]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+    May 16 14:42:54 curie systemd[1]: run-docker-netns-f5453c87c879.mount: Succeeded.
+    May 16 14:42:54 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
+
+Translating this:
+
+ * container setup: ~1 second
+ * container runtime: ~1 second
+ * container teardown: ~1 second
+ * total runtime: 2-3 seconds
+
+Obviously, those timestamps are not quite accurate enough to make
+precise measurements...
+
+After I switched to ZFS:
+
+    mai 30 15:31:39 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf\x2dinit.mount: Succeeded. 
+    mai 30 15:31:39 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf\x2dinit.mount: Succeeded. 
+    mai 30 15:31:40 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
+    mai 30 15:31:40 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
+    mai 30 15:31:41 curie dockerd[3199]: time="2022-05-30T15:31:41.551403693-04:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 pid=141080 
+    mai 30 15:31:41 curie systemd[1]: run-docker-runtime\x2drunc-moby-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142-runc.ZVcjvl.mount: Succeeded. 
+    mai 30 15:31:41 curie systemd[5287]: run-docker-runtime\x2drunc-moby-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142-runc.ZVcjvl.mount: Succeeded. 
+    mai 30 15:31:41 curie systemd[1]: Started libcontainer container 42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142. 
+    mai 30 15:31:45 curie systemd[1]: docker-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142.scope: Succeeded. 
+    mai 30 15:31:45 curie dockerd[3199]: time="2022-05-30T15:31:45.883019128-04:00" level=info msg="shim disconnected" id=42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 
+    mai 30 15:31:45 curie dockerd[1726]: time="2022-05-30T15:31:45.883064491-04:00" level=info msg="ignoring event" container=42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 
+    mai 30 15:31:45 curie systemd[1]: run-docker-netns-e45f5cf5f465.mount: Succeeded. 
+    mai 30 15:31:45 curie systemd[5287]: run-docker-netns-e45f5cf5f465.mount: Succeeded. 
+    mai 30 15:31:45 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
+    mai 30 15:31:45 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded.
+
+That's double or triple the run time, from 2 seconds to 6
+seconds. Most of the time is spent in run time, inside the
+container. Here's the breakdown:
+
+ * container setup: ~2 seconds
+ * container run: ~4 seconds
+ * container teardown: ~1 second
+ * total run time: about ~6-7 seconds
+
+That's a two- to three-fold increase! Clearly something is going on
+here that I should tweak. It's possible that code path is less
+optimized in Docker. I also worry about podman, but apparently [it
+also supports ZFS backends](https://www.jwillikers.com/podman-with-btrfs-and-zfs). Possibly it would perform better, but
+at this stage I wouldn't have a good comparison: maybe it would have
+performed better on non-ZFS as well...
+
+### Interactivity
+
+While doing the offsite backups (below), the system became somewhat
+"sluggish". I felt everything was slow, and I estimate it introduced
+~50ms latency in any input device.
+
+Arguably, those input devices are all USB and the drive was connected
+through USB, but I suspect the ZFS drivers are not as well tuned with
+the scheduler as the regular filesystem drivers...
+
 # Recovery procedures
 
 For test purposes, I unmounted all systems during the procedure:

first sync done
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2201eeb6..1d0427c8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -618,6 +618,165 @@ then you mount the root filesystem and all the others:
     mount -t tmpfs tmpfs /mnt/run &&
     mkdir /mnt/run/lock
 
+# Offsite backup
+
+TODO: explain why I'm doing this, and how it works broadly.
+
+## Partitioning
+
+The above partitioning procedure used `sgdisk`, but I couldn't figure
+out how to do this with `sgdisk`, so this uses `sfdisk` to dump the
+partition table from the first disk to an external, identical drive:
+
+    sfdisk -d /dev/nvme0n1 | sfdisk --no-reread /dev/sda --force
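To double-check that the copy worked, one way (a sketch, assuming the drives really are identical in size) is to compare the two partition-table dumps while stripping the device names:

```shell
# Compare partition tables, removing the device paths so only the
# geometry, types, and UUIDs are diffed. No output means they match.
diff <(sfdisk -d /dev/nvme0n1 | sed 's|/dev/[a-z0-9]*||g') \
     <(sfdisk -d /dev/sda     | sed 's|/dev/[a-z0-9]*||g')
```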
+
+## Pool creation
+
+This is similar to the main pool creation, except we tweaked a few
+bits after changing the upstream procedure:
+
+    zpool create \
+            -o cachefile=/etc/zfs/zpool.cache \
+            -o ashift=12 -d \
+            -o feature@async_destroy=enabled \
+            -o feature@bookmarks=enabled \
+            -o feature@embedded_data=enabled \
+            -o feature@empty_bpobj=enabled \
+            -o feature@enabled_txg=enabled \
+            -o feature@extensible_dataset=enabled \
+            -o feature@filesystem_limits=enabled \
+            -o feature@hole_birth=enabled \
+            -o feature@large_blocks=enabled \
+            -o feature@lz4_compress=enabled \
+            -o feature@spacemap_histogram=enabled \
+            -o feature@zpool_checkpoint=enabled \
+            -O acltype=posixacl -O xattr=sa \
+            -O compression=lz4 \
+            -O devices=off \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/boot -R /mnt \
+            bpool-tubman /dev/sdb3
+
+The changes from the main boot pool are:
+
+ * [no unicode normalization](https://github.com/openzfs/openzfs-docs/pull/306)
+ * different device path (`sdb` used to be the M.2 device, it's now
+   `nvme0n1`)
+ * [reordered parameters](https://github.com/openzfs/openzfs-docs/pull/308)
+
+Main pool creation is:
+
+    zpool create \
+            -o ashift=12 \
+            -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
+            -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
+            -O compression=zstd \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/ -R /mnt \
+            rpool-tubman /dev/sdb4
+
+## first sync
+
+Sanoid... syncoid... It would probably be better to do this by hand,
+but this is easier. It is slow, though: everything feels like it has
+an extra ~30-50ms of latency:
+
+    anarcat@curie:sanoid$ LANG=C top -b  -n 1 | head -20
+    top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
+    Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
+    %Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
+    MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
+    MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
+
+        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
+         70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
+    4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
+    3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
+    3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
+    3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
+        350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
+        351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
+    3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
+    4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
+    4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
+        352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
+        354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
+
+The full first sync was:
+
+    root@curie:/home/anarcat# ./bin/syncoid -r  bpool bpool-tubman
+
+    CRITICAL ERROR: Target bpool-tubman exists but has no snapshots matching with bpool!
+                    Replication to target would require destroying existing
+                    target. Cowardly refusing to destroy your existing target.
+
+              NOTE: Target bpool-tubman dataset is < 64MB used - did you mistakenly run
+                    `zfs create bpool-tubman` on the target? ZFS initial
+                    replication must be to a NON EXISTENT DATASET, which will
+                    then be CREATED BY the initial replication process.
+
+    INFO: Sending oldest full snapshot bpool/BOOT@test (~ 42 KB) to new target filesystem:
+    44.2KiB 0:00:00 [4.19MiB/s] [========================================================================================================================] 103%            
+    INFO: Updating new target filesystem with incremental bpool/BOOT@test ... syncoid_curie_2022-05-30:12:50:39 (~ 4 KB):
+    2.13KiB 0:00:00 [ 114KiB/s] [===============================================================>                                                         ] 53%            
+    INFO: Sending oldest full snapshot bpool/BOOT/debian@install (~ 126.0 MB) to new target filesystem:
+     126MiB 0:00:00 [ 308MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Updating new target filesystem with incremental bpool/BOOT/debian@install ... syncoid_curie_2022-05-30:12:50:39 (~ 113.4 MB):
+     113MiB 0:00:00 [ 315MiB/s] [=======================================================================================================================>] 100%
+
+    root@curie:/home/anarcat# ./bin/syncoid -r  rpool rpool-tubman
+
+    CRITICAL ERROR: Target rpool-tubman exists but has no snapshots matching with rpool!
+                    Replication to target would require destroying existing
+                    target. Cowardly refusing to destroy your existing target.
+
+              NOTE: Target rpool-tubman dataset is < 64MB used - did you mistakenly run
+                    `zfs create rpool-tubman` on the target? ZFS initial
+                    replication must be to a NON EXISTENT DATASET, which will
+                    then be CREATED BY the initial replication process.
+
+    INFO: Sending oldest full snapshot rpool/ROOT@syncoid_curie_2022-05-30:12:50:51 (~ 69 KB) to new target filesystem:
+    44.2KiB 0:00:00 [2.44MiB/s] [===========================================================================>                                             ] 63%            
+    INFO: Sending oldest full snapshot rpool/ROOT/debian@install (~ 25.9 GB) to new target filesystem:
+    25.9GiB 0:03:33 [ 124MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Updating new target filesystem with incremental rpool/ROOT/debian@install ... syncoid_curie_2022-05-30:12:50:52 (~ 3.9 GB):
+    3.92GiB 0:00:33 [ 119MiB/s] [======================================================================================================================>  ] 99%            
+    INFO: Sending oldest full snapshot rpool/home@syncoid_curie_2022-05-30:12:55:04 (~ 276.8 GB) to new target filesystem:
+     277GiB 0:27:13 [ 174MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/home/root@syncoid_curie_2022-05-30:13:22:19 (~ 2.2 GB) to new target filesystem:
+    2.22GiB 0:00:25 [90.2MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var@syncoid_curie_2022-05-30:13:22:47 (~ 5.6 GB) to new target filesystem:
+    5.56GiB 0:00:32 [ 176MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/cache@syncoid_curie_2022-05-30:13:23:22 (~ 627.3 MB) to new target filesystem:
+     627MiB 0:00:03 [ 169MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/lib@syncoid_curie_2022-05-30:13:23:28 (~ 69 KB) to new target filesystem:
+    44.2KiB 0:00:00 [1.40MiB/s] [===========================================================================>                                             ] 63%            
+    INFO: Sending oldest full snapshot rpool/var/lib/docker@syncoid_curie_2022-05-30:13:23:28 (~ 442.6 MB) to new target filesystem:
+     443MiB 0:00:04 [ 103MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254 (~ 6.3 MB) to new target filesystem:
+    6.49MiB 0:00:00 [12.9MiB/s] [========================================================================================================================] 102%            
+    INFO: Updating new target filesystem with incremental rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254 ... syncoid_curie_2022-05-30:13:23:34 (~ 4 KB):
+    1.52KiB 0:00:00 [27.6KiB/s] [============================================>                                                                            ] 38%            
+    INFO: Sending oldest full snapshot rpool/var/lib/flatpak@syncoid_curie_2022-05-30:13:23:36 (~ 2.0 GB) to new target filesystem:
+    2.00GiB 0:00:17 [ 115MiB/s] [=======================================================================================================================>] 100%            
+    INFO: Sending oldest full snapshot rpool/var/tmp@syncoid_curie_2022-05-30:13:23:55 (~ 57.0 MB) to new target filesystem:
+    61.8MiB 0:00:01 [45.0MiB/s] [========================================================================================================================] 108%            
+    INFO: Clone is recreated on target rpool-tubman/var/lib/docker/ed71ddd563a779ba6fb37b3b1d0cc2c11eca9b594e77b4b234867ebcb162b205 based on rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254
+    INFO: Sending oldest full snapshot rpool/var/lib/docker/ed71ddd563a779ba6fb37b3b1d0cc2c11eca9b594e77b4b234867ebcb162b205@syncoid_curie_2022-05-30:13:23:58 (~ 218.6 MB) to new target filesystem:
+     219MiB 0:00:01 [ 151MiB/s] [=======================================================================================================================>] 100%
+
+Funny how the `CRITICAL ERROR` doesn't actually stop `syncoid`: it
+just merrily carries on even while telling you it's "cowardly
+refusing to destroy your existing target"... Maybe that's [my pull
+request that broke something though](https://github.com/jimsalterjrs/sanoid/pull/748).
+
+TODO: move to offsite host, setup cron job / timer?
+
+TODO: consider alternatives to syncoid, considering the code issues
+(large functions, lots of `system` calls without arrays...)
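For the record, the manual equivalent of what syncoid automates here is roughly the following (a sketch with made-up snapshot names; flags may need adjusting, and `-F` in particular will roll back the target):

```shell
# Initial full replication into the (empty) target pool; -F forces
# overwriting the target's existing root dataset.
zfs snapshot -r rpool@offsite-1
zfs send -R rpool@offsite-1 | zfs receive -F rpool-tubman

# Later, send only the increments between two snapshots
# (-I includes all intermediate snapshots in the range).
zfs snapshot -r rpool@offsite-2
zfs send -R -I rpool@offsite-1 rpool@offsite-2 | zfs receive rpool-tubman
```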
+
 # Remaining issues
 
 TODO: swap. how do we do it?

clarify how to change dscverify behavior, fix typo
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 1d7fe18e..96ffe4a2 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -212,10 +212,15 @@ ran by hand.
     need to find the key yourself, add it to your keyring, and then
     adding the following to `~/.devscripts` will leverage your
     personal keys into the web of trust:
-    
+
         DSCVERIFY_KEYRINGS=~/.gnupg/pubring.gpg
 
-    You can also use `dscvrify --keyring key.gpg *.dsc` to check the
+    Note that this is *NOT* an environment variable, it needs to be
+    put in the file... An alternative is to inject keys into the
+    `~/.gnupg/trustedkeys.gpg` files which is checked by `dscverify`
+    by default.
+
+    You can also use `dscverify --keyring key.gpg *.dsc` to check the
     signature by hand against a given key file.
 
 [debian-keyring package]: https://packages.debian.org/debian-keyring

more TODOs
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index ddf1cf3b..32d555fd 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -446,6 +446,10 @@ on the other hand, use the [client-server discovery API](https://spec.matrix.org
 what allows a given client to find your home server when you type your
 Matrix ID on login.
 
+TODO: https://matrix.org/faq/#why-can't-i-rename-my-homeserver%3F
+
+TODO: review FAQ
+
 # Performance
 
 This brings us to the performance of Matrix itself. Many people feel

that todo was done
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index e66882a0..ddf1cf3b 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -720,8 +720,4 @@ be dead.
 I wonder which path Matrix will take. Could it liberate us from those
 vicious cycles?
 
-TODO: irc vs email vs mastodon federation / forks
-
-
-
 [[!tag draft]]

try to finish this, still have to do a final edit
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f24c2f35..e66882a0 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -149,10 +149,12 @@ Overall, privacy protections in Matrix mostly concern message
 contents, not metadata. In other words, who's talking with who, when
 and from where is not well protected. Compared to a tool like Signal,
 which goes through great lengths to anonymise that data with features
-like [private contact discovery](https://signal.org/blog/private-contact-discovery/), [disappearing messages](https://signal.org/blog/disappearing-messages/),
+like [private contact discovery][], [disappearing messages](https://signal.org/blog/disappearing-messages/),
 [sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
 behind.
 
+[private contact discovery]: https://signal.org/blog/private-contact-discovery/
+
 This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (open in 2019) in Synapse, but this is not
 just an implementation issue, it's a flaw in the protocol itself. Home
 servers keep join/leave of all rooms, which gives clear information
@@ -262,19 +264,47 @@ matrix.org to block known abusers (users or servers). It's a good idea
 to make the bot admin of your channels, because you can't take back
 admin from a user once given.
 
-Matrix doesn't have tor/vpn-specific moderation mechanisms. It has
-the concept of guest accounts, not very used, and virtually no client
-support it. matrix is like +R by default. 
-
-TODO: rate limiting https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L833
-
-TODO: irc vs email vs mastodon federation / forks
+This is basically based on an [admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html) built into
+Synapse. There's also a [new commandline tool](https://git.fout.re/pi/matrixadminhelpers) designed to do
+things like:
+
+> * System notify users (all users/users from a list, specific user)
+> * delete sessions/devices not seen for X days
+> * purge the remote media cache
+> * select rooms with various criteria (external/local/empty/created by/encrypted/cleartext)
+> * purge history of theses rooms
+> * shutdown rooms
+
+Matrix doesn't have IP-specific moderation mechanisms which would
+allow one to block users from Tor or known VPNs to limit abuse, for
+example. Furthermore, because users joining a room may come from
+another server, room moderators are at the mercy of the registration
+policies of those servers. Matrix is like IRC's `+R` mode ("only
+registered users can join") by default, except that anyone can
+register their own homeserver, which makes this very limited. There's
+no API to block a specific homeserver, so this must be done at the
+system (e.g. netfilter / firewall) level.
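For instance, blocking a specific homeserver with nftables might look something like this (a sketch: the chain layout and the address are made up, and you would have to track the server's actual IPs yourself):

```shell
# Drop inbound federation traffic (default Matrix federation port is
# 8448) from one hypothetical misbehaving homeserver's address.
nft add rule inet filter input ip saddr 192.0.2.42 tcp dport 8448 drop
```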
+
+Matrix has the concept of guest accounts, but it is not very used, and
+virtually no client supports it. This contrasts with the way IRC
+works: by default, anyone can join an IRC network even without
+authentication. Some channels require registration, but in general you
+are free to join and look around (until you get blocked, of course).
+
+I have heard anecdotal evidence that "moderating bridges is hell", and
+I can somewhat imagine why. Moderation is already hard enough within
+one federation; when you bridge a room with another network, you
+inherit all the problems from *that* network, and the bridge is
+unlikely to have as many tools as the original network's API to
+control abuse...
+
+Synapse has pretty good [built-in rate-limiting](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L879-L996) which blocks
+repeated login, registration, joining, or messaging attempts. It may
+also end up throttling servers on the federation based on those
+settings.
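As an illustration, message rate limiting is controlled by a fragment of `homeserver.yaml` along these lines (the values here are examples, not recommendations; check the sample config for the actual defaults):

```yaml
# Throttle message sending per user: a sustained rate (messages per
# second) plus an allowed burst before throttling kicks in.
rc_message:
  per_second: 0.2
  burst_count: 10
```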
 
 TODO: you can use the admin API to impersonate a room admin? see also
 the other TODO below
 
-TODO: bridge moderation is hell
-
 # Availability
 
 While Matrix has a strong advantage over Signal in that it's
@@ -333,9 +363,6 @@ such an alias (in Element), you need to go in the room settings'
 name (e.g. `foo`), and then that room will be available on your
 `example.com` homeserver as `#foo:example.com`.)
 
-TODO: by default the room belongs to the public federation,. spaces as
-directory. the 
-
 A room doesn't belong to a server, it belongs to the federation.
 Anyone invited to a room (if private) can join the room from any
 server. You can create a room on server A and when a user from server
@@ -353,6 +380,16 @@ room is an alias on the `gnome.org` server, but the room ID is
 `HASH:matrix.org`. That's because the room was created on matrix.org,
 but admins are on `gnome.org` now.
 
+Discovering rooms can therefore be tricky: there *is* a room
+directory, but Matrix.org people are trying to deprecate it in favor
+of "Spaces". Room directories were ripe for abuse: anyone can create a
+room, so anyone can show up in there. In contrast, a "Space" is
+basically a room that's an index of other rooms (including other
+spaces), so existing moderation and administration mechanism that work
+in rooms can (somewhat) work in spaces as well. This also allows rooms
+to work across the federation, regardless of which server they were
+originally created on.
+
 New users can be added to a space or room [automatically](https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1378-L1388) in
 Synapse. (Existing users can be told about the space with a server
 notice.) The point here is to have a way to pre-populate a list of
@@ -446,13 +483,26 @@ person they are talking to is on IRC or Matrix in the backend. I bet
 people could notice that Matrix users are slower, if only because of
 the TCP round-trip time each message has to take.
 
-TODO: https://blog.lewman.com/internet-messaging-versus-congested-network.html
-
 (I assume, here, that each Matrix message is delivered through at
 least two new HTTP sessions, which therefore require up to 8 packet
 roundtrips whereas in IRC, the existing socket is reused so it's
 basically 2 round-trips.)
 
+Some [courageous person](https://blog.lewman.com/) actually made some [tests of various
+messaging platforms on a congested network](https://blog.lewman.com/internet-messaging-versus-congested-network.html). His evaluation was
+basically:
+
+ * [Briar](https://briarproject.org/): uses Tor, so unusable except locally
+ * Matrix: "struggled to send and receive messages", joining a room
+   takes forever as it has to sync all history, "took 20-30 seconds
+   for my messages to be sent and another 20 seconds for further
+   responses"
+ * [XMPP](https://xmpp.org/): "worked in real-time, full encryption, with nearly zero
+   lag"
+
+So that was interesting. I suspect IRC would have also fared better,
+but that's just a feeling.
+
 Possible improvements to this include [support for websocket](https://github.com/matrix-org/matrix-doc/issues/1148) (to
 reduce latency and overhead) and the [CoAP proxy](https://matrix.org/docs/projects/iot/coap-proxy) work [from
 2019](https://matrix.org/blog/2019/03/12/breaking-the-100-bps-barrier-with-matrix-meshsim-coap-proxy) (which allows communication over 100bps links), both of which
@@ -463,6 +513,8 @@ to. See also [this talk at FOSDEM 2022](https://www.youtube.com/watch?v=diwzQtGg
 
 # Usability
 
+## Onboarding and workflow
+
 The workflow for joining a room, when you use Element web, is not
 great:
 
@@ -477,10 +529,199 @@ As you might have guessed by now, there is a [proposed
 specification](https://github.com/matrix-org/matrix-spec-proposals/blob/f295e828dc3107260a7979c40175442bf3a7fdc4/proposals/2312-matrix-uri.md) to solve this, but web browsers need to adopt it
 as well, so that's far from actually being solved.
 
-TODO: registration and discovery workflow compared with signal
+In general, when compared with tools like Signal or Whatsapp, Matrix
+doesn't fare as well in terms of user discovery. I probably have a lot
+of my contacts on Matrix, but I wouldn't know because there's *no way*
+to know. It's kind of creepy when Signal tells you "hey, this person
+is on Signal!" but it's also pretty cool that it works, and they
+actually did it [pretty well][private contact discovery].
+
+Registration is also less obvious: in Signal, the app just needs to
+confirm your phone number and it's generally automated. It's
+friction-less and quick. In Matrix, you need to learn about home
+servers, pick one, register (with a password! aargh!), and then set up
+encryption keys (not done by default), etc. It's really a lot more friction.
+
+And look, I understand: giving away your phone number is a *huge*
+tradeoff. I don't like it either. But it solves a real problem and
+makes encryption accessible to a *ton* more people. Matrix *does* have
+"identity servers" that can serve that purpose, but somehow I don't
+feel confident giving away my phone number there. There's a catch-22
+here too: because no one feels like giving away their phone numbers,
+no one does, and everyone assumes that stuff doesn't work
+anyways. Like it or not, Signal *forcing* people to divulge their
+phone number actually gave them the critical mass that means a lot of
+my relatives *are* on Signal, and I don't have to install crap like
+Whatsapp to talk with them.
+
+## 5 minute clients evaluation
+
+Throughout all my tests I evaluated a handful of Matrix clients,
+mostly from Flatpak because basically none of them are packaged in
+Debian. I cannot even begin to pretend to have done a proper review,
+but here's my main takeaway: I'm using none of them. I'm still using
+Element, the flagship client from Matrix.org, in a web browser window,
+with the [PopUp Window extension](https://github.com/ettoolong/PopupWindow). This makes it look almost like a
+native app, and opens links in my main browser window, which is
+nice. But yeah, it's a web app, which is kind of meh.
+
+Coming from Irssi, Element is really "GUI-y" (pronounced
+"goo-wee"). Lots of clickety happening. To mark conversations as read,
+in particular, I need to click-click-click on *all* the tabs that have
+some activity. There's no "jump to latest message" or "mark all as
+read" functionality as far as I could tell. In Irssi the former is
+built-in (<kbd>alt-a</kbd>) and I made a custom `/READ` command for
+the latter:
+
+    /ALIAS READ script exec \$_->activity(0) for Irssi::windows
+
+And yes, that's a Perl script in my IRC client. I am not aware of any
+Matrix client that does stuff like that.
+
+As for other clients, I have looked through the [Client Matrix](https://matrix.org/clients-matrix/)
+(confusing, right?) to try to figure out which one to use, and, even
+after selecting `Linux` as a filter, the chart is just too wide to
+figure out anything. So I tried those, kind of randomly:
+
+ * Fractal

(Diff truncated)
try to move forward a little more, thanks jvoisin for edits
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index a26450b2..f24c2f35 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -16,13 +16,20 @@ documents why neither of those have happened yet.
 
 ## Data retention defaults
 
-One of my main concerns with Matrix at this point is data
-retention. In IRC, servers don't actually keep messages all that long:
-they pass them along to other servers and clients basically as fast as
-they can, only keep them from memory, and move on to the next
-message. There are no concerns about data retention on messages (and
-their metadata) other than the network layer. (I'm ignoring the issues
-with user registration here, which is a separate, if valid, concern.)
+One of my main concerns with Matrix is data retention.
+
+In IRC, servers don't actually keep messages all that long: they pass
+them along to other servers and clients basically as fast as they can,
+only keep them from memory, and move on to the next message. There are
+no concerns about data retention on messages (and their metadata)
+other than the network layer. (I'm ignoring the issues with user
+registration here, which is a separate, if valid, concern.)
+Obviously, a hostile server *could* store everything it receives, but
+typically IRC federations are tightly controlled and, if you trust
+your IRC server, you should be fairly safe. Similarly, clients *can*
+(and often do, even if [OTR](https://en.wikipedia.org/wiki/Off-the-Record_Messaging) is configured!) log all
+messages, but this is typically not the default. [Irssi](https://irssi.org/), for
+example, does [not log by default](https://github.com/irssi/irssi/blob/7d673653a13ed1123665e36270d1a578baefd9e5/docs/startup-HOWTO.txt#L399-L412).
 
 Compare this to Matrix: when you send a message to a Matrix
 homeserver, that server first stores it in its internal SQL
 turn, meant that Matrix would try to connect to that URL to generate a
 link preview.
 
 I felt this was a security issue, especially because they would
-basically keep the socket open *forever*. I tried to warn the Matrix
-security team but somehow, I don't think it was taken very
-seriously. Here's the disclosure timeline:
+basically keep the server socket open seemingly *forever*. I tried to
+warn the Matrix security team but somehow, I don't think it was taken
+very seriously. Here's the disclosure timeline:
 
  * January 18: contacted Matrix security
  * January 19: response: already [reported as a bug](https://github.com/matrix-org/synapse/issues/8302)
@@ -263,7 +270,10 @@ TODO: rate limiting https://github.com/matrix-org/synapse/blob/12d1f82db21360397
 
 TODO: irc vs email vs mastodon federation / forks
 
-TODO: you can use the admin API to impersonate a room admin?
+TODO: you can use the admin API to impersonate a room admin? see also
+the other TODO below
+
+TODO: bridge moderation is hell
 
 # Availability
 
@@ -323,27 +333,36 @@ such an alias (in Element), you need to go in the room settings'
 name (e.g. `foo`), and then that room will be available on your
 `example.com` homeserver as `#foo:example.com`.)
 
-TODO: new users on certain can be added to a space automatically in
-synapse. existing users can be told about the space with a server
-notice. https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1387
-
-TODO: by default the room belongs to the public federation, anyone
-invited (or joins if public) can join from any server. spaces as
-directory. the room doesn't belong to a server. you can create a room
-on serverA, admin from serverB join and belong as admin. the room will
-be replicated on the two server. if serverA falls, the serverB will be
-picked up. a room doesn't have an FQDN, it has a Matrix ID (basically
-a random number). it has a server name, but that's just to avoid
-collision. you can have server-specific aliases. each server needs to
-have admin.
-
-TODO: room namespaces eg #fractal:gnome.org (alias) room id is
-HASH:matrix.org room was created on matrix.org, but admins are on
-gnome.org... room is primarily a gnome room.
-
-TODO: [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17) (no GUI for it), fait partie de
-[MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") allows a room admin to close a
-room, with a message and a pointer to another room.
+TODO: by default the room belongs to the public federation,. spaces as
+directory. the 
+
+A room doesn't belong to a server, it belongs to the federation.
+Anyone invited to a room (if private) can join the room from any
+server. You can create a room on server A and when a user from server
+B joins, the room will be replicated on the two servers. If server A
+fails, server B will keep the room alive. A room doesn't have an FQDN,
+it has a Matrix ID (which is basically a random number). It has a server
+name attached to it, but that was made just to avoid collisions.
+
+TODO: how does admin work again? can server B hijack a room on server
+A? I had noted "admin from serverB join and belong as admin. each
+server needs to have admin."
+
+This can get a little confusing. For example, the `#fractal:gnome.org`
+room is an alias on the `gnome.org` server, but the room ID is
+`HASH:matrix.org`. That's because the room was created on matrix.org,
+but admins are on `gnome.org` now.
+
+New users can be added to a space or room [automatically](https://github.com/matrix-org/synapse/blob/12d1f82db213603972d60be3f46f6a36c3c2330f/docs/sample_config.yaml#L1378-L1388) in
+Synapse. (Existing users can be told about the space with a server
+notice.) The point here is to have a way to pre-populate a list of
+rooms on the server, even if they are not necessarily hosted on that
+server directly, in case they live on another server that has
+connected users.
+
+Rooms, by default, live forever, even after the last user quits. There
+is a [tombstone event](https://spec.matrix.org/v1.2/client-server-api/#events-17), but there is no GUI for it yet. That is
+part of [MSC1501](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/1501-room-version-upgrades.md) ("Room version upgrades") which allows a room
+admin to close a room, with a message and a pointer to another room.
 
 ## Home server
 
@@ -385,8 +404,10 @@ explicitly configured for your domain. You can't just put:
 support "virtual hosting" and you'd still be connecting to rooms and
 people with your `matrix.org` identity.
 
-TODO: what's the different between server-server and client-server API
-specs? e.g. why is there also <https://spec.matrix.org/v1.2/client-server-api/#server-discovery>?
+That specification is what allows servers to find each other. Clients,
+on the other hand, use the [client-server discovery API](https://spec.matrix.org/v1.2/client-server-api/#server-discovery): this is
+what allows a given client to find your home server when you type your
+Matrix ID on login.
 
 # Performance
 
@@ -399,7 +420,9 @@ now scale horizontally to multiple workers (see [this blog post for
 details](https://matrix.org/blog/2020/11/03/how-we-fixed-synapses-scalability)). And there are other home servers implementations
 ([dendrite](https://github.com/matrix-org/dendrite), golang, [entered beta in late 2020](https://matrix.org/blog/2020/10/08/dendrite-is-entering-beta); [conduit](https://gitlab.com/famedly/conduit),
 Rust, beta), but none of those are feature-complete, so they are not a
-solution for any performance issues that might be left with Synapse.
+solution for any performance issues that might be left with
+Synapse. And besides, Synapse is adding features fast, so it's
+unlikely those other servers will ever catch up.
 
 And Matrix can feel slow sometimes. For example, joining the "Matrix
 HQ" room in Element (from matrix.debian.social) takes a few *minutes*
@@ -458,4 +481,6 @@ TODO: registration and discovery workflow compared with signal
 
 TODO: admin API https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html
 
+TODO: mark all as read.
+
 [[!tag draft]]

fix broken link
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f6bc545f..a26450b2 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -189,7 +189,7 @@ seriously. Here's the disclosure timeline:
  * January 31: I respond that I believe the security issue is
    underestimated, ask for clearance to disclose
  * February 1: response: asking for two weeks delay after the next
-   release (1.53.0) including [another patch][], presumably in two
+   release (1.53.0) including [another patch][PR 11936], presumably in two
    weeks' time
  * February 22: Matrix 1.53.0 released
  * April 14: I notice the release, ask for clearance again

add toc
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index f4a37ee9..f6bc545f 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -10,6 +10,8 @@ asking me to bridge it with Matrix. I have myself considered a few
 times the idea of just giving up and converting it. This space
 documents why neither of those have happened yet.
 
+[[!toc levels=2]]
+
 # Security and privacy
 
 ## Data retention defaults

expand on security / privacy issues now that the flaw is public
diff --git a/blog/matrix-notes.md b/blog/matrix-notes.md
index df9c58c3..f4a37ee9 100644
--- a/blog/matrix-notes.md
+++ b/blog/matrix-notes.md
@@ -10,14 +10,230 @@ asking me to bridge it with Matrix. I have myself considered a few
 times the idea of just giving up and converting it. This space
 documents why neither of those have happened yet.
 
-# Security
-
-TODO
-
-# Privacy
-
-TODO: vector.im privacy policy, GDPR compliance, metadata leaks,
-message expiry, discoverability of leakage.
+# Security and privacy
+
+## Data retention defaults
+
+One of my main concerns with Matrix at this point is data
+retention. In IRC, servers don't actually keep messages all that long:
+they pass them along to other servers and clients basically as fast as
+they can, only keep them in memory, and move on to the next
+message. There are no concerns about data retention on messages (and
+their metadata) other than the network layer. (I'm ignoring the issues
+with user registration here, which is a separate, if valid, concern.)
+
+Compare this to Matrix: when you send a message to a Matrix
+homeserver, that server first stores it in its internal SQL
+database. Then it will transmit that message to all clients connected
+to that server and room, and to all other servers that have clients
+connected to that room. Those remote servers, in turn, will keep a
+copy of that message and all its metadata in their own database, by
+default basically forever.
+
+Indeed, there is a mechanism to expire entries in Synapse, but it is
+[not enabled by default](https://github.com/matrix-org/synapse/blob/28989cb301fecf5a669a634c09bc2b73f97fec5d/docs/sample_config.yaml#L559). So one can safely assume that a message
+sent on Matrix is never expired.
+
+## GDPR in the federation
+
+But even if that setting was enabled by default, how do you control
+it? This is a fundamental problem of the federation: if anyone is
+allowed to join a given room (which is basically the default
+configuration of any room), anyone will log (deliberately or
+inadvertently) all content and metadata in that room.
+
+In the context of the GDPR, this is really tricky: who's the
+responsible party (known as the "data controller") here? It's basically
+any yahoo who fires up a home server and joins a room. Good luck
+enforcing the GDPR on those folks. In the brave new "peer-to-peer"
+world that Matrix is heading towards, it is also basically any client
+whatsoever, which brings its own set of problems.
+
+In a federated network, one has to wonder whether GDPR enforcement is
+even possible at all. Assuming you want to enforce your right to be
+forgotten in a given room, you would have to:
+
+ 1. enumerate all the users that ever joined the room while you were
+    there
+ 2. discover all their home servers
+ 3. start a GDPR procedure against all those servers
+
+I recognize this is a hard problem to solve while still keeping an
+open ecosystem. But I believe that Matrix should have much stricter
+defaults towards data retention than right now. Message expiry should
+be enforced *by default*.
+
+In fact, maybe Synapse should be designed so that there's no
+configurable flag to turn off data retention. A bit like how most
+system loggers in UNIX (e.g. syslog) come with a log retention system
+that typically rotates logs after a few weeks or months. Historically,
+this was designed to keep hard drives from filling up, but it also has
+the added benefit of limiting the amount of personal information kept
+on disk in this modern day. (Arguably, syslog doesn't rotate logs on
+its own, but, say, Debian GNU/Linux, as an installed system, does have
+log retention policies well defined for installed packages. And "no
+expiry" is basically a bug.)
+
+## Matrix.org privacy policy
+
+When I first looked at Matrix, a long time ago, Element.io was called
+[Vector.im](https://vector.im/) and had a rather dubious privacy policy. I
+unfortunately cannot find a copy of it now on the internet archive,
+but it openly announced it was collecting (Google!) analytics on its
+users. When I asked Matrix people about this, they explained this was
+for development purposes and they were aiming for velocity at this
+point, not privacy. I am paraphrasing: I am sorry I lost track of that
+conversation that happened so long ago, so you will just have to trust
+me on this.
+
+I think that, like the current retention policies, this set a bad
+precedent. Thankfully, since that policy was drafted, the GDPR
+happened and it seems like both the [Element.io privacy policy](https://element.io/privacy) and
+the [Matrix.org privacy policy](https://matrix.org/legal/privacy-notice) have been somewhat improved.
+
+Notable points of the privacy policies:
+
+ * [2.3.1.1](https://matrix.org/legal/privacy-notice#2311-federation): the "federation" section actually outlines that
+   "*Federated homeservers and Matrix clients which respect the Matrix
+   protocol are expected to honour these controls and
+   redaction/erasure requests, but other federated homeservers are
+   outside of the span of control of Element, and we cannot guarantee
+   how this data will be processed*"
+ * [2.6](https://matrix.org/legal/privacy-notice#26-our-commitment-to-childrens-privacy): users under the age of 16 should *not* use the
+   `matrix.org` service
+ * [2.10](https://matrix.org/legal/privacy-notice#210-who-else-has-access-to-my-data): Upcloud, Mythic Beast, Amazon, and CloudFlare possibly
+   have access to your data (it's nice to at least mention this in the
+   privacy policy: many providers don't even bother admitting this)
+ * [Element 2.2.1](https://element.io/privacy): mentions many more third parties (Twilio,
+   Stripe, [Quaderno](https://www.quaderno.io/), LinkedIn, Twitter, Google, [Outplay](https://www.outplayhq.com/),
+   [PipeDrive](https://www.pipedrive.com/), [HubSpot](https://www.hubspot.com/), [Posthog](https://posthog.com/), Sentry, and [Matomo](https://matomo.org/)
+   (phew!)
+
+I'm not super happy with all the trackers they have on the Element
+platform, but then again you don't have to use that client
+whatsoever. Your favorite homeserver (assuming you are not on
+Matrix.org) probably has their own Element deployment, hopefully
+without all that garbage.
+
+Overall, this is all a huge improvement over the previous privacy
+policy, so hats off to the Matrix people for figuring out a reasonable
+policy in such a tricky context. I particularly like this bit:
+
+> We will forget your copy of your data upon your request. We will
+> also forward your request to be forgotten onto federated
+> homeservers. However - these homeservers are outside our span of
+> control, so we cannot guarantee they will forget your data.
+
+It's great they implemented those mechanisms: after all, if there's
+a hostile party in a room, nothing can prevent them from using
+*screenshots* to exfiltrate your data from the client side anyway,
+even with services typically seen as more secure because they are
+centralised, like Signal.
+
+As an aside, I also appreciate that Matrix.org has a fairly decent
+[code of conduct](https://matrix.org/legal/code-of-conduct), based on the [TODO CoC](http://todogroup.org/opencodeofconduct/) which checks all the
+[boxes in the geekfeminism wiki](https://geekfeminism.fandom.com/wiki/Code_of_conduct_evaluations).
+
+## Metadata handling
+
+Overall, privacy protections in Matrix mostly concern message
+contents, not metadata. In other words, who's talking with who, when
+and from where is not well protected. Compared to a tool like Signal,
+which goes to great lengths to anonymise that data with features
+like [private contact discovery](https://signal.org/blog/private-contact-discovery/), [disappearing messages](https://signal.org/blog/disappearing-messages/),
+[sealed senders](https://signal.org/blog/sealed-sender/), and [private groups](https://signal.org/blog/signal-private-group-system/), Matrix is definitely
+behind.
+
+This is a [known issue](https://github.com/matrix-org/synapse/issues/4565) (opened in 2019) in Synapse, but this is not
+just an implementation issue, it's a flaw in the protocol itself. Home
+servers keep join/leave events for all rooms, which gives clear
+information about who is talking to whom. Synapse logs are also quite
+verbose and may contain personally identifiable information that home
+server admins might not be aware of in the first place. Log rotation
+is also separate from the server-level retention policy, which may be
+confusing.
+
+Combine this with the federation: even if you trust your home server
+to do the right thing, the second you join a public room with
+third-party home servers, those guarantees kind of get thrown out because
+those servers can do whatever they want with that information. Again,
+a problem that is hard to solve in a federation.
+
+To be fair, IRC doesn't have a great story here either: any client
+knows not only who's talking to who in a room, but also typically
+their client IP address. Servers *can* (and often do) *obfuscate*
+this, but often that obfuscation is trivial to reverse. Some servers
+do provide "cloaks" (sometimes automatically), but that's kind of a
+"slap-on" solution that actually moves the problem elsewhere: now the
+server knows a little more about the user.
+
+## Amplification attacks on URL previews
+
+I (still!) run an [Icecast](https://en.wikipedia.org/wiki/Icecast) server and sometimes share links to it
+on IRC, which, obviously, also end up on (more than one!) Matrix home
+servers because many people use Matrix as an IRC bouncer. This, in
+turn, meant that Matrix would try to connect to that URL to generate a
+link preview.
+
+I felt this was a security issue, especially because they would
+basically keep the socket open *forever*. I tried to warn the Matrix
+security team but somehow, I don't think it was taken very
+seriously. Here's the disclosure timeline:
+
+ * January 18: contacted Matrix security
+ * January 19: response: already [reported as a bug](https://github.com/matrix-org/synapse/issues/8302)
+ * January 20: response: can't reproduce
+ * January 31: [timeout added][PR 11784], considered solved
+ * January 31: I respond that I believe the security issue is
+   underestimated, ask for clearance to disclose
+ * February 1: response: asking for two weeks delay after the next
+   release (1.53.0) including [another patch][], presumably in two
+   weeks' time
+ * February 22: Matrix 1.53.0 released
+ * April 14: I notice the release, ask for clearance again
+ * April 14: response: referred to the [public disclosure](https://github.com/matrix-org/synapse/security/advisories/GHSA-4822-jvwx-w47h)
+
+I think there are a couple of problems here:

(Diff truncated)
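
The retention mechanism mentioned in the diff above is not enabled by
default in Synapse, but can be switched on in `homeserver.yaml`. A
minimal sketch, written to a scratch file rather than the real config;
the key names follow the upstream sample config, but the lifetimes (90
days) are an example, not a recommendation, and should be verified
against your Synapse version:

```shell
# Sketch only: enable Synapse's off-by-default message retention.
# Written to a temporary file; a real deployment would merge this
# into /etc/matrix-synapse/homeserver.yaml and restart Synapse.
conf=$(mktemp)
cat > "$conf" <<'EOF'
retention:
  enabled: true
  default_policy:
    min_lifetime: 1d
    max_lifetime: 90d
  purge_jobs:
    - interval: 12h
EOF
grep -q 'max_lifetime: 90d' "$conf" && echo "retention policy staged"
```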
cleanup réseau pages a bit
diff --git "a/services/r\303\251seau.mdwn" "b/services/r\303\251seau.mdwn"
index dc5f0728..b20f899b 100644
--- "a/services/r\303\251seau.mdwn"
+++ "b/services/r\303\251seau.mdwn"
@@ -1,8 +1,6 @@
 Le réseau est constitué d'une ensemble d'interconnexions [[!wikipedia
 gigabit]] et d'un réseau [[wifi]], avec un uplink DSL.
 
-Update: ipv6 and dns work better with new router. see good test page: <http://en.conn.internet.nl/connection/>.
-
 Problèmes connus
 ================
 
@@ -11,10 +9,15 @@ Problèmes connus
    vérifier), cliquer sur "reconnect" de l'interface `WAN6` dans le
    GUI règle le problème. selon [ce bout de code](https://github.com/openwrt/luci/blob/e712a8a4ac896189c333400134e00977912a918a/modules/luci-mod-network/luasrc/controller/admin/network.lua#L169-L176), il suffirait de
    faire un `env -i /sbin/ifup %s >/dev/null 2>/dev/null` où `%s` est
-   l'interface réseau. à tester.
- * <del>la qualité de la bande passante varie avec les conditions météo</del> semble être résolu avec le VDSL
+   l'interface réseau. à tester. Update: ipv6 and dns work better with
+   new router. see good test page:
+   <http://en.conn.internet.nl/connection/>.
+ * <del>la qualité de la bande passante varie avec les conditions
+   météo</del> semble être résolu avec le VDSL
  * certaines requêtes DNS échouent, voir [[DNS]] pour les détails
  * le reverse DNS IPv6 ne fonctionne pas, voir mon [review the TSI](https://www.dslreports.com/forum/r30473265-Review)
+ * c'est cher, et pas vite, voir [[blog/2020-05-28-isp-upgrade]] pour
+   les options.
 
 Voir aussi
 ==========
@@ -104,3 +107,5 @@ Vitesse
     Total UAS:	2842	2811
 
 Donc 27 mbps down, 6 mbps up ou encore 3.4 MB/s down, 800 KB/s up.
+
+Voir [[services/réseau/crapn]] pour les détails de cette transition.
diff --git "a/services/r\303\251seau/crapn.mdwn" "b/services/r\303\251seau/crapn.mdwn"
index fd98120d..b3b8991b 100644
--- "a/services/r\303\251seau/crapn.mdwn"
+++ "b/services/r\303\251seau/crapn.mdwn"
@@ -1,6 +1,8 @@
 [[!meta title="Remplacement des services de communication au Crap'N"]]
 
-TODO: merge with above? wtf *is* this.
+Note: ceci est un document historique destiné à mes colocs lors du
+remplacement du service à la maison. Pour une version à jour du
+réseau, voir [[réseau]].
 
 J'aimerais remplacer le service internet au CrapN par un autre service internet qui me permetterait de déménager mon serveur (nommé "marcos") ainsi que les différents [[services]] que je gère présentement à la maison. En résumé, les services sont:
 

cross-ref with wifi tuning article
diff --git a/hardware/rosa.mdwn b/hardware/rosa.mdwn
index 5fc02f37..fb0276f0 100644
--- a/hardware/rosa.mdwn
+++ b/hardware/rosa.mdwn
@@ -34,6 +34,10 @@ I unfortunately forgot to run the same benchmarks with the stock
 firmware, but that could have been difficult unless it ships with
 `iperf3`...
 
+Note that some optimisations and changes have been performed to the
+wireless network since then, see this [[Wi-Fi tuning blog
+post|blog/2022-04-13-wifi-tuning]] for details.
+
 Wired network
 -------------
 

stopped using Smart HTTPS, switched to https-only mode
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index aeed33e3..0c4cbf65 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -71,13 +71,6 @@ I am testing those and they might make it to the top list once I'm happy:
  * [Popup window](https://addons.mozilla.org/en-US/firefox/addon/popup-window/) (no deb, [source](https://github.com/ettoolong/PopupWindow)) - open the link in a
    pop-up, useful to have an "app-like" window for a website (I use
    this for videoconferencing in a second tab)
- * [Smart HTTPS](https://addons.mozilla.org/en-US/firefox/addon/smart-https-revived/) (no deb, [source](https://github.com/ilGur1132/Smart-HTTPS)) - some use [HTTPS
-   everywhere](https://www.eff.org/https-everywhere) but i find that one works too and doesn't require
-   sites to be added to a list. nowadays, https URLs match http URLs
-   quite well: long gone are the days where wikipedia had a special
-   "secure" URL...  HE does have a "Block all unencrypted requests"
-   setting, but it does exactly that: it breaks plaintext sites
-   completely. See [issue #7936](https://github.com/EFForg/https-everywhere/issues/7936) and [issue #16488](https://github.com/EFForg/https-everywhere/issues/16488) for details.
  * [View Page Archive & Cache](https://addons.mozilla.org/en-US/firefox/addon/view-page-archive/) (no deb, [source](https://github.com/dessant/view-page-archive/)) - load page in
    one or many page archives. No "save" button unfortunately, but is
    good enough for my purposes. [The Archiver](https://addons.mozilla.org/en-US/firefox/addon/the-archiver/) (no deb,
@@ -191,6 +184,16 @@ hard to use or simply irrelevant.
    ([#871502](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=871502)). Right now I'm using the <del>standalone binary from
    upstream</del> flatpak but I'm looking at alternatives, see
    [[services/bookmarks]] for that more general problem.
+ * [Smart HTTPS](https://addons.mozilla.org/en-US/firefox/addon/smart-https-revived/) (no deb, [source](https://github.com/ilGur1132/Smart-HTTPS)) - some use [HTTPS
+   everywhere](https://www.eff.org/https-everywhere) but I found that one works too and doesn't require
+   sites to be added to a list. Nowadays, https URLs match http URLs
+   quite well: long gone are the days where wikipedia had a special
+   "secure" URL... HE does have a "Block all unencrypted requests"
+   setting, but it does exactly that: it breaks plaintext sites
+   completely. See [issue #7936](https://github.com/EFForg/https-everywhere/issues/7936) and [issue #16488](https://github.com/EFForg/https-everywhere/issues/16488) for
+   details. Nowadays, I just don't need any extension: I enable
+   [HTTPS-only mode](https://blog.mozilla.org/security/2020/11/17/firefox-83-introduces-https-only-mode/) (AKA `dom.security.https_only_mode`). The EFF
+   even [deprecated HTTPS everywhere](https://www.eff.org/https-everywhere/set-https-default-your-browser) because of this.
 
 [it's all text!]: https://addons.mozilla.org/en-US/firefox/addon/its-all-text/
 
@@ -317,6 +320,8 @@ that I version-control into git:
    * `security.webauth.webauthn` - enable [WebAuthN](https://www.w3.org/TR/webauthn/) support, not
      sure what that's for but it sounds promising
  * `browser.urlbar.trimURLs`: false. show protocol regardless of URL
+ * `dom.security.https_only_mode`: `true` - only access HTTPS
+   websites, click-through for bypass.
 
 I also set privacy parameters following this [user.js](https://gitlab.com/anarcat/scripts/blob/master/firefox-tmp#L7) config
 which, incidentally, is injected in temporary profiles started with

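The `dom.security.https_only_mode` and `browser.urlbar.trimURLs`
settings from the diff above can also be pinned in a `user.js`
file. A sketch using a scratch file; a real setup would write to the
Firefox profile directory instead (the profile path shown in the
comment is a placeholder):

```shell
# Sketch: persist the two about:config tweaks as user.js preferences.
# The real file lives in ~/.mozilla/firefox/<profile>/user.js.
userjs=$(mktemp)
cat > "$userjs" <<'EOF'
// enforce HTTPS-only mode, with click-through bypass
user_pref("dom.security.https_only_mode", true);
// always show the protocol in the URL bar
user_pref("browser.urlbar.trimURLs", false);
EOF
grep -c '^user_pref' "$userjs"
```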
approve comment
diff --git a/blog/2022-05-13-brtfs-notes/comment_1_8b6b318dadc7cd863bb3fc50d323c18b._comment b/blog/2022-05-13-brtfs-notes/comment_1_8b6b318dadc7cd863bb3fc50d323c18b._comment
new file mode 100644
index 00000000..089e7471
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_1_8b6b318dadc7cd863bb3fc50d323c18b._comment
@@ -0,0 +1,20 @@
+[[!comment format=mdwn
+ ip="108.162.145.28"
+ claimedauthor="Matthew"
+ url="https://mjdsystems.ca"
+ subject="Some clarifications"
+ date="2022-05-21T05:10:15Z"
+ content="""
+I have some (hopefully) helpful comments to some of your points above.
+
+# Regarding volumes/subvolumes
+I'm not sure what exactly a BTRFS volume is, but I think it would just be the filesystem.  Subvolumes are what allow you to separate the filesystem into separate pools, similar to an LVM volume.  The big difference is BTRFS lets you easily de-duplicate between these volumes (using reflinks through things like cp --reflink and snapshots).  The general advice I've heard (and follow myself) is not to use the root subvolume, but to instead create a subvolume for each purpose (e.g. one for /, one for /home).  This makes it easy to create snapshots and inspect the system in the future.
+
+BTRFS doesn't give an easy answer to the question \"how much space does this subvolume take\" because there isn't an easy answer.  Imagine you have one subvolume, then you take a snapshot.  How much space should you charge against both subvolumes (since the snapshot is just a regular subvolume that just shares data)?  Depending upon your use case, this answer can differ.  BTRFS can help track this information if you enable quotas, but I have not had enough of a need to enable them myself.
+
+# On btrfs filesystem usage
+This view exposes more details of the underlying filesystem, and is more of a debug aid.  The unallocated space is areas of the filesystem that can be allocated to data or metadata.  Metadata, AFAIK, is the core filesystem data structures, as compared to data being just regular data.  That unallocated space will be used by the filesystem as you store more data.
+
+# Conclusion
+As someone who has used BTRFS in quite some anger for a long time now, I find some of your critiques completely valid.  Some of those issues I find to be features (having subvolumes just take whatever space is useful) but sometimes painful (why can't I know how much space my backups are taking).  And having been down the BTRFS stability rabbit hole, I have had to deal with various issues.  That being said I do enjoy using it where I have it.
+"""]]
diff --git a/blog/zfs-migration/comment_1_6413ee698aca447b5215e8258e630979._comment b/blog/zfs-migration/comment_1_6413ee698aca447b5215e8258e630979._comment
new file mode 100644
index 00000000..f1c24596
--- /dev/null
+++ b/blog/zfs-migration/comment_1_6413ee698aca447b5215e8258e630979._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ ip="50.100.165.103"
+ subject="comment 1"
+ date="2022-05-20T20:58:51Z"
+ content="""
+Cool blog.  It's too bad that the docs you referenced didn't insist on using by-uuid or by-id (/dev/disk/...) rather than /dev/sdx -- this is generally important, and especially important for USB-connected disks.
+"""]]

renew my openpgp key
diff --git a/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe b/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe
index 224f3439..b8451159 100644
Binary files a/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe and b/.well-known/openpgpkey/hu/myctwj4an6ne7htuzyoo8osctuji68xe differ

minor edits
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 244d1993..2201eeb6 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -42,11 +42,13 @@ This is going to partition `/dev/sdc` with:
         sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
         sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
 
+That will look something like this:
+
         root@curie:/home/anarcat# sgdisk -p /dev/sdc
         Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
         Model: ESD-S1C         
         Sector size (logical/physical): 512/512 bytes
-        Disk identifier (GUID): 932ED8E5-8B5C-4183-9967-56D7652C01DA
+        Disk identifier (GUID): [REDACTED]
         Partition table holds up to 128 entries
         Main partition table begins at sector 2 and ends at sector 33
         First usable sector is 34, last usable sector is 1953525134
@@ -98,7 +100,7 @@ workstation, we're betting that we will not suffer from this problem,
 after hearing a report from another Debian developer running this
 setup on their workstation successfully.
 
-# Creating "pools"
+# Creating pools
 
 ZFS pools are somewhat like "volume groups" if you are familiar with
 LVM, except they obviously also do things like RAID-10. (Even though
@@ -212,7 +214,7 @@ Also, the [FreeBSD handbook quick start](https://docs.freebsd.org/en/books/handb
 about their first example, which is with a single disk. So I am
 reassured at least. All 
 
-# Creating filesystems AKA "datasets"
+# Creating mount points
 
 Next we create the actual filesystems, known as "datasets" which are
 the things that get mounted on mountpoint and hold the actual files.
@@ -878,7 +880,7 @@ this.
 
 # References
 
-### ZFS documentation
+## ZFS documentation
 
  * [Debian wiki page](https://wiki.debian.org/ZFS): good introduction, basic commands, some
    advanced stuff

add toc
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 90ab321e..244d1993 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -10,6 +10,8 @@ because I find it too confusing and unreliable.
 
 So off we go.
 
+[[!toc levels=3]]
+
 # Installation
 
 Since this is a conversion (and not a new install), our procedure is

fix heading
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 1fed42b3..90ab321e 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -614,7 +614,7 @@ then you mount the root filesystem and all the others:
     mount -t tmpfs tmpfs /mnt/run &&
     mkdir /mnt/run/lock
 
-# Remaining work
+# Remaining issues
 
 TODO: swap. how do we do it?
 

document lockups
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index c6d12eb0..1fed42b3 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -618,8 +618,6 @@ then you mount the root filesystem and all the others:
 
 TODO: swap. how do we do it?
 
-TODO: talk about the lockups during migration
-
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 
 TODO: ship my on .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
@@ -708,6 +706,174 @@ reporting the right timestamps in the end, although it does feel like
 *starting* all the processes (even if not doing any work yet) could
 skew the results.
 
+## Hangs during procedure
+
+During the procedure, it happened a few times where any ZFS command
+would completely hang. It seems that using an external USB drive to
+sync stuff didn't work so well: sometimes it would reconnect under a
+different device name (from `sdc` to `sdd`, for example), and this
+would greatly confuse ZFS.
+
+Here, for example, is `sdd` reappearing out of the blue:
+
+    May 19 11:22:53 curie kernel: [  699.820301] scsi host4: uas
+    May 19 11:22:53 curie kernel: [  699.820544] usb 2-1: authorized to connect
+    May 19 11:22:53 curie kernel: [  699.922433] scsi 4:0:0:0: Direct-Access     ROG      ESD-S1C          0    PQ: 0 ANSI: 6
+    May 19 11:22:53 curie kernel: [  699.923235] sd 4:0:0:0: Attached scsi generic sg2 type 0
+    May 19 11:22:53 curie kernel: [  699.923676] sd 4:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
+    May 19 11:22:53 curie kernel: [  699.923788] sd 4:0:0:0: [sdd] Write Protect is off
+    May 19 11:22:53 curie kernel: [  699.923949] sd 4:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+    May 19 11:22:53 curie kernel: [  699.924149] sd 4:0:0:0: [sdd] Optimal transfer size 33553920 bytes
+    May 19 11:22:53 curie kernel: [  699.961602]  sdd: sdd1 sdd2 sdd3 sdd4
+    May 19 11:22:53 curie kernel: [  699.996083] sd 4:0:0:0: [sdd] Attached SCSI disk
+
+Next time I run a ZFS command (say `zpool list`), the command
+completely hangs (`D` state) and this comes up in the logs:
+
+    May 19 11:34:21 curie kernel: [ 1387.914843] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=71344128 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914859] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=205565952 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914874] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272789504 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.914906] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=270336 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.914932] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073225728 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.914948] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073487872 size=8192 flags=b08c1
+    May 19 11:34:21 curie kernel: [ 1387.915165] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272793600 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.915183] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=339853312 size=4096 flags=184880
+    May 19 11:34:21 curie kernel: [ 1387.915648] WARNING: Pool 'bpool' has encountered an uncorrectable I/O failure and has been suspended.
+    May 19 11:34:21 curie kernel: [ 1387.915648] 
+    May 19 11:37:25 curie kernel: [ 1571.558614] task:txg_sync        state:D stack:    0 pid:  997 ppid:     2 flags:0x00004000
+    May 19 11:37:25 curie kernel: [ 1571.558623] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.558640]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.558650]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.558670]  schedule_timeout+0x8b/0x140
+    May 19 11:37:25 curie kernel: [ 1571.558675]  ? __next_timer_interrupt+0x110/0x110
+    May 19 11:37:25 curie kernel: [ 1571.558678]  io_schedule_timeout+0x4c/0x80
+    May 19 11:37:25 curie kernel: [ 1571.558689]  __cv_timedwait_common+0x12b/0x160 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.558694]  ? add_wait_queue_exclusive+0x70/0x70
+    May 19 11:37:25 curie kernel: [ 1571.558702]  __cv_timedwait_io+0x15/0x20 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.558816]  zio_wait+0x129/0x2b0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.558929]  dsl_pool_sync+0x461/0x4f0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559032]  spa_sync+0x575/0xfa0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559138]  ? spa_txg_history_init_io+0x101/0x110 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559245]  txg_sync_thread+0x2e0/0x4a0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559354]  ? txg_fini+0x240/0x240 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559366]  thread_generic_wrapper+0x6f/0x80 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.559376]  ? __thread_exit+0x20/0x20 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.559379]  kthread+0x11b/0x140
+    May 19 11:37:25 curie kernel: [ 1571.559382]  ? __kthread_bind_mask+0x60/0x60
+    May 19 11:37:25 curie kernel: [ 1571.559386]  ret_from_fork+0x22/0x30
+    May 19 11:37:25 curie kernel: [ 1571.559401] task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
+    May 19 11:37:25 curie kernel: [ 1571.559404] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.559409]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.559412]  ? __kmalloc_node+0x141/0x2b0
+    May 19 11:37:25 curie kernel: [ 1571.559417]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559420]  schedule_preempt_disabled+0xa/0x10
+    May 19 11:37:25 curie kernel: [ 1571.559424]  __mutex_lock.constprop.0+0x133/0x460
+    May 19 11:37:25 curie kernel: [ 1571.559435]  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
+    May 19 11:37:25 curie kernel: [ 1571.559537]  spa_all_configs+0x41/0x120 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559644]  zfs_ioc_pool_configs+0x17/0x70 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559752]  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559758]  ? _copy_from_user+0x28/0x60
+    May 19 11:37:25 curie kernel: [ 1571.559860]  zfsdev_ioctl+0x53/0xe0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.559866]  __x64_sys_ioctl+0x83/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559869]  do_syscall_64+0x33/0x80
+    May 19 11:37:25 curie kernel: [ 1571.559873]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    May 19 11:37:25 curie kernel: [ 1571.559876] RIP: 0033:0x7fcf0ef32cc7
+    May 19 11:37:25 curie kernel: [ 1571.559878] RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    May 19 11:37:25 curie kernel: [ 1571.559881] RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
+    May 19 11:37:25 curie kernel: [ 1571.559883] RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
+    May 19 11:37:25 curie kernel: [ 1571.559885] RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
+    May 19 11:37:25 curie kernel: [ 1571.559886] R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
+    May 19 11:37:25 curie kernel: [ 1571.559888] R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
+    May 19 11:37:25 curie kernel: [ 1571.559980] task:zpool           state:D stack:    0 pid:11815 ppid:  3816 flags:0x00004000
+    May 19 11:37:25 curie kernel: [ 1571.559983] Call Trace:
+    May 19 11:37:25 curie kernel: [ 1571.559988]  __schedule+0x282/0x870
+    May 19 11:37:25 curie kernel: [ 1571.559992]  schedule+0x46/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.559995]  io_schedule+0x42/0x70
+    May 19 11:37:25 curie kernel: [ 1571.560004]  cv_wait_common+0xac/0x130 [spl]
+    May 19 11:37:25 curie kernel: [ 1571.560008]  ? add_wait_queue_exclusive+0x70/0x70
+    May 19 11:37:25 curie kernel: [ 1571.560118]  txg_wait_synced_impl+0xc9/0x110 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560223]  txg_wait_synced+0xc/0x40 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560325]  spa_export_common+0x4cd/0x590 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560430]  ? zfs_log_history+0x9c/0xf0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560537]  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560543]  ? _copy_from_user+0x28/0x60
+    May 19 11:37:25 curie kernel: [ 1571.560644]  zfsdev_ioctl+0x53/0xe0 [zfs]
+    May 19 11:37:25 curie kernel: [ 1571.560649]  __x64_sys_ioctl+0x83/0xb0
+    May 19 11:37:25 curie kernel: [ 1571.560653]  do_syscall_64+0x33/0x80
+    May 19 11:37:25 curie kernel: [ 1571.560656]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    May 19 11:37:25 curie kernel: [ 1571.560659] RIP: 0033:0x7fdc23be2cc7
+    May 19 11:37:25 curie kernel: [ 1571.560661] RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    May 19 11:37:25 curie kernel: [ 1571.560664] RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
+    May 19 11:37:25 curie kernel: [ 1571.560666] RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
+    May 19 11:37:25 curie kernel: [ 1571.560667] RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
+    May 19 11:37:25 curie kernel: [ 1571.560669] R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
+    May 19 11:37:25 curie kernel: [ 1571.560671] R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
+
+Here's another example, where you see the USB controller bleeping out
+and back into existence:
+
+    mai 19 11:38:39 curie kernel: usb 2-1: USB disconnect, device number 2
+    mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronizing SCSI cache
+    mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
+    mai 19 11:39:25 curie kernel: INFO: task zed:1564 blocked for more than 241 seconds.
+    mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
+    mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+    mai 19 11:39:25 curie kernel: task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
+    mai 19 11:39:25 curie kernel: Call Trace:
+    mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
+    mai 19 11:39:25 curie kernel:  ? __kmalloc_node+0x141/0x2b0
+    mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
+    mai 19 11:39:25 curie kernel:  schedule_preempt_disabled+0xa/0x10
+    mai 19 11:39:25 curie kernel:  __mutex_lock.constprop.0+0x133/0x460
+    mai 19 11:39:25 curie kernel:  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
+    mai 19 11:39:25 curie kernel:  spa_all_configs+0x41/0x120 [zfs]
+    mai 19 11:39:25 curie kernel:  zfs_ioc_pool_configs+0x17/0x70 [zfs]
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
+    mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
+    mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
+    mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    mai 19 11:39:25 curie kernel: RIP: 0033:0x7fcf0ef32cc7
+    mai 19 11:39:25 curie kernel: RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
+    mai 19 11:39:25 curie kernel: RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
+    mai 19 11:39:25 curie kernel: RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
+    mai 19 11:39:25 curie kernel: R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
+    mai 19 11:39:25 curie kernel: R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
+    mai 19 11:39:25 curie kernel: INFO: task zpool:11815 blocked for more than 241 seconds.
+    mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
+    mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+    mai 19 11:39:25 curie kernel: task:zpool           state:D stack:    0 pid:11815 ppid:  2621 flags:0x00004004
+    mai 19 11:39:25 curie kernel: Call Trace:
+    mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
+    mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
+    mai 19 11:39:25 curie kernel:  io_schedule+0x42/0x70
+    mai 19 11:39:25 curie kernel:  cv_wait_common+0xac/0x130 [spl]
+    mai 19 11:39:25 curie kernel:  ? add_wait_queue_exclusive+0x70/0x70
+    mai 19 11:39:25 curie kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
+    mai 19 11:39:25 curie kernel:  txg_wait_synced+0xc/0x40 [zfs]
+    mai 19 11:39:25 curie kernel:  spa_export_common+0x4cd/0x590 [zfs]
+    mai 19 11:39:25 curie kernel:  ? zfs_log_history+0x9c/0xf0 [zfs]
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
+    mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
+    mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
+    mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
+    mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
+    mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
+    mai 19 11:39:25 curie kernel: RIP: 0033:0x7fdc23be2cc7
+    mai 19 11:39:25 curie kernel: RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+    mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
+    mai 19 11:39:25 curie kernel: RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
+    mai 19 11:39:25 curie kernel: RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
+    mai 19 11:39:25 curie kernel: R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
+    mai 19 11:39:25 curie kernel: R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
+
+I understand those are rather extreme conditions: I would fully expect
+the pool to stop working if the underlying drives disappear. What
+doesn't seem acceptable is that a command would completely hang like
+this.
+
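The hung-task warnings above all show tasks stuck in state `D` (uninterruptible sleep). A quick way to spot such tasks live, before the kernel's hung-task detector fires, is a sketch like this (standard procps column specifiers; the sysrq trick needs root):

```shell
# List processes in uninterruptible sleep (state D), along with the
# kernel function they are blocked in -- the same tasks the hung-task
# detector reports in the traces above.
ps -eo pid,state,wchan:32,comm | awk 'NR == 1 || $2 == "D"'
# With root, the kernel can also dump blocked-task backtraces to dmesg
# on demand: echo w > /proc/sysrq-trigger
```

On a healthy system this prints only the header line; during a hang like the one above, `zed` and `zpool` would show up blocked in `cv_wait_common` or `txg_wait_synced_impl`.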
 # References
 
 ### ZFS documentation

move fio discussion to appendix
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 4861eac8..c6d12eb0 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -592,81 +592,6 @@ Another test was performed while in "rescue" mode but was ultimately
 lost. It's actually still in the old M.2 drive, but I cannot mount
 that device with the external USB controller I have right now.
 
-## Side note about fio job files
-
-I would love to have just a single `.fio` job file that lists multiple
-jobs to run *serially*. For example, this file describes the above
-workload pretty well:
-
-[[!format txt """
-[global]
-# cargo-culting Salter
-fallocate=none
-ioengine=posixaio
-runtime=60
-time_based=1
-end_fsync=1
-stonewall=1
-group_reporting=1
-# no need to drop caches, done by default
-# invalidate=1
-
-# Single 4KiB random read/write process
-[randread-4k-4g-1x]
-stonewall=1
-rw=randread
-bs=4k
-size=4g
-numjobs=1
-iodepth=1
-
-[randwrite-4k-4g-1x]
-stonewall=1
-rw=randwrite
-bs=4k
-size=4g
-numjobs=1
-iodepth=1
-
-# 16 parallel 64KiB random read/write processes:
-[randread-64k-256m-16x]
-stonewall=1
-rw=randread
-bs=64k
-size=256m
-numjobs=16
-iodepth=16
-
-[randwrite-64k-256m-16x]
-stonewall=1
-rw=randwrite
-bs=64k
-size=256m
-numjobs=16
-iodepth=16
-
-# Single 1MiB random read/write process
-[randread-1m-16g-1x]
-stonewall=1
-rw=randread
-bs=1m
-size=16g
-numjobs=1
-iodepth=1
-
-[randwrite-1m-16g-1x]
-stonewall=1
-rw=randwrite
-bs=1m
-size=16g
-numjobs=1
-iodepth=1
-"""]]
-
-... except the jobs are actually run in parallel, even though they are
-`stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
-to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
-
 # Recovery procedures
 
 For test purposes, I unmounted all systems during the procedure:
@@ -705,11 +630,84 @@ TODO: send/recv, automated snapshots
 TODO: merge this documentation with the [[hardware/tubman]]
 documentation. maybe create a separate zfs primer?
 
+## fio improvements
+
 I really want to improve my experience with `fio`. Right now, I'm just
 cargo-culting stuff from other folks and I don't really like
 it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
 that it doesn't really work that well for disk tests.
 
+I would love to have just a single `.fio` job file that lists multiple
+jobs to run *serially*. For example, this file describes the above
+workload pretty well:
+
+    [global]
+    # cargo-culting Salter
+    fallocate=none
+    ioengine=posixaio
+    runtime=60
+    time_based=1
+    end_fsync=1
+    stonewall=1
+    group_reporting=1
+    # no need to drop caches, done by default
+    # invalidate=1
+
+    # Single 4KiB random read/write process
+    [randread-4k-4g-1x]
+    rw=randread
+    bs=4k
+    size=4g
+    numjobs=1
+    iodepth=1
+
+    [randwrite-4k-4g-1x]
+    rw=randwrite
+    bs=4k
+    size=4g
+    numjobs=1
+    iodepth=1
+
+    # 16 parallel 64KiB random read/write processes:
+    [randread-64k-256m-16x]
+    rw=randread
+    bs=64k
+    size=256m
+    numjobs=16
+    iodepth=16
+
+    [randwrite-64k-256m-16x]
+    rw=randwrite
+    bs=64k
+    size=256m
+    numjobs=16
+    iodepth=16
+
+    # Single 1MiB random read/write process
+    [randread-1m-16g-1x]
+    rw=randread
+    bs=1m
+    size=16g
+    numjobs=1
+    iodepth=1
+
+    [randwrite-1m-16g-1x]
+    rw=randwrite
+    bs=1m
+    size=16g
+    numjobs=1
+    iodepth=1
+
+... except the jobs are actually started in parallel, even though they
+are `stonewall`'d, as far as I can tell by the reports. I [sent a
+mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u) to the [fio mailing list](https://lore.kernel.org/fio/) for clarification. 
+
+It looks like the jobs are *started* in parallel, but actually
+(correctly) run serially. It seems like this might just be a matter of
+reporting the right timestamps in the end, although it does feel like
+*starting* all the processes (even if not doing any work yet) could
+skew the results.
+
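If the reports really are interleaved, one workaround is to run each section as its own `fio` invocation, using fio's `--section` option to select a single job from the file. A sketch that generates those invocations from a job file (`jobs.fio` is a stand-in name; pipe the output to `sh` to actually run them):

```shell
# Recreate a minimal job file (stand-in for the full one above).
cat > jobs.fio <<'EOF'
[global]
runtime=60

[randread-4k-4g-1x]
rw=randread

[randwrite-4k-4g-1x]
rw=randwrite
EOF

# Emit one fio command per non-global section; each invocation then runs
# (and reports on) exactly one job, strictly serially.
grep -o '^\[[^]]*\]' jobs.fio | tr -d '[]' | grep -v '^global$' |
while read -r section; do
    echo fio --section="$section" jobs.fio
done
```

This keeps the single-file description while sidestepping the parallel-start question entirely, at the cost of one fio process per job.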
 # References
 
 ### ZFS documentation

migration completed
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 56403158..4861eac8 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -260,6 +260,17 @@ the things that get mounted on mountpoint and hold the actual files.
 
    ... and no, just creating `/mnt/var/lib` doesn't fix that problem.
 
+   Also note that you will *probably* need to change the storage driver
+   in Docker; see the [zfs-driver documentation](https://docs.docker.com/storage/storagedriver/zfs-driver/) for details, but
+   basically, I did:
+   
+       echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
+
+   Note, as an aside, that podman has the same problem (and a similar
+   solution):
+   
+       printf '[storage]\ndriver = "zfs"\n' > /etc/containers/storage.conf
+
  * make a `tmpfs` for `/run`:
 
         mkdir /mnt/run &&
@@ -376,14 +387,15 @@ seems to be at around 5Gbps:
 So it shouldn't cap at that speed. It's possible the USB adapter is
 failing to give me the full speed though.
 
-TODO: we are here
-
-TODO: ddrescue LVM setup to *other* NVMe drive, to allow for similar
-benchmarks later
-
-TODO: benchmark before, in single-user mode?
+At this point, we're about ready to do the final configuration. We
+drop to single user mode and do the rest of the procedure. That used
+to be `shutdown now`, but it seems like the systemd switch broke that,
+so now you can reboot into GRUB and pick the "recovery"
+option. Alternatively, you might try `systemctl rescue` (untested).
 
-TODO: rsync in single user mode, then continue below
+I also wanted to copy the drive over to another new NVMe drive, but
+that failed: it looks like the USB controller I have doesn't work with
+older, non-NVMe drives.
 
 # Boot configuration
 
@@ -422,7 +434,8 @@ Enable the service:
 
     systemctl enable zfs-import-bpool.service
 
-TODO: fstab? swap?
+I had to trim down `/etc/fstab` and `/etc/crypttab` to only contain
+references to the legacy filesystems (`/srv` is still BTRFS!).
 
 Rebuild boot loader with support for ZFS, but also to workaround
 GRUB's missing zpool-features support:
@@ -474,22 +487,19 @@ Exit chroot:
 
 # Finalizing
 
-TODO: move Docker to the right place:
+One last sync was done in rescue mode:
 
-    rm /var/lib/docker/
-    mv /home/docker/* /var/lib/docker/
-    rmdir /home/docker
-
-TODO: last sync in single user mode
+    for fs in /boot/ /boot/efi/ / /home/; do
+        echo "syncing $fs to /mnt$fs..." && 
+        rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
+    done
 
-Unmount filesystems:
+Then we unmount all filesystems:
  
     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
     zpool export -a
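The unmount one-liner above packs a few tricks; a dry-run version (sketch) that only prints what would be unmounted makes them visible:

```shell
# Same pipeline as above minus the umount: list non-ZFS mounts under
# /mnt. `mount` prints mounts in mount order, so `tac` reverses it and
# nested mount points come out before their parents -- the order they
# must be unmounted in.
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}'
```

The ZFS datasets themselves are skipped here on purpose: they are handled by `zpool export -a` instead.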
 
-TODO: reboot
-TODO: swap drives
-TODO: new benchmark
+Reboot, swap the drives, and boot in ZFS. Hurray!
 
 # Benchmarks
 
@@ -578,6 +588,10 @@ what it's worth. Those results are curiously inconsistent with the
 non-idle test: many tests perform more *poorly* than when the
 workstation was busy, which is troublesome.
 
+Another test was performed while in "rescue" mode but was ultimately
+lost. It's actually still in the old M.2 drive, but I cannot mount
+that device with the external USB controller I have right now.
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple
@@ -653,8 +667,34 @@ iodepth=1
 `stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
 to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
 
+# Recovery procedures
+
+For test purposes, I unmounted all filesystems during the procedure:
+
+    umount /mnt/boot/efi /mnt/boot/run
+    umount -a -t zfs
+    zpool export -a
+
+And disconnected the drive, to see how I would recover this system
+from another Linux system in case of a total motherboard failure.
+
+To import an existing pool, plug in the device and import the pool with
+an alternate root so it doesn't mount over your existing filesystems;
+then mount the root filesystem and all the others:
+
+    zpool import -l -a -R /mnt &&
+    zfs mount rpool/ROOT/debian &&
+    zfs mount -a &&
+    mount /dev/sdc2 /mnt/boot/efi &&
+    mount -t tmpfs tmpfs /mnt/run &&
+    mkdir /mnt/run/lock
+
 # Remaining work
 
+TODO: swap. how do we do it?
+
+TODO: talk about the lockups during migration
+
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!
 
 TODO: ship my on .debs? `dkms mkbmdeb zfs/2.0.3` is the magic command
@@ -662,6 +702,9 @@ here.
 
 TODO: send/recv, automated snapshots
 
+TODO: merge this documentation with the [[hardware/tubman]]
+documentation. maybe create a separate zfs primer?
+
 I really want to improve my experience with `fio`. Right now, I'm just
 cargo-culting stuff from other folks and I don't really like
 it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
diff --git a/hardware/tubman.md b/hardware/tubman.md
index 4b0fc1d6..368cd904 100644
--- a/hardware/tubman.md
+++ b/hardware/tubman.md
@@ -444,6 +444,28 @@ IO statistics, every second:
 
     zpool iostat 1
 
+### Mounting
+
+After a `zfs list`, you should see the datasets you can mount. You can
+mount one by name, for example with:
+
+    zfs mount bpool/ROOT/debian
+
+Note that it will mount the device in its pre-defined `mountpoint`
+property. If you want to mount it elsewhere, this is the magic
+formula:
+
+    mount -o zfsutil -t zfs bpool/BOOT/debian /mnt
+
+If the dataset is encrypted, however, you first need to unlock it
+with:
+
+    zpool import -l -a -R /mnt
+
+Note that the above is preferred: it will set the entire imported pool
+to mount under `/mnt` instead of the toplevel. That way you don't need
+the earlier hack to mount it elsewhere.
+
 ### Snapshots
 
 Creating:

another benchmarks set done
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 8ccfb228..56403158 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -146,9 +146,8 @@ This is a more typical pool creation.
         zpool create \
             -o ashift=12 \
             -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
-            -O acltype=posixacl -O xattr=sa \
+            -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
             -O compression=zstd \
-            -O dnodesize=auto \
             -O relatime=on \
             -O canmount=off \
             -O mountpoint=/ -R /mnt \
@@ -558,6 +557,27 @@ assume my work affected the benchmarks greatly.
    * read: 165MiB/s (173MB/s), sync: 172MiB/s (180MB/s)
    * write: 74.7MiB/s (78.3MB/s), sync: 38.5MiB/s (40.4MB/s)
 
+### Somewhat idle test, mdadm/luks/lvm/ext4
+
+This test was done while I was away from my workstation. Everything
+was still running, so a bunch of stuff was probably waking up and
+disturbing the test, but it should be more reliable than the above.
+
+ * 4k blocks, 4GB, 1 process:
+   * read: 16.8MiB/s (17.7MB/s), sync: 18.9MiB/s (19.8MB/s)
+   * write: 73.8MiB/s (77.3MB/s), sync: 847KiB/s (867kB/s)
+ * 64k blocks, 256MB, 16 process:
+   * read: 526MiB/s (552MB/s), sync: 520MiB/s (546MB/s)
+   * write: 98.3MiB/s (103MB/s), sync: 29.6MiB/s (30.0MB/s)
+ * 1m blocks, 16G 1 process:
+   * read: 148MiB/s (155MB/s), sync: 162MiB/s (170MB/s)
+   * write: 109MiB/s (114MB/s), sync: 48.6MiB/s (50.0MB/s)
+
+It looks like the 64k test is the one that can max out the SSD, for
+what it's worth. Those results are curiously inconsistent with the
+non-idle test: many tests perform more *poorly* than when the
+workstation was busy, which is troublesome.
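To put a number on that inconsistency, here is the gap on the single-process 4k random read case, with the figures taken from the two result lists above:

```shell
# Busy run: 21.5 MiB/s; idle run: 16.8 MiB/s -- the idle machine is
# measurably *slower* on this test, which is the surprising part.
awk 'BEGIN { busy = 21.5; idle = 16.8;
             printf "idle is %.0f%% slower than busy\n",
                    (busy - idle) / busy * 100 }'
```

That is roughly a 22% regression with *less* load on the machine, which suggests the numbers have substantial run-to-run noise.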
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple

and power consumptions improvements!
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 5baade96..e024a55d 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -88,7 +88,11 @@ Cons:
    than my current (2021-09-27) laptop (Purism 13v4, currently says
    7h). power problems confirmed by [this report from Linux After
    Dark][linux-after-dark-framework] which also mentions that the USB adapters take power *even
-   when not in use* and quite a bit (400mW in some cases!)
+   when not in use* and quite a bit (400mW in some cases!). Update:
+   apparently the [second generation laptop][] has improvements to the
+   battery life, namely associated with the "big-little" design of the
+   12th gen Intel chips, but also [standby consumption](https://news.ycombinator.com/item?id=31433666) and
+   [firmware updates for various chipsets](https://news.ycombinator.com/item?id=31434021).
 
 [linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
 
@@ -97,7 +101,10 @@ Cons:
    After Dark][linux-after-dark-framework]), so unlikely to have one in the future</del>
    Update: it seems like they cracked that nut and will ship an
    [ethernet expansion card](https://frame.work/ca/en/products/ethernet-expansion-card) in their [second generation
-   laptop](https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646), which is impressive
+   laptop][], which is impressive. Downside: the [chipset is
+   Realtek](https://news.ycombinator.com/item?id=31434483), so probably firmware-blobby.
+
+[second generation laptop]: https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646
 
  * a bit pricey for the performance, especially when compared to the
    competition (e.g. Dell XPS, Apple M1), but may be worth waiting for

update: framework will ship an ethernet port, whoohoo!
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index f2020631..5baade96 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -92,9 +92,12 @@ Cons:
 
 [linux-after-dark-framework]: https://linuxafterdark.net/linux-after-dark-episode-14/
 
- * no RJ-45 port, and attempts at designing ones are failing because
-   the modular plugs are too thin to fit (according to [Linux After
-   Dark][linux-after-dark-framework]), so unlikely to have one in the future
+ * <del>no RJ-45 port, and attempts at designing ones are failing
+   because the modular plugs are too thin to fit (according to [Linux
+   After Dark][linux-after-dark-framework]), so unlikely to have one in the future</del>
+   Update: it seems like they cracked that nut and will ship an
+   [ethernet expansion card](https://frame.work/ca/en/products/ethernet-expansion-card) in their [second generation
+   laptop](https://community.frame.work/t/introducing-the-new-and-upgraded-framework-laptop/18646), which is impressive
 
  * a bit pricey for the performance, especially when compared to the
    competition (e.g. Dell XPS, Apple M1), but may be worth waiting for

more todos, typos
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 80c44b0d..8ccfb228 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -379,6 +379,9 @@ failing to give me the full speed though.
 
 TODO: we are here
 
+TODO: ddrescue LVM setup to *other* NVMe drive, to allow for similar
+benchmarks later
+
 TODO: benchmark before, in single-user mode?
 
 TODO: rsync in single user mode, then continue below
@@ -420,6 +423,8 @@ Enable the service:
 
     systemctl enable zfs-import-bpool.service
 
+TODO: fstab? swap?
+
 Rebuild boot loader with support for ZFS, but also to workaround
 GRUB's missing zpool-features support:
 
@@ -483,7 +488,9 @@ Unmount filesystems:
     mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
     zpool export -a
 
-reboot to the new system.
+TODO: reboot
+TODO: swap drives
+TODO: new benchmark
 
 # Benchmarks
 
@@ -651,7 +658,7 @@ that it doesn't really work that well for disk tests.
  * [FreeBSD handbook](https://docs.freebsd.org/en/books/handbook/zfs/): FreeBSD-specific of course, but
    excellent as always
  * [OpenZFS FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html)
- * [OpenZFS: Debian Bullseye root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.htm): excellent documentation, basis
+ * [OpenZFS: Debian Bullseye root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html): excellent documentation, basis
    for the above procedure
  * [another ZFS on linux documentation](https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/)
 

fix blob format
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2f476f9a..80c44b0d 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -557,7 +557,7 @@ I would love to have just a single `.fio` job file that lists multiple
 jobs to run *serially*. For example, this file describes the above
 workload pretty well:
 
-[[!format """
+[[!format txt """
 [global]
 # cargo-culting Salter
 fallocate=none

move benchmark script to git
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index aeebebd3..2f476f9a 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -508,43 +508,10 @@ article](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-
 already pretty strange. But also it doesn't include stuff like
 dropping caches or repeating results.
 
-So here's my variation. 
-
-[[!format sh """
-#!/bin/sh
-
-set -e
-
-common_flags="--group_reporting --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
-
-while read type bs size jobs extra ; do
-    name="${type}${bs}${size}${jobs}x$extra"
-    echo "dropping caches..." >&2
-    sync
-    echo 3 > /proc/sys/vm/drop_caches
-    echo "running job $name..." >&2
-    fio $common_flags --name="$name" \
-        --rw="$type" \
-        --bs="$bs" \
-        --size="$size" \
-        --numjobs="$jobs" \
-        --iodepth="$jobs" \
-        $extra
-done <<EOF
-randread  4k 4g 1
-randwrite 4k 4g 1
-randread  64k 256m 16
-randwrite 64k 256m 16
-randread  1m 16g 1
-randwrite 1m 16g 1
-randread  4k 4g 1 --fsync=1
-randwrite 4k 4g 1 --fsync=1
-randread  64k 256m 16 --fsync=1
-randwrite 64k 256m 16 --fsync=1
-randread  1m 16g 1 --fsync=1
-randwrite 1m 16g 1 --fsync=1
-EOF
-"""]]
+So here's my variation, which I called [fio-ars-bench.sh](https://gitlab.com/anarcat/scripts/-/blob/main/fio-ars-bench.sh) for
+now. It just batches a bunch of `fio` tests, one by one, 60 seconds
+each. It should take about 12 minutes to run, as there are three pairs
+of read/write tests, each run with and without `fsync`.
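The "dropping caches" step the script performs between runs is the standard procfs knob; a guarded sketch (root is required to actually drop the page cache, hence the fallbacks):

```shell
# Flush dirty pages to disk first, then ask the kernel to drop the page
# cache, dentries and inodes (value 3), so the next test starts cold.
sync
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches 2>/dev/null \
        || echo "drop_caches not writable here"
else
    echo "not root, skipping drop_caches"
fi
```

Without this step, a read test that follows a write test of the same file would largely measure the page cache rather than the disk.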
 
 And before I show the results, it should be noted there is a huge
 caveat here. The test is done between:

some benchmarks
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 0249f8d4..aeebebd3 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -379,11 +379,9 @@ failing to give me the full speed though.
 
 TODO: we are here
 
-TODO: resync
+TODO: benchmark before, in single-user mode?
 
-TODO: benchmark before
-
-TODO: resync in single user mode, then 
+TODO: rsync in single user mode, then continue below
 
 # Boot configuration
 
@@ -517,7 +515,7 @@ So here's my variation.
 
 set -e
 
-common_flags="--group_reporting --minimal --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
+common_flags="--group_reporting --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
 
 while read type bs size jobs extra ; do
     name="${type}${bs}${size}${jobs}x$extra"
@@ -533,18 +531,18 @@ while read type bs size jobs extra ; do
         --iodepth="$jobs" \
         $extra
 done <<EOF
-randwrite 4k 4g 1
 randread  4k 4g 1
-randwrite 64k 256m 16
+randwrite 4k 4g 1
 randread  64k 256m 16
-randwrite 1m 16g 1
+randwrite 64k 256m 16
 randread  1m 16g 1
-randwrite 4k 4g 1 --fsync=1
+randwrite 1m 16g 1
 randread  4k 4g 1 --fsync=1
-randwrite 64k 256m 16 --fsync=1
+randwrite 4k 4g 1 --fsync=1
 randread  64k 256m 16 --fsync=1
-randwrite 1m 16g 1 --fsync=1
+randwrite 64k 256m 16 --fsync=1
 randread  1m 16g 1 --fsync=1
+randwrite 1m 16g 1 --fsync=1
 EOF
 """]]
 
@@ -568,6 +566,24 @@ not on reads. It's also possible it outperforms it on both, because
 it's a newer drive. A new test might be possible with a new external
 USB drive as well, although I doubt I will find the time to do this.
 
+## Results
+
+### Non-idle test, mdadm/luks/lvm/ext4
+
+Those tests were done with the above script, in `/home`, while working
+on other things on my workstation, which generally felt sluggish, so I
+assume my work affected the benchmarks greatly.
+
+ * 4k blocks, 4GB, 1 process:
+   * read: 21.5MiB/s (22.5MB/s), sync: 20.8MiB/s (21.9MB/s)
+   * write: 139MiB/s (146MB/s), sync: 1118KiB/s (1145kB/s)
+ * 64k blocks, 256MB, 16 process:
+   * read: 513MiB/s (537MB/s), sync: 512MiB/s (537MB/s)
+   * write: 160MiB/s (167MB/s), sync: 41.5MiB/s (43.5MB/s)
+ * 1m blocks, 16G 1 process:
+   * read: 165MiB/s (173MB/s), sync: 172MiB/s (180MB/s)
+   * write: 74.7MiB/s (78.3MB/s), sync: 38.5MiB/s (40.4MB/s)
+
 ## Side note about fio job files
 
 I would love to have just a single `.fio` job file that lists multiple
@@ -640,8 +656,8 @@ iodepth=1
 """]]
 
 ... except the jobs are actually run in parallel, even though they are
-`stonewall`'d, as far as I can tell by the reports. I sent a mail to
-the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
+`stonewall`'d, as far as I can tell by the reports. I [sent a mail](https://lore.kernel.org/fio/87pmkaeicg.fsf@curie.anarc.at/T/#u)
+to the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
 
 # Remaining work
 
@@ -652,6 +668,11 @@ here.
 
 TODO: send/recv, automated snapshots
 
+I really want to improve my experience with `fio`. Right now, I'm just
+cargo-culting stuff from other folks and I don't really like
+it. [stressant](https://stressant.readthedocs.io/) is a good example of my struggles, in the sense
+that it doesn't really work that well for disk tests.
+
 # References
 
 ### ZFS documentation

expand on benchmarks
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index 2fda422d..0249f8d4 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -21,6 +21,10 @@ So, install the required packages, on the current system:
 
     apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
 
+We also tell DKMS that we need to rebuild the initrd when upgrading:
+
+    echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
+
 # Partitioning
 
 This is going to partition `/dev/sdc` with:
@@ -336,6 +340,30 @@ idea. At this point, the procedure was restarted all the way back to
 which, surprisingly, doesn't require any confirmation (`zpool destroy
 rpool`).
 
+The second run was cleaner:
+
+    root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
+            echo "syncing $fs to /mnt$fs..." && 
+            rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
+        done
+    syncing /boot/ to /mnt/boot/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
+    syncing /boot/efi/ to /mnt/boot/efi/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/110)  
+    syncing / to /mnt/...
+     28,019,033,070  97%   42.03MB/s    0:10:35 (xfr#703671, ir-chk=1093/833515)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
+    could not make way for new symlink: var/lib/docker
+     34,081,807,102  98%   44.84MB/s    0:12:04 (xfr#736580, to-chk=0/867723)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+    syncing /home/ to /mnt/home/...
+    rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
+    IO error encountered -- skipping file deletion
+     24,043,086,450  96%   62.03MB/s    0:06:09 (xfr#151819, ir-chk=15117/172571)
+    file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/4C1FDBFEA976FF924D062FB990B24B897A77B84B"
+    315,423,626,507  96%   67.09MB/s    1:14:43 (xfr#2256845, to-chk=0/2994364)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+
+
 Also note the transfer speed: we seem capped at 76MB/s, or
 608Mbit/s. This is not as fast as I was expecting: the USB connection
 seems to be at around 5Gbps:
@@ -349,14 +377,8 @@ seems to be at around 5Gbps:
 So it shouldn't cap at that speed. It's possible the USB adapter is
 failing to give me the full speed though.
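As a quick sanity check of that claim, the observed throughput can be converted to line rate with plain shell arithmetic (nothing here is specific to this setup):

```shell
# convert the observed rsync throughput (MB/s) to Mbit/s and compare
# with the 5000 Mbit/s the USB bus advertises
observed_mbytes=76
link_mbits=5000
observed_mbits=$((observed_mbytes * 8))
echo "$observed_mbits Mbit/s observed vs $link_mbits Mbit/s link"
# -> 608 Mbit/s observed vs 5000 Mbit/s link
```

So the transfer uses barely an eighth of the advertised link speed, which is why the adapter is suspect.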
 
-TODO: make a new paste
-
 TODO: we are here
 
-TODO:
-
-    echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
-
 TODO: resync
 
 TODO: benchmark before
@@ -478,7 +500,7 @@ This is a test that was ran in single-user mode using fio and the
 
         fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
 
- * Single 1MiB random write process
+ * Single 1MiB random write process:
 
         fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
 
@@ -488,7 +510,7 @@ article](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-
 already pretty strange. But also it doesn't include stuff like
 dropping caches or repeating results.
 
-So here's my variation
+So here's my variation:
 
 [[!format sh """
 #!/bin/sh
@@ -497,14 +519,12 @@ set -e
 
 common_flags="--group_reporting --minimal --fallocate=none --ioengine=posixaio --runtime=60 --time_based --end_fsync=1"
 
-# + --directory=/test?
-
 while read type bs size jobs extra ; do
     name="${type}${bs}${size}${jobs}x$extra"
-    echo "dropping caches..."
+    echo "dropping caches..." >&2
     sync
     echo 3 > /proc/sys/vm/drop_caches
-    echo "running job $name..."
+    echo "running job $name..." >&2
     fio $common_flags --name="$name" \
         --rw="$type" \
         --bs="$bs" \
@@ -528,6 +548,101 @@ randread  1m 16g 1 --fsync=1
 EOF
 """]]
 
+And before I show the results, it should be noted there is a huge
+caveat here. The test is done between:
+
+ * a WDC WDS500G1B0B-00AS40 SSD, a [WD blue M.2 2280 SSD](https://www.westerndigital.com/products/internal-drives/wd-blue-sata-m-2-ssd#WDS500G2B0B) (running
+   mdadm/LUKS/LVM/ext4), that is at least 5 years old, spec'd at
+   560MB/s read, 530MB/s write
+
+ * a brand new [WD blue SN550](https://www.westerndigital.com/products/internal-drives/wd-blue-sn550-nvme-ssd#WDS500G2B0C) drive, which claims to be able to
+   push 2400MB/s read and 1750MB/s write
+
+In practice, I'm going to assume we'll never reach those numbers,
+because the drive is behind a USB adapter and not a native NVMe
+connection, so the bottleneck isn't the disk itself. For our
+purposes, it might still give us useful results.
+
+My bias, before building, running and analysing those results, is
+that ZFS should outperform the traditional stack on writes, but possibly
+not on reads. It's also possible it outperforms it on both, because
+it's a newer drive. A new test might be possible with a new external
+USB drive as well, although I doubt I will find the time to do this.
+
+## Side note about fio job files
+
+I would love to have just a single `.fio` job file that lists multiple
+jobs to run *serially*. For example, this file describes the above
+workload pretty well:
+
+[[!format """
+[global]
+# cargo-culting Salter
+fallocate=none
+ioengine=posixaio
+runtime=60
+time_based=1
+end_fsync=1
+stonewall=1
+group_reporting=1
+# no need to drop caches, done by default
+# invalidate=1
+
+# Single 4KiB random read/write process
+[randread-4k-4g-1x]
+stonewall=1
+rw=randread
+bs=4k
+size=4g
+numjobs=1
+iodepth=1
+
+[randwrite-4k-4g-1x]
+stonewall=1
+rw=randwrite
+bs=4k
+size=4g
+numjobs=1
+iodepth=1
+
+# 16 parallel 64KiB random read/write processes:
+[randread-64k-256m-16x]
+stonewall=1
+rw=randread
+bs=64k
+size=256m
+numjobs=16
+iodepth=16
+
+[randwrite-64k-256m-16x]
+stonewall=1
+rw=randwrite
+bs=64k
+size=256m
+numjobs=16
+iodepth=16
+
+# Single 1MiB random read/write process
+[randread-1m-16g-1x]
+stonewall=1
+rw=randread
+bs=1m
+size=16g
+numjobs=1
+iodepth=1
+
+[randwrite-1m-16g-1x]
+stonewall=1
+rw=randwrite
+bs=1m
+size=16g
+numjobs=1
+iodepth=1
+"""]]
+
+... except the jobs are actually run in parallel, even though they are
+`stonewall`'d, as far as I can tell from the reports. I sent a mail to
+the [fio mailing list](https://lore.kernel.org/fio/) for clarification.
+
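Until that's clarified, one possible workaround is to keep the single job file but drive it serially from the shell with fio's `--section` flag, one section per invocation. A sketch that only prints the commands (`plan_fio_sections` is a hypothetical helper; drop the `echo` to actually run fio):

```shell
# print one fio invocation per job section, to be run serially;
# remove the leading "echo" to actually execute them
plan_fio_sections() {
    jobfile="$1"; shift
    for section in "$@"; do
        echo "fio --section=$section $jobfile"
    done
}

plan_fio_sections jobs.fio randread-4k-4g-1x randwrite-4k-4g-1x
```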
 # Remaining work
 
 TODO: [TRIM](https://wiki.debian.org/ZFS#TRIM_support), also on tubman!

update zfs migration
had to rebuild the pools because utf8only is crap (and i don't only
have utf8)
i screwed up smartctl and sgdisk commands (scary) so i don't actually
know the block size.
actually use zstd compression
start the first sync
design a benchmark procedure
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
index d76dc3d7..2fda422d 100644
--- a/blog/zfs-migration.md
+++ b/blog/zfs-migration.md
@@ -36,27 +36,26 @@ This is going to partition `/dev/sdc` with:
         sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
         sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
 
-It looks like this:
-
-    root@curie:~# sgdisk -p /dev/sdb
-    Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
-    Model: WDC WD10JPLX-00M
-    Sector size (logical/physical): 512/4096 bytes
-    Disk identifier (GUID): D8806C02-B5A6-4705-ACA9-F5A92F98C2D1
-    Partition table holds up to 128 entries
-    Main partition table begins at sector 2 and ends at sector 33
-    First usable sector is 34, last usable sector is 1953525134
-    Partitions will be aligned on 2048-sector boundaries
-    Total free space is 3437 sectors (1.7 MiB)
-
-    Number  Start (sector)    End (sector)  Size       Code  Name
-       1            2048          411647   200.0 MiB   EF00  EFI System Partition
-       2          411648         2508799   1024.0 MiB  8300  
-       3         2508800        18958335   7.8 GiB     8300  
-       4        18958336      1953523711   922.5 GiB   8300
-
-This, by the way, says the device has 4KB sector size. `smartctl`
-agrees as well:
+        root@curie:/home/anarcat# sgdisk -p /dev/sdc
+        Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
+        Model: ESD-S1C         
+        Sector size (logical/physical): 512/512 bytes
+        Disk identifier (GUID): 932ED8E5-8B5C-4183-9967-56D7652C01DA
+        Partition table holds up to 128 entries
+        Main partition table begins at sector 2 and ends at sector 33
+        First usable sector is 34, last usable sector is 1953525134
+        Partitions will be aligned on 16-sector boundaries
+        Total free space is 14 sectors (7.0 KiB)
+
+        Number  Start (sector)    End (sector)  Size       Code  Name
+           1              48            2047   1000.0 KiB  EF02  
+           2            2048         1050623   512.0 MiB   EF00  
+           3         1050624         3147775   1024.0 MiB  BF01  
+           4         3147776      1953525134   930.0 GiB   BF00
+
+Unfortunately, we can't be sure of the sector size here, because the
+USB controller is probably lying to us about it. Normally, this
+`smartctl` command should tell us the sector size as well:
 
     root@curie:~# smartctl -i /dev/sdb -qnoserial
     smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
@@ -77,8 +76,13 @@ agrees as well:
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled
 
+The output above is from the builtin HDD drive. But the SSD device
+enclosed in that USB controller [doesn't support SMART commands](https://www.smartmontools.org/ticket/1054),
+so we can't trust that it really has 512-byte sectors.
+
 This matters because we need to tweak the `ashift` value
-correctly. 4KB means `ashift=12`.
+correctly. We're going to go ahead and assume the SSD drive has the
+common 4KB sector size, which means `ashift=12`.
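For reference, `ashift` is just the base-2 logarithm of the sector size, so the mapping can be computed with a tiny (hypothetical) shell helper rather than memorized:

```shell
# compute ashift (log2) from a physical sector size in bytes
sector_to_ashift() {
    size="$1"
    ashift=0
    while [ "$size" -gt 1 ]; do
        size=$((size / 2))
        ashift=$((ashift + 1))
    done
    echo "$ashift"
}

sector_to_ashift 512    # -> 9
sector_to_ashift 4096   # -> 12
```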
 
 Note here that we are *not* creating a separate partition for
 swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and
@@ -137,11 +141,10 @@ This is a more typical pool creation.
 
         zpool create \
             -o ashift=12 \
-            -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
+            -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
             -O acltype=posixacl -O xattr=sa \
-            -O compression=lz4 \
+            -O compression=zstd \
             -O dnodesize=auto \
-            -O normalization=formD \
             -O relatime=on \
             -O canmount=off \
             -O mountpoint=/ -R /mnt \
@@ -160,8 +163,6 @@ Breaking this down:
  * `-O compression=zstd`: enable [zstd](https://en.wikipedia.org/wiki/Zstd) compression, can be
    disabled/enabled by dataset to with `zfs set compression=off
    rpool/example`
- * `-O normalization=formD`: normalize file names on comparisons (not
-   storage), implies `utf8only=on`
  * `-O relatime=on`: classic `atime` optimisation, another that could
    be used on a busy server is `atime=off`
  * `-O canmount=off`: do not make the pool mount automatically with
@@ -171,7 +172,14 @@ Breaking this down:
 
 Those settings are all available in [zfsprops(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zfsprops.8.en.html). Other flags are
 defined in [zpool-create(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zpool-create.8.en.html). The reasoning behind them is also
-explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian wiki](https://wiki.debian.org/ZFS#Advanced_Topics).
+explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian
+wiki](https://wiki.debian.org/ZFS#Advanced_Topics). One flag was deliberately left out:
+
+ * `-O normalization=formD`: normalize file names on comparisons (not
+   storage), implies `utf8only=on`, which is a [bad idea](https://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames) (and
+   effectively meant my first sync failed to copy some files,
+   including [this folder from a supysonic checkout](https://github.com/spl0k/supysonic/tree/270fa9883b2f2bc98f1482a68f7d9022017af50b/tests/assets/%E6)). Worse, this
+   cannot be changed after the filesystem is created. Bad, bad, bad.
 
 ## Side note about single-disk pools
 
@@ -277,16 +285,71 @@ like this:
     rpool/var/lib/docker  899G  256K  899G   1% /mnt/var/lib/docker
     /dev/sdc2             511M  4.0K  511M   1% /mnt/boot/efi
 
+Now that we have everything setup and mounted, let's copy all files
+over.
 
+# Copying files
 
-# Copy files over
+This is a list of all the mounted filesystems:
 
     for fs in /boot/ /boot/efi/ / /home/; do
         echo "syncing $fs to /mnt$fs..." && 
         rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
     done
 
-TODO: paste what it looked like
+You can check that the list is correct with:
+
+    mount -l -t ext4,btrfs,vfat | awk '{print $3}'
+
+Note that we skip `/srv` as it's on a different disk.
+
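The hardcoded list and the `/srv` exclusion can be sanity-checked with a small sketch that only prints the rsync commands it would run (`plan_sync` is a hypothetical helper, not part of the actual procedure; remove the `echo` to really sync):

```shell
# print the rsync command for each mount point, skipping /srv
# (which lives on a different disk)
plan_sync() {
    for fs in "$@"; do
        case "$fs" in /srv*) continue ;; esac
        echo "rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs"
    done
}

plan_sync /boot/ /boot/efi/ / /home/ /srv/
```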
+On the first run, we had:
+
+    root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
+            echo "syncing $fs to /mnt$fs..." && 
+            rsync -aSHAXx --info=progress2 $fs /mnt$fs
+        done
+    syncing /boot/ to /mnt/boot/...
+                  0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
+    syncing /boot/efi/ to /mnt/boot/efi/...
+         16,831,437 100%  184.14MB/s    0:00:00 (xfr#101, to-chk=0/110)
+    syncing / to /mnt/...
+     28,019,293,280  94%   47.63MB/s    0:09:21 (xfr#703710, ir-chk=6748/839220)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
+    could not make way for new symlink: var/lib/docker
+     34,081,267,990  98%   50.71MB/s    0:10:40 (xfr#736577, to-chk=0/867732)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+    syncing /home/ to /mnt/home/...
+    rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
+     24,456,268,098  98%   68.03MB/s    0:05:42 (xfr#159867, ir-chk=6875/172377) 
+    file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/B3AB0CDA9C4454B3C1197E5A22669DF8EE849D90"
+    199,762,528,125  93%   74.82MB/s    0:42:26 (xfr#1437846, ir-chk=1018/1983979)rsync: [generator] recv_generator: mkdir "/mnt/home/anarcat/dist/supysonic/tests/assets/\#346" failed: Invalid or incomplete multibyte or wide character (84)
+    *** Skipping any contents from this failed directory ***
+    315,384,723,978  96%   76.82MB/s    1:05:15 (xfr#2256473, to-chk=0/2993950)    
+    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
+
+Note the failure to transfer that supysonic file? It turns out they
+had a [weird filename in their source tree](https://github.com/spl0k/supysonic/pull/183), since then removed,
+but still it showed how the `utf8only` feature might not be such a bad
+idea. At this point, the procedure was restarted all the way back to
+"Creating pools", after unmounting all ZFS filesystems (`umount
+/mnt/run /mnt/boot/efi && umount -t zfs -a`) and destroying the pool,
+which, surprisingly, doesn't require any confirmation (`zpool destroy
+rpool`).
+
+Also note the transfer speed: we seem capped at 76MB/s, or
+608Mbit/s. This is not as fast as I was expecting: the USB connection
+seems to be at around 5Gbps:
+
+    anarcat@curie:~$ lsusb -tv | head -4
+    /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
+        ID 1d6b:0003 Linux Foundation 3.0 root hub
+        |__ Port 1: Dev 4, If 0, Class=Mass Storage, Driver=uas, 5000M
+            ID 0b05:1932 ASUSTek Computer, Inc.
+
+So it shouldn't cap at that speed. It's possible the USB adapter is
+failing to give me the full speed though.
+
+TODO: make a new paste
 
 TODO: we are here
 
@@ -402,6 +465,69 @@ Unmount filesystems:
 
 reboot to the new system.
 
+# Benchmarks
+
+This is a test that was ran in single-user mode using fio and the
+[Ars Technica recommended tests](https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/), which are:
+
+ * Single 4KiB random write process:
+
+        fio --name=randwrite4k1x --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
+
+ * 16 parallel 64KiB random write processes:
+
+        fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
+
+ * Single 1MiB random write process
+
+        fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

(Diff truncated)
start working on migrating my workstation to ZFS
diff --git a/blog/zfs-migration.md b/blog/zfs-migration.md
new file mode 100644
index 00000000..d76dc3d7
--- /dev/null
+++ b/blog/zfs-migration.md
@@ -0,0 +1,429 @@
+In my [[hardware/tubman]] setup, I started using ZFS on an old server
+I had lying around. The machine is really old though (2011!) and it
+"feels" pretty slow. I want to see how much of that is ZFS and how
+much is the machine. Synthetic benchmarks [show that ZFS may be slower
+than mdadm in RAID-10 or RAID-6 configuration](https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-disks-two-filesystems-one-winner/), so I want to
+confirm that on a live workload: my workstation. Plus, I want easy,
+regular, high performance backups (with send/receive snapshots) and
+there's no way I'm going to use [[BTRFS|2022-05-13-brtfs-notes]]
+because I find it too confusing and unreliable.
+
+So off we go.
+
+# Installation
+
+Since this is a conversion (and not a new install), our procedure is
+slightly different than the [official documentation](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html) but otherwise
+it's pretty much in the same spirit: we're going to use ZFS for
+everything, including the root filesystem.
+
+So, install the required packages, on the current system:
+
+    apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
+
+# Partitioning
+
+This is going to partition `/dev/sdc` with:
+
+ * 1MB MBR / BIOS legacy boot
+ * 512MB EFI boot
+ * 1GB bpool, unencrypted pool for /boot
+ * rest of the disk for zpool, the rest of the data
+
+        sgdisk --zap-all /dev/sdc
+        sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/sdc
+        sgdisk     -n2:1M:+512M   -t2:EF00 /dev/sdc
+        sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
+        sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
+
+It looks like this:
+
+    root@curie:~# sgdisk -p /dev/sdb
+    Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
+    Model: WDC WD10JPLX-00M
+    Sector size (logical/physical): 512/4096 bytes
+    Disk identifier (GUID): D8806C02-B5A6-4705-ACA9-F5A92F98C2D1
+    Partition table holds up to 128 entries
+    Main partition table begins at sector 2 and ends at sector 33
+    First usable sector is 34, last usable sector is 1953525134
+    Partitions will be aligned on 2048-sector boundaries
+    Total free space is 3437 sectors (1.7 MiB)
+
+    Number  Start (sector)    End (sector)  Size       Code  Name
+       1            2048          411647   200.0 MiB   EF00  EFI System Partition
+       2          411648         2508799   1024.0 MiB  8300  
+       3         2508800        18958335   7.8 GiB     8300  
+       4        18958336      1953523711   922.5 GiB   8300
+
+This, by the way, says the device has 4KB sector size. `smartctl`
+agrees as well:
+
+    root@curie:~# smartctl -i /dev/sdb -qnoserial
+    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
+    Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
+
+    === START OF INFORMATION SECTION ===
+    Model Family:     Western Digital Black Mobile
+    Device Model:     WDC WD10JPLX-00MBPT0
+    Firmware Version: 01.01H01
+    User Capacity:    1 000 204 886 016 bytes [1,00 TB]
+    Sector Sizes:     512 bytes logical, 4096 bytes physical
+    Rotation Rate:    7200 rpm
+    Form Factor:      2.5 inches
+    Device is:        In smartctl database [for details use: -P show]
+    ATA Version is:   ATA8-ACS T13/1699-D revision 6
+    SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
+    Local Time is:    Tue May 17 13:33:04 2022 EDT
+    SMART support is: Available - device has SMART capability.
+    SMART support is: Enabled
+
+This matters because we need to tweak the `ashift` value
+correctly. 4KB means `ashift=12`.
+
+Note here that we are *not* creating a separate partition for
+swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and
+that issue is [still not fixed upstream](https://github.com/openzfs/zfs/issues/7734). [Ubuntu recommends using
+a separate partition for swap instead](https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1847628). But since this is "just" a
+workstation, we're betting that we will not suffer from this problem,
+after hearing a report from another Debian developer running this
+setup on their workstation successfully.
+
+# Creating "pools"
+
+ZFS pools are somewhat like "volume groups" if you are familiar with
+LVM, except they obviously also do things like RAID-10. (Even though
+LVM can technically [also do RAID](https://manpages.debian.org/bullseye/lvm2/lvmraid.7.en.html), people typically use [mdadm](https://manpages.debian.org/bullseye/mdadm/mdadm.8.en.html)
+instead.) 
+
+In any case, the guide suggests creating two different pools here:
+one, in cleartext, for boot, and a separate, encrypted one, for the
+rest. Technically, the boot partition is required because the Grub
+bootloader only supports readonly ZFS pools, from what I
+understand. But I'm a little out of my depth here and just following
+the guide.
+
+## Boot pool creation
+
+This creates the boot pool in readonly mode with features that grub
+supports:
+
+        zpool create \
+            -o cachefile=/etc/zfs/zpool.cache \
+            -o ashift=12 -d \
+            -o feature@async_destroy=enabled \
+            -o feature@bookmarks=enabled \
+            -o feature@embedded_data=enabled \
+            -o feature@empty_bpobj=enabled \
+            -o feature@enabled_txg=enabled \
+            -o feature@extensible_dataset=enabled \
+            -o feature@filesystem_limits=enabled \
+            -o feature@hole_birth=enabled \
+            -o feature@large_blocks=enabled \
+            -o feature@lz4_compress=enabled \
+            -o feature@spacemap_histogram=enabled \
+            -o feature@zpool_checkpoint=enabled \
+            -O acltype=posixacl -O canmount=off \
+            -O compression=lz4 \
+            -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
+            -O mountpoint=/boot -R /mnt \
+            bpool /dev/sdc3
+
+I haven't investigated all those settings and just trust the upstream
+guide on the above.
+
+## Main pool creation
+
+This is a more typical pool creation.
+
+        zpool create \
+            -o ashift=12 \
+            -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
+            -O acltype=posixacl -O xattr=sa \
+            -O compression=lz4 \
+            -O dnodesize=auto \
+            -O normalization=formD \
+            -O relatime=on \
+            -O canmount=off \
+            -O mountpoint=/ -R /mnt \
+            rpool /dev/sdc4
+
+Breaking this down:
+
+ * `-o ashift=12`: mentioned above, 4k sector size
+ * `-O encryption=on -O keylocation=prompt -O keyformat=passphrase`:
+   encryption, prompt for a password, default algorithm is
+   `aes-256-gcm`, explicit in the guide, made implicit here
+ * `-O acltype=posixacl -O xattr=sa`: enable ACLs, with better
+   performance (not enabled by default)
+ * `-O dnodesize=auto`: related to extended attributes, less
+   compatibility with other implementations
+ * `-O compression=zstd`: enable [zstd](https://en.wikipedia.org/wiki/Zstd) compression, can be
+   disabled/enabled by dataset to with `zfs set compression=off
+   rpool/example`
+ * `-O normalization=formD`: normalize file names on comparisons (not
+   storage), implies `utf8only=on`
+ * `-O relatime=on`: classic `atime` optimisation, another that could
+   be used on a busy server is `atime=off`
+ * `-O canmount=off`: do not make the pool mount automatically with
+   `mount -a`?
+ * `-O mountpoint=/ -R /mnt`: mount pool on `/` in the future, but
+   `/mnt` for now
+
+Those settings are all available in [zfsprops(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zfsprops.8.en.html). Other flags are
+defined in [zpool-create(8)](https://manpages.debian.org/bullseye/zfsutils-linux/zpool-create.8.en.html). The reasoning behind them is also
+explained in [the upstream guide](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-2-disk-formatting) and some also in [the Debian wiki](https://wiki.debian.org/ZFS#Advanced_Topics).
+
+## Side note about single-disk pools
+
+Also note that we're living dangerously here: single-disk ZFS pools
+are [rumoured to be more dangerous](https://www.truenas.com/community/threads/single-drive-zfs.35515/) than not running ZFS at
+all. The choice quote from this article is:
+
+> [...] any error can be detected, but cannot be corrected. This
+> sounds like an acceptable compromise, but its actually not. The
+> reason its not is that ZFS' metadata cannot be allowed to be
+> corrupted. If it is it is likely the zpool will be impossible to
+> mount (and will probably crash the system once the corruption is
+> found). So a couple of bad sectors in the right place will mean that
+> all data on the zpool will be lost. Not some, all. Also there's no
+> ZFS recovery tools, so you cannot recover any data on the drives.
+
+Compared with (say) ext4, where a single disk error can be recovered,
+this is pretty bad. But we are ready to live with this with the idea
+that we'll have hourly offline snapshots that we can easily recover
+from. It's a trade-off. Also, we're running this on a NVMe/M.2 drive

(Diff truncated)
responses
diff --git a/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment b/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment
new file mode 100644
index 00000000..44b3f443
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_2_95492f43f666d354cf5eefc410473c9c._comment
@@ -0,0 +1,40 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject=""""""
+ date="2022-05-16T14:45:50Z"
+ content="""
+> of course it is circumstantial, but facebook runs a few thousand servers with btrfs (AFAIR)
+
+i wouldn't call this "circumstantial", that's certainly a strong data point. But Facebook has a whole other scale and, if they're anything like other large SV shops, they have their own Linux kernel fork that might have improvements we don't.
+
+>  and personally I am running it since many years without a single failure.
+
+that, however, I would call "anecdotal" and something I hear a lot.... often followed with things like:
+
+> with some quirks, again, I admit - namely that healing is not automatic
+
+... which, for me, is the entire problem here. "it kind of works for me, except sometimes not" is really not what I expect from a filesystem.
+
+> Concerning subvolumes - you don't get separate disk usage simply because it is part of the the main volume (volid=5), and within there just a subdirectory. So getting disk usage from there would mean reading the whole tree (du-like -- btw, I recommend gdu which is many many times faster than du!).
+>
+> The root subvolume on fedora is not the same as volid=5 but just another subvolume. I also have root volumes for Debian/Arch/Fedora on my system (sharing /usr/local and /home volumes). That they called it root is indeed confusing.
+
+[Hacker News](https://news.ycombinator.com/item?id=31383007) helpfully reminded me that:
+
+> the author intentionally gave up early on understanding and simply
+> rants about everything that does not look or work as usual
+
+I think it's framed as criticism of my work, but I take it as a compliment. I reread the two paragraphs a few times, and they still don't make much sense to me. It just begs more questions:
+
+ 1. can we have more than one main volumes?
+ 2. why was it setup with subvolumes instead of volumes?
+ 3. why isn't everything volumes?
+
+I know I sound like a newbie meeting a complex topic and giving up. But here's the thing: I've encountered (and worked in production) with at least half a dozen filesystems in my lifetime (ext2/ext3/ext4, XFS, UFS, FAT16/FAT32, NTFS, HFS, ExFAT, ZFS), and for most of those, I could use them without having to go very deep into the internals. 
+
+But BTRFS gets obscure *quick*. Even going through official documentation (e.g. [BTRFS Design](https://btrfs.wiki.kernel.org/index.php/Btrfs_design)), you *start* with C structs. And somewhere down there there's this confusing diagram about the internal mechanics of the btree and how you build subvolumes and snapshots on top of that.
+
+If you want to hack on BTRFS, that's great. You can get up to speed pretty quick. But I'm not looking at BTRFS from an enthusiast, kernel developer look. I'm looking at it from a "OMG what is this" look, with very little time to deal with it. Every other filesystem architecture I've used like this so far has been able to somewhat be operational in a day or two. After spending multiple days banging my head on this problem, I felt I had to write this down, because everything seems so obtuse that I can't wrap my head around it.
+
+Anyways, thanks for the constructive feedback, it certainly clarifies things a little, but really doesn't make me want to adopt BTRFS in any significant way.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment b/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment
new file mode 100644
index 00000000..1e2bc245
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_3_ecad06ed6928427b1c9d7e95db2f2ce9._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""comment 3"""
+ date="2022-05-16T14:58:23Z"
+ content="""
+> Seems worth to consider on your side, too, makes it a bit easier to deal with this (and with btrfs volumes you can have multiple dists booting) 
+
+I'm certainly considering some sort of RAID or snapshotting for my workstation now. Problem is it's a NUC so it really can't fit more disks.
+
+Considering my ... unfruitful experience with BTRFS, I probably will stay the heck away from it though, but thanks for the advice.
+
+> Working, up to date backups are a must have. 
+
+That's the understatement of the day. :p
+
+Thankfully, as I said, this machine is mostly throw-away. But because our installers are still kind of crap, it takes a while to recover it, so I am thinking RAID or offline snapshots could be useful to speed up recovery...
+"""]]

approve comment
diff --git a/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment b/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment
new file mode 100644
index 00000000..0e56bb11
--- /dev/null
+++ b/blog/2022-05-13-brtfs-notes/comment_1_3e01f4062b4fa96bd8c981bd6087ea7d._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="180.94.215.40"
+ claimedauthor="Norbert"
+ subject="Some comments"
+ date="2022-05-14T01:21:55Z"
+ content="""
+Concerning stability: of course it is circumstantial, but facebook runs a few thousand servers with btrfs (AFAIR), and personally I am running it since many years without a single failure. Admittedly, raid5/6 is broken, don't touch it. raid1 also works rock solid AFAICS (with some quirks, again, I admit - namely that healing is not automatic).
+
+Concerning subvolumes - you don't get separate disk usage simply because it is part of the the main volume (volid=5), and within there just a subdirectory. So getting disk usage from there would mean reading the whole tree (du-like -- btw, I recommend `gdu` which is many many times faster than du!).
+
+The `root` subvolume on fedora is not the same as `volid=5` but just another subvolume. I also have root volumes for Debian/Arch/Fedora on my system (sharing `/usr/local` and `/home` volumes). That they called it `root` is indeed confusing.
+
+One thing that I like a lot about btrfs is `btrfs send/receive`, it is a nice way to do incremental backups.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment b/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment
new file mode 100644
index 00000000..484f8d6b
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_1_b7041ff7a07b7b21edf17b0b25ebd1c4._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="88.196.217.58"
+ claimedauthor="Arti"
+ subject="Dying SSD-s"
+ date="2022-05-16T07:18:45Z"
+ content="""
+I also have experienced a few SATA and NVMe drives just disappearing on reboot or even during normal usage. In my experience SSDs just stop working without any warnings. Working, up to date backups are a must have.
+"""]]
diff --git a/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment b/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment
new file mode 100644
index 00000000..14e8abfe
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure/comment_1_e13ebdb87f15f39eff8b7a0e2a693cc7._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="180.94.215.40"
+ claimedauthor="Norbert"
+ subject="BTRFS raid?"
+ date="2022-05-14T01:12:25Z"
+ content="""
+I have seen similar things happen with some SSDs, too. Since I run btrfs-raid (over 7 disks or so) it happens now and then, and it is usually fixed by unplugging the dead disk, plugging in a new one, and rebalancing. Seems worth considering on your side, too; it makes this a bit easier to deal with (and with btrfs volumes you can have multiple distros booting).
+"""]]
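The unplug/replug/rebalance recovery the commenter describes maps, as far as I understand it, onto `btrfs replace` and `btrfs balance`. A dry-run sketch follows: it only prints the commands, and the devid, device path, and mount point are illustrative assumptions.

```shell
#!/bin/sh
# Dry-run sketch of replacing a dead disk in a btrfs RAID array.
# All names below are illustrative assumptions.
FAILED_DEVID=3       # numeric devid of the dead disk, per `btrfs filesystem show` (assumption)
NEW_DEV=/dev/sdh     # freshly plugged replacement disk (assumption)
MNT=/mnt/raid        # mounted btrfs RAID filesystem (assumption)

# Copy the failed device's data onto the new disk:
replace_cmd="btrfs replace start $FAILED_DEVID $NEW_DEV $MNT"
# Then re-spread data evenly across the array:
balance_cmd="btrfs balance start $MNT"

# Printed rather than executed, since this is only a sketch:
printf '%s\n' "$replace_cmd" "$balance_cmd"
```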

and another failure
diff --git a/blog/2022-05-13-nvme-disk-failure.md b/blog/2022-05-13-nvme-disk-failure.md
new file mode 100644
index 00000000..d4e4b69f
--- /dev/null
+++ b/blog/2022-05-13-nvme-disk-failure.md
@@ -0,0 +1,53 @@
+[[!meta title="NVMe/SSD disk failure"]]
+
+Yesterday, my workstation ([[curie|hardware/curie]]) was hung when I
+came in the office. After a "[skinny elephant](https://en.wikipedia.org/wiki/Raising_Skinny_Elephants_Is_Boring)", the box rebooted,
+but it couldn't find the primary disk (in the BIOS). Instead, it
+booted on the secondary HDD drive, still running an old Fedora 27
+install which somehow survived to this day, possibly because [[BTRFS
+is incomprehensible|blog/2022-05-13-btrfs-notes]].
+
+Somehow, I blindly accepted the Fedora prompt asking me to upgrade to
+Fedora 28, not realizing that:
+
+ 1. Fedora is now at release 36, not 28
+ 2. major upgrades take about an hour...
+ 3. ... and happen at boot time, blocking the entire machine (I'll
+    remember this next time I laugh at Windows and Mac OS users stuck
+    on updates on boot)
+ 4. you can't skip more than one major upgrade
+
+Which means that upgrading to latest would take over 4
+hours. Thankfully, it's mostly automated and seems to work pretty well
+(which is [not exactly the case for Debian](https://wiki.debian.org/AutomatedUpgrade)). It still seems like a
+lot of wasted time -- it would probably be better to just reinstall
+the machine at this point -- and not what I had planned to do that
+morning at all.
+
+In any case, after waiting all that time, the machine booted (in
+Fedora) again, and now it *could* detect the SSD disk. The BIOS could
+find the disk too, so after I reinstalled grub (from Fedora) and fixed
+the boot order, it rebooted, but secureboot failed, so I turned that
+off (!?), and I was back in Debian.
+
+I did an emergency backup with `ddrescue`, *from the running system*
+which probably doesn't really work as a backup (because the filesystem
+is likely to be corrupt) but it was fast enough (20 minutes) and gave
+me some peace of mind. My offsite backups have been down for a while
+and since I treat my workstations as "cattle" (not "pets"), I don't
+have a solid recovery scenario for those situations other than "just
+reinstall and run Puppet", which takes a while.
+
+Now I'm wondering what the next step is: probably replace the disk
+anyway (the new one is bigger: 1TB instead of 500GB), or keep the new
+one as a hot backup somehow. Too bad I don't have a snapshotting
+filesystem on there... (Technically, I have LVM, but LVM snapshots are
+heavy and slow, and can't atomically cover the entire machine.)
+
+It's kind of scary how this thing failed: totally dropped off the bus,
+just not in the BIOS at all. I prefer the way spinning rust fails:
+clickety sounds, tons of warnings beforehand, partial recovery
+possible. With this new flashy junk, you just lose everything all at
+once. Not fun.
+
+[[!tag debian-planet debian hardware fail]]
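The emergency `ddrescue` copy described in the post looks roughly like this. Again a dry-run sketch that only prints the command; the device and output paths are illustrative assumptions. As the post notes, imaging a mounted, running filesystem will likely yield an inconsistent copy, so this is a last-resort measure, not a substitute for backups.

```shell
#!/bin/sh
# Dry-run sketch of an emergency ddrescue image of a failing disk.
# Paths are illustrative assumptions.
SRC=/dev/sda              # failing disk (assumption)
IMG=/srv/rescue/sda.img   # destination image (assumption)
MAP=/srv/rescue/sda.map   # map file: lets an interrupted rescue resume

# -d: direct disc access, bypassing the kernel cache
rescue_cmd="ddrescue -d $SRC $IMG $MAP"
printf '%s\n' "$rescue_cmd"
```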
diff --git a/hardware/curie.mdwn b/hardware/curie.mdwn
index 7ade7c51..1873ff7f 100644
--- a/hardware/curie.mdwn
+++ b/hardware/curie.mdwn
@@ -129,6 +129,10 @@ the upgrade eventually go through, but it finally did.
 The [release notes](https://downloadmirror.intel.com/29102/eng/SY_0072_ReleaseNotes.pdf) detail the updates since the previous one (v61)
 which includes a bunch of security updates, for example.
 
+## SSD disk failure
+
+See [[blog/2022-05-13-nvme-disk-failure]].
+
 ## Replacement options
 
 The CMOS battery died some time in 2021, and I'm having a hard time

publish btrfs notes
diff --git a/blog/btrfs-notes.md b/blog/2022-05-13-brtfs-notes.md
similarity index 57%
rename from blog/btrfs-notes.md
rename to blog/2022-05-13-brtfs-notes.md
index 2ec14a15..71b484a5 100644
--- a/blog/btrfs-notes.md
+++ b/blog/2022-05-13-brtfs-notes.md
@@ -2,21 +2,21 @@
 
 I'm not a fan of [BTRFS](https://btrfs.wiki.kernel.org/). This page serves as a reminder of why,
 but also a cheat sheet to figure out basic tasks in a BTRFS
-environment because those are *not* obvious when coming from any other
-filesystem environment.
+environment because those are *not* obvious to me, even after
+repeatedly having to deal with them.
 
-Trigger warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
+Content warning: there might be mentions of [ZFS](https://en.wikipedia.org/wiki/OpenZFS).
 
 [[!toc]]
 
 # Stability concerns
 
-I'm a little worried about its [stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been
-historically quite flaky. RAID-5 and RAID-6 are still marked
-[unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for example, and it's kind of a lucky guess whether
-your current kernel will behave properly with your planned
-workload. For example, [with Linux 4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status) were marked as "mostly OK"
-with a note that says:
+I'm worried about [BTRFS stability](https://btrfs.wiki.kernel.org/index.php/Status), which has been historically
+... changing. RAID-5 and RAID-6 are still marked [unstable](https://btrfs.wiki.kernel.org/index.php/RAID56), for
+example. It's kind of a lucky guess whether your current kernel will
+behave properly with your planned workload. For example, [in Linux
+4.9](http://web.archive.org/web/20170311220554/https://btrfs.wiki.kernel.org/index.php/Status), RAID-1 and RAID-10 were marked as "mostly OK" with a note that
+says:
 
 > Needs to be able to create two copies always. Can get stuck in
 > irreversible read-only mode if only one copy can be made.
 Even as of now, RAID-1 and RAID-10 have this note:
 > improved so the reads will spread over the mirrors evenly or based
 > on device congestion.
 
+Granted, that's not a stability concern anymore, just performance. A
+reviewer of a draft of this article actually claimed that BTRFS only
+reads from one of the drives, which hopefully is inaccurate, but goes
+to show how confusing all this is.
+
 There are [other warnings](https://wiki.debian.org/Btrfs#Other_Warnings) in the Debian wiki that are quite
-worrisome. Even if those are fixed, it can be hard to tell *when* they
-were fixed.
+scary. Even the legendary Arch wiki [has a warning on top of their
+BTRFS page, still](https://wiki.archlinux.org/title/btrfs).
+
+Even if those issues are now fixed, it can be hard to tell *when* they
+were fixed. There is a [changelog by feature](https://btrfs.wiki.kernel.org/index.php/Changelog#By_feature) but it explicitly
+warns that it doesn't know "which kernel version it is considered
+mature enough for production use", so it's also useless for this.
 
 It would have been much better if BTRFS was released into the world
-only when those bugs were being completely fixed. Even now, we get
-mixed messages even in the official BTRFS documentation which says
-"The Btrfs code base is stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same
-time clearly stating [unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently
-RAID56).
-
-There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I will
-stop here, but let's just say that I feel a little uncomfortable
+only when those bugs were being completely fixed. Or that, at least,
+features were announced when they were stable, not just "we merged to
+mainline, good luck". Even now, we get mixed messages even in the
+official BTRFS documentation which says "The Btrfs code base is
+stable" ([main page](https://btrfs.wiki.kernel.org/index.php/Main_Page)) while at the same time clearly stating
+[unstable parts in the status page](https://btrfs.wiki.kernel.org/index.php/Status) (currently RAID56).
+
+There are much [harsher BTRFS critics](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) than me [out there](https://2.5admins.com/) so I
+will stop here, but let's just say that I feel a little uncomfortable
 trusting server data with full RAID arrays to BTRFS. But surely, for a
-workstation, things should just work smoothly... Right? Let's see the
-snags I hit.
+workstation, things should just work smoothly... Right? Well, let's
+see the snags I hit.
 
 # My BTRFS test setup
 
-Before I go any further, it will probably help to clarify how I am
-testing BTRFS in the first place.
+Before I go any further, I should probably clarify how I am testing
+BTRFS in the first place.
 
 The reason I tried BTRFS is that I was ... let's just say "strongly
 encouraged" by the [LWN](https://lwn.net) editors to install [Fedora](https://getfedora.org/) for the
@@ -69,34 +80,41 @@ table looks like this:
     └─sda4                   8:4    0 922,5G  0 part  
       └─fedora_crypt       253:4    0 922,5G  0 crypt /
 
+(This might not entirely be accurate: I rebuilt this from the Debian
+side of things.)
+
 This is pretty straightforward, except for the swap partition:
 normally, I just treat swap like any other logical volume and create
 it in a logical volume. This is now just speculation, but I bet it was
 setup this way because "swap" support was only added in BTRFS 5.0.
 
-I fully expect BTRFS fans to yell at me now because this is an old
+I fully expect BTRFS experts to yell at me now because this is an old
 setup and BTRFS is so much better now, but that's exactly the point
-here. That setup is not *that* old (2018? is that old? really?), and
-migrating to a new partition scheme isn't exactly practical right
-now. But let's move on to more practical considerations.
+here. That setup is not *that* old (2018? old? really?), and migrating
+to a new partition scheme isn't exactly practical right now. But let's
+move on to more practical considerations.
 
 # No builtin encryption
 
 BTRFS aims at replacing the entire [mdadm](https://en.wikipedia.org/wiki/Mdadm), [LVM][], and [ext4](https://en.wikipedia.org/wiki/Ext4)
-stack with a single entity, alongside adding new features like
+stack with a single entity, and adding new features like
 deduplication, checksums and so on.
 
-Yet there is one feature it is critically missing: encryption. See,
-*my* stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and ext4. This
-is convenient because I have only a single volume to decrypt.
+Yet there is one feature it is critically missing: encryption. See, my
+typical stack is actually mdadm, [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and *then* LVM and
+ext4. This is convenient because I have only a single volume to
+decrypt.
 
 If I were to use BTRFS on servers, I'd need to have one LUKS volume
 *per-disk*. For a simple RAID-1 array, that's not too bad: one extra
 key. But for large RAID-10 arrays, this gets really unwieldy.
 
-The obvious BTRFS alternative, ZFS, supports encryption out of the box
-and mixes it above the disks so you only have one passphrase to
-enter.
+The obvious BTRFS alternative, ZFS, [supports encryption](https://arstechnica.com/gadgets/2021/06/a-quick-start-guide-to-openzfs-native-encryption/) out of
+the box and mixes it above the disks so you only have one passphrase
+to enter. The main downside of ZFS encryption is that it happens above
+the "pool" level so you can typically see filesystem names (and
+possibly snapshots, depending on how it is built), which is not the
+case with a more traditional stack.
 
 [LVM]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
 
@@ -107,13 +125,17 @@ traditional LVM stack (which is itself kind of confusing if you're new
 to that stuff), you have those layers:
 
  * disks: let's say `/dev/nvme0n1` and `nvme1n1`
- * mdadm RAID arrays: let's say the above disks are joined in a RAID-1
-   array in `/dev/md1`
- * LVM volume groups or VG: the above RAID device (technically a
+ * RAID arrays with mdadm: let's say the above disks are joined in a
+   RAID-1 array in `/dev/md1`
+ * volume groups or VG with LVM: the above RAID device (technically a
    "physical volume" or PV) is assigned into a VG, let's call it
-   `vg_tbbuild05`
+   `vg_tbbuild05` (multiple PVs can be added to a single VG which is
+   why there is that abstraction)
  * LVM logical volumes: out of *that* volume group actually "virtual
-   partitions" or "logical volumes" are created
+   partitions" or "logical volumes" are created, that is where your
+   filesystem lives
+ * filesystem, typically with ext4: that's your normal filesystem,
+   which treats the logical volume as just another block device
 
 A typical server setup would look like this:
 
@@ -130,18 +152,17 @@ A typical server setup would look like this:
     │     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
     └─nvme0n1p4               259:4    0     1M  0 part
 
-
 I stripped the other `nvme1n1` disk because it's basically the same.
 
-Now, if we look at my workstation, which doesn't even have RAID, we
-have the following:
+Now, if we look at my BTRFS-enabled workstation, which doesn't even
+have RAID, we have the following:
 
  * disk: `/dev/sda` with, again, `/dev/sda4` being where BTRFS lives
  * filesystem: `fedora_crypt`, which is, confusingly, kind of like a
    volume group. it's where everything lives. i think.
  * subvolumes: `home`, `root`, `/`, etc. those are actually the things
    that get mounted. you'd think you'd mount a filesystem, but no, you
-   mount a subvolume
+   mount a subvolume. that is backwards.
 
 It looks something like this to `lsblk`:
 
@@ -189,8 +210,17 @@ This is *really* confusing. I don't even know if I understand this
 right, and I've been staring at this all afternoon. Hopefully, the
 lazyweb will correct me eventually.
 
-So at least I can refer to this section in the future, the next time I
-fumble around the `btrfs` commandline.
+(As an aside, why are they called "subvolumes"? If something is a
+"[sub](https://en.wiktionary.org/wiki/sub#Latin)" of "something else", that "something else" must exist
+right? But no, BTRFS doesn't have "volumes", it only has
+"subvolumes". Go figure. Presumably the filesystem still holds "files"
+though, at least empirically it doesn't seem like it lost anything so

(Diff truncated)
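For reference, the mdadm, then LUKS, then LVM, then ext4 stack walked through in the notes above maps onto roughly these commands. A dry-run sketch that only prints what it would run; the device names and sizes are illustrative assumptions, while the `vg_tbbuild05` group and `root` volume names come from the notes.

```shell
#!/bin/sh
# Dry-run sketch of building the traditional mdadm/LUKS/LVM/ext4 stack.
# Device names and sizes are illustrative assumptions; vg_tbbuild05 and
# the root logical volume are taken from the notes above.
raid_cmd="mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2"
luks_cmd="cryptsetup luksFormat /dev/md1; cryptsetup open /dev/md1 md1_crypt"
pv_cmd="pvcreate /dev/mapper/md1_crypt"   # the decrypted device becomes the PV
vg_cmd="vgcreate vg_tbbuild05 /dev/mapper/md1_crypt"
lv_cmd="lvcreate -n root -L 30G vg_tbbuild05"
fs_cmd="mkfs.ext4 /dev/vg_tbbuild05/root"

# Printed rather than executed, since this is only a sketch:
printf '%s\n' "$raid_cmd" "$luks_cmd" "$pv_cmd" "$vg_cmd" "$lv_cmd" "$fs_cmd"
```

Note that the single `cryptsetup open` covers the whole array: that is the "only a single volume to decrypt" convenience the notes contrast against per-disk LUKS under BTRFS.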
