Recent changes to this wiki. Not to be confused with my history.

Complete source to the wiki is available on gitweb or by cloning this site.

order wishlist, add filter
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index bad31ae0..973d8f20 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -135,7 +135,8 @@ Appareils analogues
 Lentilles
 ---------
 
-* Fujifilm [18-55mm f/2.8-4 R LM OIS ø58](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf18_55mmf28_4_r_lm_ois/), vient avec le kit X-T2
+* Fujifilm [18-55mm f/2.8-4 R LM OIS ø58](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf18_55mmf28_4_r_lm_ois/), vient avec le kit X-T2,
+  avec un [filtre 58mm XS-Pro Clear MRC-Nano 007 Filter](https://www.bhphotovideo.com/c/product/756813-REG/B_W_66_1066106_58mm_XS_Pro_NANO_Clear.html), 30$USD
 * Fujifilm [55-200mm f/3.5-4.8 R LM OIS ø62](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf55_200mmf35_48_r_lm_ois/), 650$ sur kijiji
 * Image 70-210mm 3.8, [Minolta SR] (monté sur le Minolta SRT-200)
 * Minolta 135mm 3.5, [Minolta SR]
@@ -181,22 +182,26 @@ Reference
 
 Évidemment, je magasine encore, cette fois pour des lentilles.
 
- * [16-55mm f/2.8 R LM WR ø77](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf16_55mmf28_r_lm_wr/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/16-55mm-f28.htm), [Phoblographer](https://www.thephoblographer.com/2015/03/12/review-fujifilm-16-55mm-f2-8-lm-wr-fujifilm-x-mount/), huge
-   but real nice, 900-1400$
- * [23mm f/1.4 R ø62](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf23mmf14_r/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/23mm-f14.htm), [fstoppers](https://fstoppers.com/gear/worlds-quickest-lens-review-fuji-xf-23mm-14r-8342) (glowing review),
-   700-900$ on kijiji
- * [27mm f/2.8 ø39](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf27mmf28/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/27mm-f28.htm), nice little pancake lens, 300-350$
-   on kijiji
- * [35mm f/2 R WR ø43](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf2_r_wr/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f2.htm), [fstoppers](https://fstoppers.com/gear/fstoppers-reviews-fujifilm-35mm-f2-wr-158227), nice size,
-   sealed, 350-400$ on kijiji 
- * [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("an extraordinary lens"),
-   700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 400-460$ on kijiji
- * [60mm f/2.4 R Macro ø39mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf60mmf24_r_macro/), [Rockwell](https://kenrockwell.com/fuji/x-mount-lenses/60mm-f24.htm), [Photograph blog](http://www.photographyblog.com/reviews/fujifilm_xf_60mm_f2_4_r_review/),
-   420-700$ on kijiji
- * [X100f ø49](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100f/) 1200-1600$ on kijiji
- * lens cap holder: [1](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi), [2](https://www.bhphotovideo.com/c/product/850525-REG/Sensei_ck_lp_Cap_Keeper_Plus_Lens.html), haven't found others
- * [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD. haven't looked at alternative brushes.
- * sensor cleaner?
+ 1. lens cap holder: [1](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi) or [2](https://www.bhphotovideo.com/c/product/850525-REG/Sensei_ck_lp_Cap_Keeper_Plus_Lens.html), haven't found others
+ 2. cleaning gear:
+    * [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD. haven't looked at alternative brushes.
+    * air blower - apparently, that's the best way to clean sensors,
+      e.g. [blower on B&H](https://www.bhphotovideo.com/c/buy/Blowers-Compressed-Air/ci/18806/N/4077634545?origSearch=blower), 5-15$
+ 3. UV filter for zoom ø62mm, [9-50$ on B&H](https://www.bhphotovideo.com/c/search?setNs=p_OVER_ALL_RATE%7c1&Ns=p_OVER_ALL_RATE%7c1&ci=112&fct=fct_circular-sizes_27%7c62mm%2bfct_filter-type_39%7cuv&srtclk=sort&N=4026728358&), e.g. [B+W 62mm UV
+    Haze SC 010 Filter](https://www.bhphotovideo.com/c/product/11969-REG/B_W_65070127_62mm_Ultraviolet_UV_Filter.html) at 20$
+ 4. [27mm f/2.8 ø39](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf27mmf28/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/27mm-f28.htm), really nice little pancake
+    lens, 300-350$ on kijiji
+ 5. [35mm f/2 R WR ø43](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf2_r_wr/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f2.htm), [fstoppers](https://fstoppers.com/gear/fstoppers-reviews-fujifilm-35mm-f2-wr-158227), nice size,
+    sealed, 350-400$ on kijiji 
+ 6. [23mm f/1.4 R ø62](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf23mmf14_r/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/23mm-f14.htm), [fstoppers](https://fstoppers.com/gear/worlds-quickest-lens-review-fuji-xf-23mm-14r-8342) (glowing review),
+    700-900$ on kijiji
+ 7. [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("an extraordinary lens"),
+    700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 400-460$ on kijiji
+ 8. [60mm f/2.4 R Macro ø39mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf60mmf24_r_macro/), [Rockwell](https://kenrockwell.com/fuji/x-mount-lenses/60mm-f24.htm), [Photograph blog](http://www.photographyblog.com/reviews/fujifilm_xf_60mm_f2_4_r_review/),
+    420-700$ on kijiji
+ 9. [16-55mm f/2.8 R LM WR ø77](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf16_55mmf28_r_lm_wr/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/16-55mm-f28.htm), [Phoblographer](https://www.thephoblographer.com/2015/03/12/review-fujifilm-16-55mm-f2-8-lm-wr-fujifilm-x-mount/), huge
+    but real nice, 900-1400$
+ 10. [X100f ø49](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100f/) 1200-1600$ on kijiji
 
 2013-2017 shopping
 ==================

include in series
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 86f1f849..5cc616b0 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -6,7 +6,7 @@
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
 > * [[Updates in container isolation|2018-05-31-secure-pods]]
 > * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
-> * [[Easier container security with entitlements|entitlements]] (to be published)
+> * [[Easier container security with entitlements|2018-06-13-easier-container-security-entitlements]]
 
 This is a rant I wrote while attending [KubeCon Europe 2018](https://kccnceu18.sched.com/). I do
 not know how else to frame this deep discomfort I have with the way
diff --git a/blog/2018-05-29-autoscaling-kubernetes.mdwn b/blog/2018-05-29-autoscaling-kubernetes.mdwn
index a8069d4a..3edbac2e 100644
--- a/blog/2018-05-29-autoscaling-kubernetes.mdwn
+++ b/blog/2018-05-29-autoscaling-kubernetes.mdwn
@@ -9,7 +9,7 @@
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]] (this article)
 > * [[Updates in container isolation|2018-05-31-secure-pods]]
 > * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
-> * [[Easier container security with entitlements|entitlements]] (to be published)
+> * [[Easier container security with entitlements|2018-06-13-easier-container-security-entitlements]]
 
 [[!toc levels=2]]
 
diff --git a/blog/2018-05-31-secure-pods.mdwn b/blog/2018-05-31-secure-pods.mdwn
index 8a148670..d89d8bde 100644
--- a/blog/2018-05-31-secure-pods.mdwn
+++ b/blog/2018-05-31-secure-pods.mdwn
@@ -8,7 +8,7 @@
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
 > * [[Updates in container isolation|2018-05-31-secure-pods]] (this article)
 > * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
-> * [[Easier container security with entitlements|entitlements]] (to be published)
+> * [[Easier container security with entitlements|2018-06-13-easier-container-security-entitlements]]
 
 [KubeCon EU](https://lwn.net/Archives/ConferenceByYear/#2018-KubeCon_EU)
 At [KubeCon + CloudNativeCon
diff --git a/blog/2018-05-31-securing-container-supply.mdwn b/blog/2018-05-31-securing-container-supply.mdwn
index b045b4d4..4c42f622 100644
--- a/blog/2018-05-31-securing-container-supply.mdwn
+++ b/blog/2018-05-31-securing-container-supply.mdwn
@@ -8,7 +8,7 @@
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
 > * [[Updates in container isolation|2018-05-31-secure-pods]]
 > * [[Securing the container image supply chain|2018-05-31-securing-container-supply]] (this article)
-> * [[Easier container security with entitlements|entitlements]] (to be published)
+> * [[Easier container security with entitlements|2018-06-13-easier-container-security-entitlements]]
 
 [KubeCon EU](https://lwn.net/Archives/ConferenceByYear/#2018-KubeCon_EU)
 "Security is hard" is a tautology, especially in the fast-moving world
diff --git a/blog/2018-06-13-easier-container-security-entitlements.mdwn b/blog/2018-06-13-easier-container-security-entitlements.mdwn
index 94ac0116..861c160d 100644
--- a/blog/2018-06-13-easier-container-security-entitlements.mdwn
+++ b/blog/2018-06-13-easier-container-security-entitlements.mdwn
@@ -1,10 +1,16 @@
 [[!meta title="Easier container security with entitlements"]]
-\[LWN subscriber-only content\]
--------------------------------
 
 [[!meta date="2018-05-22T00:00:00+0000"]]
 [[!meta updated="2018-05-23T09:32:11-0400"]]
 
+> This article is part of a series on KubeCon Europe 2018.
+>
+> * [[Diversity, education, privilege and ethics in technology|2018-05-26-kubecon-rant]]
+> * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
+> * [[Updates in container isolation|2018-05-31-secure-pods]]
+> * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
+> * [[Easier container security with entitlements|2018-06-13-easier-container-security-entitlements]] (this article)
+
 [[!toc levels=2]]
 
 During [KubeCon + CloudNativeCon Europe

add tags, rename
diff --git a/blog/entitlements.mdwn b/blog/2018-06-13-easier-container-security-entitlements.mdwn
similarity index 99%
rename from blog/entitlements.mdwn
rename to blog/2018-06-13-easier-container-security-entitlements.mdwn
index da54f883..94ac0116 100644
--- a/blog/entitlements.mdwn
+++ b/blog/2018-06-13-easier-container-security-entitlements.mdwn
@@ -223,9 +223,6 @@ A YouTube [video](https://www.youtube.com/watch?v=Jbqxsli2tRw) and
 \[PDF\]](https://schd.ws/hosted_files/kccnceu18/d1/Kubecon%20Entitlements.pdf)
 of the talk are available.
 
-\[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting
-my travel to the event.\]
-
 ------------------------------------------------------------------------
 
 
@@ -235,4 +232,4 @@ my travel to the event.\]
 [first appeared]: https://lwn.net/Articles/755238/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn kubernetes containers security]]

Added a comment
diff --git a/blog/2018-05-26-kubecon-rant/comment_6_a00fd375402b5c6145907d349905e5be._comment b/blog/2018-05-26-kubecon-rant/comment_6_a00fd375402b5c6145907d349905e5be._comment
new file mode 100644
index 00000000..c687ea88
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_6_a00fd375402b5c6145907d349905e5be._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="130.125.11.171"
+ claimedauthor="Hugues le cousin"
+ subject="comment 6"
+ date="2018-06-11T13:54:26Z"
+ content="""
+Excellent article!
+"""]]

notes on the gemini
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index da879b23..d7ce530c 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -40,6 +40,18 @@ Tiny - closer to a phone:
 
 https://www.indiegogo.com/projects/gemini-pda-android-linux-keyboard-mobile-device-phone#/
 
+[First impressions](https://www.sommitrealweird.co.uk/blog/2018/06/07/psion-gemini/), from fellow Debian maintainer Brett Parker:
+
+> I look forward to seeing where it goes, I'm happy to have been an
+> early backer, as I don't think I'd pay the current retail price for
+> one. [...] not bad once you're on a call, but not great until you're
+> on a call, and I certainly wouldn't use it to replace the Samsung
+> Galaxy S7 Edge that I currently use as my full time phone. [...]
+> really rather useful as a sysadmin tool when you don't want to be
+> lugging a full laptop around with you, the keyboard is better than
+> using the on screen keyboard on the phone.
+
+
 Mnt reform
 ----------
 

add custom suffix for tar chroot otherwise it fails
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index e96fd484..1fe93d50 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -463,7 +463,7 @@ This assumes that:
     you need that feature (e.g. the mercurial test suite relies on
     this). to create a tarball image, use this:
     
-        sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/unstable-amd64-sbuild.tar.gz unstable `mktemp -d` http://deb.debian.org/debian
+        sudo sbuild-createchroot --make-sbuild-tarball=/srv/chroot/unstable-amd64-sbuild.tar.gz unstable --chroot-prefix unstable-tar `mktemp -d` http://deb.debian.org/debian
 """]]
 
 The above will create chroots for all the main suites and two

rewrite the offloading section with sbuild
The procedure was written for cowpoke. That relies on cowbuilder,
which is a secondary option for us. With sbuild, there's no
sbuild-specific *tool* to do remote builds, but after looking at how
cowpoke works, it is simple enough to reproduce what it does with
simple commands (yay dcmd).
I found out about dcmd after looking at the cowpoke source code.
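For context, a rough sketch of what dcmd buys you (hypothetical source
package foo_1.0.dsc and build host example.net; the diff below shows
the full workflow):

    # list every file referenced by the source package
    $ dcmd ls -l ../foo_1.0.dsc
    # run an arbitrary command on that whole file set,
    # e.g. copy the .dsc and its tarballs to a build host
    $ dcmd scp ../foo_1.0.dsc example.net:dist/
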
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 4e8f2dc7..e96fd484 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -562,24 +562,28 @@ really is...
 [AskUbuntu.com has a good comparative between pbuilder and sbuild]: http://askubuntu.com/questions/53014/why-use-sbuild-over-pbuilder
 """]]
 
-Offloading: cowpoke and debomatic
----------------------------------
+<a name="offloading-cowpoke-and-debomatic" />
+
+Build servers
+-------------
 
 Sometimes, your machine is too slow to build this stuff yourself. If
-you have a more powerful machine lying around, you can use [cowpoke][]
-to send builds to that machine. `cowpoke` operates on a source package
-(the `.dsc` file created with `dpkg-source -b`) and works well with
-`gitpkg`. By default, `cowpoke` logs into a remote server and uses
-`sudo` to call `cowbuilder` to build a chroot. For example, this will
-build calibre on the remote host `buildd.example.com`, in a
-`jessie/amd64` chroot:
+you have a more powerful machine lying around, you can send a source
+package to the machine and build the package there.
 
-    cowpoke --buildd buildd.example.com --dist sid --arch amd64 calibre_2.55.0+dfsg-1~bpo8+1.dsc
+You first need to create a source package (the `.dsc` file created
+with `dpkg-buildpackage -S`) and transfer the files over. An easy way to do
+the latter is with the [dcmd](https://manpages.debian.org/dcmd) command. For example, this will
+create a source package, transfer it to the remote host
+`example.net` and build it:
 
-This assume the chroot already exists of course. You can create it by
-using the `--create` argument. It also only works for `.dsc` files, so
-it doesn't cooperate well with git-buildpackage, which expects a
-`debuild`-like interface.
+    foo-1.0$ dpkg-buildpackage -S
+    foo-1.0$ dcmd scp ../foo_1.0.dsc example.net:dist/
+    foo-1.0$ ssh example.net sbuild dist/foo_1.0.dsc
+
+The above might cause trouble if you are working within a git
+repository, as `dpkg-source` might bundle `.git` files that should be
+ignored.
 
 To build from git, you first use `gitpkg` to generate a `.dsc` file
 from the git tree, where `rev` is the current commit of the debian
@@ -589,7 +593,23 @@ tarball if not already present:
 
     gitpkg rev upstream
 
-Then call `cowpoke` on the resulting `.dsc` file.
+Then you can use the resulting `.dsc` file as normal.
+
+[[!note """
+If you use cowbuilder, you can also use [cowpoke][] instead of the
+above. Since I started using `sbuild`, I do not have a `cowbuilder`
+setup anymore. 
+
+By default, `cowpoke` logs into a remote server and uses `sudo` to
+call `cowbuilder` to build a chroot. For example, this will build
+calibre on the remote host `buildd.example.com`, in a `jessie/amd64`
+chroot:
+
+    cowpoke --buildd buildd.example.com --dist sid --arch amd64 calibre_2.55.0+dfsg-1~bpo8+1.dsc
+
+This assumes the chroot already exists, of course. You can create it by
+using the `--create` argument.
+"""]]
 
 If you do not have your own host to build packages, you can upload
 source packages to another buildd using `dput`, for example through

use dpkg-buildpackage -S instead of dpkg-source -b .
This means one less command to learn: you *need*
dpkg-buildpackage -S to perform source-only uploads, while dpkg-source
-b only builds a source package. The latter is useful for building,
but that's it, so we drop it.
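Roughly, the difference this refers to (a sketch, run from the
package's unpacked source tree):

    # builds just the source package (.dsc + tarballs)
    $ dpkg-source -b .
    # builds the source package *and* a _source.changes file,
    # which is what a source-only upload needs
    $ dpkg-buildpackage -S
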
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 00d11751..4e8f2dc7 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -436,9 +436,8 @@ configurations that can contaminate the build.
 
 `sbuild` takes your source package (the `.dsc` file), and builds it in
 a clean, temporary `chroot`. To create that `.dsc` file, you can use
-`dpkg-source -b` or simply call `sbuild` in the source directory. Some
-places, like Debomatic, require a full `.changes` file, which is
-generated with `dpkg-buildpackage -S`.
+`dpkg-buildpackage -S` or simply call `sbuild` in the source directory,
+which will create it for you.
 
 To use sbuild, you first need to configure an image:
 
@@ -478,7 +477,7 @@ packages) and each chroot is between 500MB and 700MB.
 Then I build packages in one of three ways.
 
  1. If I have a `.dsc` already (again, that can be generated with
-    `dpkg-source -b` in the source tree):
+    `dpkg-buildpackage -S` in the source tree):
  
         sbuild calibre_2.55.0+dfsg-1~bpo8+1.dsc
 

new attack against the STM / FST-01
diff --git a/blog/2017-10-26-comparison-cryptographic-keycards/comment_1_bf4338f21b87f54ff6cacf9add07f1c8._comment b/blog/2017-10-26-comparison-cryptographic-keycards/comment_1_bf4338f21b87f54ff6cacf9add07f1c8._comment
new file mode 100644
index 00000000..9f118536
--- /dev/null
+++ b/blog/2017-10-26-comparison-cryptographic-keycards/comment_1_bf4338f21b87f54ff6cacf9add07f1c8._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""STM32F103 secrets extraction"""
+ date="2018-06-06T00:05:50Z"
+ content="""
+FST-01's author (gniibe) recently found out about a key extraction service online, see this post: [STM32F103 flash ROM read-out service](https://lists.gnupg.org/pipermail/gnupg-users/2018-June/060601.html).
+
+This does not profoundly change the conclusions from this article: attacks were already known against the previous STMF design. That such an attack can be mounted against the FST-01 firmware is not surprising, although it is a little disconcerting to find out about it as a *service* (as opposed to a publication or research).
+
+gniibe suggested using the gnuk's new [KDF-DO algorithm](https://dev.gnupg.org/T3152), which seems to be a custom-built key derivation function built with SHA-2. I can't tell from a quick inspection of the source but it probably suffers from the same issues as [PBKDF](https://en.wikipedia.org/wiki/PBKDF2) in that it's vulnerable to custom-built ASICs, or, to put it simply: SHA-2 is too fast and shouldn't be used in a KDF. Unfortunately, the limited hardware capabilities of the FST-01 might limit the gnuk to that algorithm for now. It might also be possible to do the KDF on the host which might leverage more powerful processing. The new KDF support was released in [Gnuk 1.2.8](https://www.fsij.org/gnuk/version1_2_8.html) in January 2018.
+
+All this requires physical access to the key, and having the key lost long enough to allow for such extraction should probably trigger key revocation which was also explained in the article.
+
+Just something to keep in mind still...
+"""]]

new bottom
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index f5806719..bad31ae0 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -190,7 +190,7 @@ Reference
  * [35mm f/2 R WR ø43](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf2_r_wr/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f2.htm), [fstoppers](https://fstoppers.com/gear/fstoppers-reviews-fujifilm-35mm-f2-wr-158227), nice size,
    sealed, 350-400$ on kijiji 
  * [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("an extraordinary lens"),
-   700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 460$ on kijiji
+   700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 400-460$ on kijiji
  * [60mm f/2.4 R Macro ø39mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf60mmf24_r_macro/), [Rockwell](https://kenrockwell.com/fuji/x-mount-lenses/60mm-f24.htm), [Photograph blog](http://www.photographyblog.com/reviews/fujifilm_xf_60mm_f2_4_r_review/),
    420-700$ on kijiji
  * [X100f ø49](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100f/) 1200-1600$ on kijiji

limit image width so it scales properly
diff --git a/blog/2018-05-31-secure-pods.mdwn b/blog/2018-05-31-secure-pods.mdwn
index 5d9437db..8a148670 100644
--- a/blog/2018-05-31-secure-pods.mdwn
+++ b/blog/2018-05-31-secure-pods.mdwn
@@ -118,7 +118,7 @@ and audit. It provides a cleaner and simpler interface: no hardware
 drivers, interrupts, or I/O port support to implement, as the host
 operating system takes care of all that mess.
 
-[[!img gvisor-architecture.png alt="gVisor architecture"]]
+[[!img gvisor-architecture.png size="600x" alt="gVisor architecture"]]
 
 As we can see in the diagram above (taken from the talk slides),
 gVisor has a component called "sentry" that implements the core of the

two new articles online
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 0cd2e395..86f1f849 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -4,8 +4,8 @@
 >
 > * [[Diversity, education, privilege and ethics in technology|2018-05-26-kubecon-rant]] (this article)
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
-> * [[Updates in container isolation|2018-05-29-secure-pods]] (to be published)
-> * [[Securing the container image supply chain|image-security]] (to be published)
+> * [[Updates in container isolation|2018-05-31-secure-pods]]
+> * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
 > * [[Easier container security with entitlements|entitlements]] (to be published)
 
 This is a rant I wrote while attending [KubeCon Europe 2018](https://kccnceu18.sched.com/). I do
diff --git a/blog/2018-05-29-autoscaling-kubernetes.mdwn b/blog/2018-05-29-autoscaling-kubernetes.mdwn
index 6d34e8c7..a8069d4a 100644
--- a/blog/2018-05-29-autoscaling-kubernetes.mdwn
+++ b/blog/2018-05-29-autoscaling-kubernetes.mdwn
@@ -7,8 +7,8 @@
 >
 > * [[Diversity, education, privilege and ethics in technology|2018-05-26-kubecon-rant]]
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]] (this article)
-> * [[Updates in container isolation|2018-05-29-secure-pods]] (to be published)
-> * [[Securing the container image supply chain|image-security]] (to be published)
+> * [[Updates in container isolation|2018-05-31-secure-pods]]
+> * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
 > * [[Easier container security with entitlements|entitlements]] (to be published)
 
 [[!toc levels=2]]
diff --git a/blog/2018-05-31-secure-pods.mdwn b/blog/2018-05-31-secure-pods.mdwn
new file mode 100644
index 00000000..5d9437db
--- /dev/null
+++ b/blog/2018-05-31-secure-pods.mdwn
@@ -0,0 +1,245 @@
+[[!meta title="Updates in container isolation"]]
+[[!meta date="2018-05-16T12:00:00-0500"]]
+[[!meta updated="2018-05-31T13:22:07-04:00"]]
+
+> This article is part of a series on KubeCon Europe 2018.
+>
+> * [[Diversity, education, privilege and ethics in technology|2018-05-26-kubecon-rant]]
+> * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
+> * [[Updates in container isolation|2018-05-31-secure-pods]] (this article)
+> * [[Securing the container image supply chain|2018-05-31-securing-container-supply]]
+> * [[Easier container security with entitlements|entitlements]] (to be published)
+
+[KubeCon EU](https://lwn.net/Archives/ConferenceByYear/#2018-KubeCon_EU)
+At [KubeCon + CloudNativeCon
+Europe](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/)
+2018, several talks explored the topic of container isolation and
+security. The last year saw the release of [Kata
+Containers](https://katacontainers.io/) which, combined with the
+[CRI-O](http://cri-o.io/) project, provided strong isolation guarantees
+for containers using a hypervisor. During the conference, Google
+released its own hypervisor called
+[gVisor](https://github.com/google/gvisor), adding yet another possible
+solution for this problem. Those new developments prompted the community
+to work on integrating the concept of "secure containers" (or "sandboxed
+containers") deeper into Kubernetes. This work is now coming to
+fruition; it prompts us to look again at how Kubernetes tries to keep
+the bad guys from wreaking havoc once they break into a container.
+
+Attacking and defending the container boundaries
+------------------------------------------------
+
+[![Tim Allclair](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3523.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/43)
+
+Tim Allclair's
+[talk](https://kccnceu18.sched.com/event/Dqvf/secure-pods-tim-allclair-google-advanced-skill-level)
+([slides
+\[PDF\]](https://schd.ws/hosted_files/kccnceu18/96/Secure%20Pods%20-%20KubeCon%20EU%202018.pdf),
+[video](https://www.youtube.com/watch?v=GLwmJh-j3rs)) was all about
+explaining the possible attacks on secure containers. To simplify,
+Allclair said that "secure is isolation, even if that's a little
+imprecise" and explained that isolation is directional across
+boundaries: for example, a host might be isolated from a guest
+container, but the container might be fully visible from the host. So
+there are two distinct problems here: threats from the outside
+(attackers trying to get *into* a container) and threats from the
+inside (attackers trying to get *out* of a compromised
+container). Allclair's talk focused on the latter. In this context,
+sandboxed containers are concerned with threats from the *inside*;
+once the attacker is inside the sandbox, they should not be able to
+compromise the system any further.
+
+Attacks can take multiple forms: untrusted code provided by users in
+multi-tenant clusters, un-audited code fetched from random sites by
+trusted users, or trusted code compromised through an unknown
+vulnerability. According to Allclair, defending a system from a
+compromised container is harder than defending a container from external
+threats, because there is a larger attack surface. While outside
+attackers only have access to a single port, attackers on the inside
+often have access to the kernel's extensive system-call interface, a
+multitude of storage backends, the internal network, daemons providing
+services to the cluster, hardware interfaces, and so on.
+
+Taking those vectors one by one, Allclair first looked at the kernel and
+said that there were [169 code execution
+vulnerabilities](https://www.cvedetails.com/vulnerability-list/vendor_id-33/product_id-47/year-2017/opec-1/Linux-Linux-Kernel.html)
+in the Linux kernel in 2017. He admitted this was a bit of fear
+mongering; it indeed was a rather [unusual
+year](https://www.cvedetails.com/product/47/Linux-Linux-Kernel.html?vendor_id=33)
+and "most of those were in mobile device drivers". These vulnerabilities
+are not really a problem for Kubernetes unless you run it on your phone.
+Allclair said that at least one attendee at the conference was probably
+doing exactly that; as it turns out, some people have managed to [run
+Kubernetes on a vacuum
+cleaner](https://kccnceu18.sched.com/event/DqwI/why-running-kubelet-on-your-vacuum-robot-is-not-a-good-idea-christian-simon-jetstack-any-skill-level).
+Container runtimes implement all sorts of mechanisms to reduce the
+kernel's attack surface: Docker has seccomp profiles, but Kubernetes
+turns those off by default. Runtimes will use AppArmor or SELinux rule
+sets. There are also ways to run containers as non-root, which was the
+topic of a pun-filled [separate
+talk](https://kccnceu18.sched.com/event/DquO/the-route-to-rootless-containers-ed-king-pivotal-julz-friedman-ibm-any-skill-level)
+as well. Unfortunately, those mechanisms do not fundamentally solve the
+problem of kernel vulnerabilities. Allclair cited the [Dirty
+COW](https://dirtycow.ninja/) vulnerability as a classic example of a
+container escape through race conditions on system calls that *are*
+allowed by security profiles.
+
+The proposed solution to this problem is to add a second security
+boundary. This is apparently an overarching principle at Google,
+according to Allclair: "At Google, we have this principle security
+principle that between any untrusted code and user data there have to be
+at least two distinct security boundaries so that means two independent
+security mechanisms need to fail in order to for that untrusted code to
+get out that user data."
+
+Adding another boundary makes attacks harder to accomplish. One such
+solution is to use a hypervisor like Kata Containers or gVisor. Those
+new runtimes depend on a `sandboxed` setting that is still in the
+proposal stage in the Kubernetes API.
+
+gVisor as an extra boundary
+---------------------------
+
+[![Dawn Chen](https://photos.anarc.at/events/kubecon-eu-2018/original/DSCF3361.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/31)
+
+Let's look at gVisor as an example hypervisor. Google spent five years
+developing the project in the dark before sharing it with the
+world. At KubeCon, it was introduced in a keynote and a more in-depth
+[talk](https://kccnceu18.sched.com/event/Dqv1/best-practice-for-container-security-at-scale-dawn-chen-zhengyu-he-google-intermediate-skill-level)
+([slides
+\[PDF\]](https://schd.ws/hosted_files/kccnceu18/47/Container%20Isolation%20at%20Scale.pdf),
+[video](https://www.youtube.com/watch?v=pWyJahTWa4I)) by Dawn Chen and
+Zhengyu He. gVisor is a user-space kernel that implements a subset of
+the Linux kernel API, but which was written from scratch in Go. The
+idea is to have an independent kernel that reduces the attack surface;
+while the Linux kernel has 20 million lines of code, at the time of
+writing gVisor only has 185,000, which should make it easier to review
+and audit. It provides a cleaner and simpler interface: no hardware
+drivers, interrupts, or I/O port support to implement, as the host
+operating system takes care of all that mess.
+
+[[!img gvisor-architecture.png alt="gVisor architecture"]]
+
+As we can see in the diagram above (taken from the talk slides),
+gVisor has a component called "sentry" that implements the core of the
+system-call logic. It uses `ptrace()` out of the box for portability
+reasons, but can also work with KVM for better security and performance,
+as `ptrace()` is slow and racy. Sentry can use KVM to map processes to
+CPUs and provide lower-level support like privilege separation and
+memory-management. He suggested thinking of gVisor as a "layered
+solution" to provide isolation, as it also uses seccomp filters and
+namespaces. He explained how it differed from user-mode Linux (UML):
+while UML is a port of Linux to user space, gVisor actually reimplements
+the Linux system calls (211 of the 319 x86-64 system calls) using only
+64 system calls in the host system. Another key difference from other
+systems, like [unikernels](http://unikernel.org/) or Google's Native
+Client ([NaCL](https://developer.chrome.com/native-client)), is that it
+can run unmodified binaries. To fix classes of attacks relying on the
+`open()` system call, gVisor also forbids any direct filesystem access;
+all filesystem operations go through a second process called the
+`gopher` that enforces access permissions, in another example of a
+double security boundary.
+
+[![Zhengyu He](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3367.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/33)
+
+According to He, gVisor has a 150ms startup time and 15MB overhead,
+close to Kata Containers startup times, but smaller in terms of
+memory.  He said the approach is good for small containers in
+high-density workloads. It is not so useful for trusted images
+(because it's not required), workloads that make heavy use of system
+calls (because of the performance overhead), or workloads that require
+hardware access (because that's not available at all). Even though
+gVisor implements a large number of system calls, some functionality is
+missing. There is no System V shared memory, for example, which means
+[PostgreSQL](https://github.com/google/gvisor/issues/3) does not work
+under gVisor. A simple `ping` might not work either, as gVisor [lacks
+`SOCK_RAW` support](https://github.com/google/gvisor/issues/6). Linux
+has been in use for decades now and is more than just a set of system
+calls: interfaces like `/proc` and sysfs also make Linux what it is.
+~~gVisor implements none of those~~ Of those, gVisor only implements a
+subset of `/proc` currently, with the result that some containers will
+not work with gVisor without modification, for now.
+
+As an aside, the new hypervisor does allow for experimentation and
+development of new system calls directly in user space. The speakers

(Diff truncated)
slides available
diff --git a/blog/2018-05-26-kubecon-rant/comment_5_6edd9e76d5b80cd6f163ed9b6ddcd5ad._comment b/blog/2018-05-26-kubecon-rant/comment_5_6edd9e76d5b80cd6f163ed9b6ddcd5ad._comment
new file mode 100644
index 00000000..bb0c5fe7
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_5_6edd9e76d5b80cd6f163ed9b6ddcd5ad._comment
@@ -0,0 +1,31 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""presentation"""
+ date="2018-05-31T15:26:03Z"
+ content="""
+Out of sheer coincidence, a friend asked me this week to do my (now
+usual) presentation in their class. The course is called
+"*Informatique et société*" and is given at [UQAM](http://uqam.ca) to computer
+science students, as a general philosophy / ethics class.
+
+I often go there to talk about free software, my experience working at
+[Koumbit](https://koumbit.org) and so on, but this time around I had
+some other material to give, so I gave an ardent talk based on this
+article which was met with a round of applause and heated debate about
+whether those problems were bound to human nature (I don't think so),
+what we can actually do about it (just do the right thing, damnit) and
+what kind of ethical decisions or projects people get involved in
+(some students help community groups with their website, others get
+involved in free software).
+
+We talked about self-driving cars and the trolley problem (why are
+people on the track in the first place? why don't we just get rid of
+cars instead?), about social networking (people were just as isolated
+in the bus when they were reading their newspapers a century ago, but
+social media is engineered to create addiction, craving, and filter
+bubbles) and much more than I can relate here.
+
+It was fun.
+
+Slides are available in [this repository](https://gitlab.com/anarcat/presentation-ethics).
+"""]]

link to systemd experiments
diff --git a/services/mail/syncmaildir.mdwn b/services/mail/syncmaildir.mdwn
index 31fbcb3b..fac52809 100644
--- a/services/mail/syncmaildir.mdwn
+++ b/services/mail/syncmaildir.mdwn
@@ -292,7 +292,8 @@ one-time cost only without data loss.
         chmod +x ~/.smd/hooks/post-pull.d/notmuch
 
  16. start `smd-loop` from *somewhere*. for now just running in place
-     of OfflineIMAP in a different workspace
+     of OfflineIMAP in a different workspace. I'm also experimenting
+     with [starting SMD from systemd](https://github.com/gares/syncmaildir/issues/10)
 
  17. create restricted shell. this means creating a new key with
      `ssh-keygen`, without password, and adding it on the server in

fix typo
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 9cfd161d..0cd2e395 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -2,7 +2,7 @@
 
 > This article is part of a series on KubeCon Europe 2018.
 >
-> * [[Diversity, education, privilege and ethics in technology|2018-05-25-kubecon-rant]] (this article)
+> * [[Diversity, education, privilege and ethics in technology|2018-05-26-kubecon-rant]] (this article)
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
 > * [[Updates in container isolation|2018-05-29-secure-pods]] (to be published)
 > * [[Securing the container image supply chain|image-security]] (to be published)
diff --git a/blog/2018-05-29-autoscaling-kubernetes.mdwn b/blog/2018-05-29-autoscaling-kubernetes.mdwn
index b380e51e..6d34e8c7 100644
--- a/blog/2018-05-29-autoscaling-kubernetes.mdwn
+++ b/blog/2018-05-29-autoscaling-kubernetes.mdwn
@@ -5,7 +5,7 @@
 
 > This article is part of a series on KubeCon Europe 2018.
 >
-> * [[Diversity, education, privilege and ethics in technology|2018-05-25-kubecon-rant]]
+> * [[Diversity, education, privilege and ethics in technology|2018-05-26-kubecon-rant]]
 > * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]] (this article)
 > * [[Updates in container isolation|2018-05-29-secure-pods]] (to be published)
 > * [[Securing the container image supply chain|image-security]] (to be published)

publish first LWN article in the series
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index ea6dce56..9cfd161d 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -1,5 +1,13 @@
 [[!meta title="Diversity, education, privilege and ethics in technology"]]
 
+> This article is part of a series on KubeCon Europe 2018.
+>
+> * [[Diversity, education, privilege and ethics in technology|2018-05-25-kubecon-rant]] (this article)
+> * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]]
+> * [[Updates in container isolation|2018-05-29-secure-pods]] (to be published)
+> * [[Securing the container image supply chain|image-security]] (to be published)
+> * [[Easier container security with entitlements|entitlements]] (to be published)
+
 This is a rant I wrote while attending [KubeCon Europe 2018](https://kccnceu18.sched.com/). I do
 not know how else to frame this deep discomfort I have with the way
 one of the most cutting edge projects in my community is moving. I see
diff --git a/blog/2018-05-29-autoscaling-kubernetes.mdwn b/blog/2018-05-29-autoscaling-kubernetes.mdwn
index 3d600230..b380e51e 100644
--- a/blog/2018-05-29-autoscaling-kubernetes.mdwn
+++ b/blog/2018-05-29-autoscaling-kubernetes.mdwn
@@ -3,6 +3,14 @@
 [[!meta date="2018-05-14T00:00:00+0000"]]
 [[!meta updated="2018-05-29T09:42:21-0400"]]
 
+> This article is part of a series on KubeCon Europe 2018.
+>
+> * [[Diversity, education, privilege and ethics in technology|2018-05-25-kubecon-rant]]
+> * [[Autoscaling for Kubernetes workloads|2018-05-29-autoscaling-kubernetes]] (this article)
+> * [[Updates in container isolation|2018-05-29-secure-pods]] (to be published)
+> * [[Securing the container image supply chain|image-security]] (to be published)
+> * [[Easier container security with entitlements|entitlements]] (to be published)
+
 [[!toc levels=2]]
 
 Technologies like containers, clusters, and Kubernetes offer the
@@ -254,9 +262,6 @@ dive](https://kccnceu18.sched.com/event/DroC/sig-autoscaling-deep-dive-marcin-wi
 be interesting to our readers who want all the gory details about
 Kubernetes autoscaling.
 
-\[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting
-my travel to the event.\]
-
 ------------------------------------------------------------------------
 
 
@@ -266,4 +271,4 @@ my travel to the event.\]
 [first appeared]: https://lwn.net/Articles/754153/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn kubernetes containers prometheus]]

proper name
diff --git a/blog/autoscaling.mdwn b/blog/2018-05-29-autoscaling-kubernetes.mdwn
similarity index 100%
rename from blog/autoscaling.mdwn
rename to blog/2018-05-29-autoscaling-kubernetes.mdwn

article public
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 68e53dd1..3d600230 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -1,9 +1,7 @@
 [[!meta title="Autoscaling for Kubernetes workloads"]]
-\[LWN subscriber-only content\]
--------------------------------
 
-[[!meta date="2018-05-10T00:00:00+0000"]]
-[[!meta updated="2018-05-10T13:17:48-0400"]]
+[[!meta date="2018-05-14T00:00:00+0000"]]
+[[!meta updated="2018-05-29T09:42:21-0400"]]
 
 [[!toc levels=2]]
 

Added a comment: Je partage !
diff --git a/blog/2018-05-26-kubecon-rant/comment_4_57d25a96bec21e5e061eff311aaea941._comment b/blog/2018-05-26-kubecon-rant/comment_4_57d25a96bec21e5e061eff311aaea941._comment
new file mode 100644
index 00000000..e39dc785
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_4_57d25a96bec21e5e061eff311aaea941._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="132.208.1.8"
+ claimedauthor="Thomas"
+ subject="Je partage !"
+ date="2018-05-28T14:20:36Z"
+ content="""
+Très bon article que je vais partager dans mon réseau de ce pas. Je trouve aussi que l'équilibre entre certaines valeurs (qui me touchent), l’innovation, le code ouvert, notre passion technologique, l'argent des géant américain etc est dur à trouver. Merci.
+"""]]

Added a comment: learning sysadm and moving fast
diff --git a/blog/2018-05-26-kubecon-rant/comment_3_667104772a99366e59ec1423512205d4._comment b/blog/2018-05-26-kubecon-rant/comment_3_667104772a99366e59ec1423512205d4._comment
new file mode 100644
index 00000000..7e2fe7df
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_3_667104772a99366e59ec1423512205d4._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ ip="65.103.205.222"
+ claimedauthor="der.hans"
+ subject="learning sysadm and moving fast"
+ date="2018-05-28T02:41:35Z"
+ content="""
+> There is basically no sysadmin education curriculum right now. Sure you can follow a Cisco CCNA or MSCE private trainings. But anyone who's been seriously involved in running any computing infrastructure knows those are a scam: that will tie you down in a proprietary universe (Cisco and Microsoft, respectively) and probably just to \"remote hands monkey\" positions and rarely to executive positions.
+
+Community colleges in Arizona and California do teach system administration. Arizona has had an associates degree in Linux system administration for over a decade. While the program I taught in is an offshoot to Cisco Networking Academy classes, Cisco did not provide any educational resources for the GNU/Linux classes. ( Full disclosure: I did not realize when I started teaching part-time that the GNU/Linux classes were part of a Cisco Networking Academy and my day job was as a database developer for the non-profit behind the academies )
+
+ I was an instructor for many years and many of my degree students are doing well in system administration.
+
+In regards to \"move fast and break things\", Friday Cory Doctorow recommended we should instead \"Be thoughtful and consider human circumstances\" during a panel at Phoenix Comic Fest
+"""]]

align the borg to the right
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index bb2a351e..ea6dce56 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -214,7 +214,7 @@ resistance truly is futile, the ultimate neo-colonial scheme.
  [this one]: https://en.wikipedia.org/wiki/SKYNET_(surveillance_program)
  [that one]: https://en.wikipedia.org/wiki/Skynet_(satellite)
 
-<figure>
+<figure class="align-right">
 <a href="https://en.wikipedia.org/wiki/File:Picard_as_Locutus.jpg">
 <img
 src="https://upload.wikimedia.org/wikipedia/en/a/a1/Picard_as_Locutus.jpg"

Added a comment: feedback, related articles
diff --git a/blog/2018-05-26-kubecon-rant/comment_2_1613cbb720aaeb165b2d4b6e0f57d477._comment b/blog/2018-05-26-kubecon-rant/comment_2_1613cbb720aaeb165b2d4b6e0f57d477._comment
new file mode 100644
index 00000000..6fa945b8
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_2_1613cbb720aaeb165b2d4b6e0f57d477._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="anarcat"
+ avatar="https://seccdn.libravatar.org/avatar/741655483dd8a0b4df28fb3dedfa7e4c"
+ subject="feedback, related articles"
+ date="2018-05-27T11:38:30Z"
+ content="""
+A friend referred to an article from The Atlantic, [The 9.9 Percent Is the New American Aristocracy](https://www.theatlantic.com/amp/article/559130/) that goes much deeper in some of the causes of my discomfort:
+
+>  We are not innocent bystanders to the growing concentration of wealth in our time. We are the principal accomplices in a process that is slowly strangling the economy, destabilizing American politics, and eroding democracy. Our delusions of merit now prevent us from recognizing the nature of the problem that our emergence as a class represents. We tend to think that the victims of our success are just the people excluded from the club. But history shows quite clearly that, in the kind of game we’re playing, everybody loses badly in the end.
+
+Also, I want to thank everyone for the excellent feedback that was provided and, incidentally, mention that the space here is *not* a free-for-all, totally free speech zone. This is my blog and I decide which comments get published. I fully reserve the right to remove trollish comments and I have very little tolerance for those these days. You have the [right to free speech in your own spaces](https://xkcd.com/1357/), but here should be a safe space for everyone, and that does mean I may err on the side of censorship. Sorry to bring this up, but I already had to remove one such comment and wish to avoid any sort of escalation.
+"""]]

removed
diff --git a/blog/2018-05-26-kubecon-rant/comment_1_43bd3be6bdab38fa8784ecbcdf83fb6f._comment b/blog/2018-05-26-kubecon-rant/comment_1_43bd3be6bdab38fa8784ecbcdf83fb6f._comment
deleted file mode 100644
index 335b456f..00000000
--- a/blog/2018-05-26-kubecon-rant/comment_1_43bd3be6bdab38fa8784ecbcdf83fb6f._comment
+++ /dev/null
@@ -1,12 +0,0 @@
-[[!comment format=mdwn
- ip="82.8.134.118"
- subject="Yes!"
- date="2018-05-27T08:10:07Z"
- content="""
-
-We should be forcing girls to do technology and STEM subjects at school. 
-
-Also, ban all forms of \"girly\" TV cartoons and all any makeup or fashion advice.  Make them watch Star Trek and play D&D.
-
-We can fix women to be what we want them to be!
-"""]]

Added a comment
diff --git a/blog/2018-05-26-kubecon-rant/comment_2_ae0ae1a3f81688416173d8b0261edbe5._comment b/blog/2018-05-26-kubecon-rant/comment_2_ae0ae1a3f81688416173d8b0261edbe5._comment
new file mode 100644
index 00000000..7c817b51
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_2_ae0ae1a3f81688416173d8b0261edbe5._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="80.110.92.125"
+ claimedauthor="Manu"
+ subject="comment 2"
+ date="2018-05-27T08:46:33Z"
+ content="""
+Thank you for putting this conference in a higher perspective.
+Yes the social context of what software production does matter.
+"""]]

Added a comment: Yes!
diff --git a/blog/2018-05-26-kubecon-rant/comment_1_43bd3be6bdab38fa8784ecbcdf83fb6f._comment b/blog/2018-05-26-kubecon-rant/comment_1_43bd3be6bdab38fa8784ecbcdf83fb6f._comment
new file mode 100644
index 00000000..335b456f
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant/comment_1_43bd3be6bdab38fa8784ecbcdf83fb6f._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ ip="82.8.134.118"
+ subject="Yes!"
+ date="2018-05-27T08:10:07Z"
+ content="""
+
+We should be forcing girls to do technology and STEM subjects at school. 
+
+Also, ban all forms of \"girly\" TV cartoons and all any makeup or fashion advice.  Make them watch Star Trek and play D&D.
+
+We can fix women to be what we want them to be!
+"""]]

holy crap, the freedom-to-create shit comes from cloudfoundry
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index c3b5167c..bb2a351e 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -263,7 +263,8 @@ The fraud
 
 <figure>
 <a href="https://photos.anarc.at/events/kubecon-eu-2018/#/40">
-<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.jpg" alt="A t-shirt on a booth that reads 'Freedom to create'"/> 
+<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.jpg"
+alt="A t-shirt from the Cloudfoundry booth that reads 'Freedom to create'"/> 
 </a>
 <figcaption>...but what are you creating exactly?</figcaption>
 </figure>

fix closing tags
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 2e6d97a5..c3b5167c 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -33,7 +33,7 @@ effectiveness diversity efforts in our communities.
 <img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3307.JPG" alt="a large conference room full of people that mostly look like white male, with a speaker on a large stage illuminated in white" />
 </a>
 <figcaption>4000 white men</figcaption>
-<figure>
+</figure>
 
 The truth is that contrary to programmer communities, "operations"
 knowledge ([sysadmin](https://en.wikipedia.org/wiki/System_administrator), [SRE](https://en.wikipedia.org/wiki/Site_Reliability_Engineering), [DevOps](https://en.wikipedia.org/wiki/DevOps), whatever it's called
@@ -54,7 +54,7 @@ as C!) in-between sessions.
 <img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3502.JPG" alt="A bunch of white geeks hanging out with their phones next to a sign that says 'Thanks to our Diversity Scholarship Sponsors' with a bunch of corporate logos" />
 </a>
 <figcaption>Diversity program</figcaption>
-<figure>
+</figure>
 
 The real solutions to the lack of diversity in our communities not
 only comes from a change in culture, but also real investments in
@@ -171,7 +171,7 @@ supply chain of the modern technology that is destroying the planet.
 <img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3484.JPG" alt="An empty shipping container probably made of cardboard hanging over the IBM booth"/> 
 </a>
 <figcaption>Nothing is like corporate nothing.</figcaption>
-<figure>
+</figure>
 
 The nice little boxes and containers we call the cloud all abstract
 this away from us and those dependencies are actively encouraged in
@@ -223,7 +223,7 @@ alt="Captain Jean-Luc Picard, played by Patrick Stewart, assimilated by the Borg
 <figcaption> "We are the Borg. Your biological and technological
 distinctiveness will be added to our own. Resistance is
 futile."</figcaption>
-<figure>
+</figure>
 
 The "hackers" of our age are building this machine with conscious
 knowledge of the social and ethical implications of their work. At
@@ -266,7 +266,7 @@ The fraud
 <img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.jpg" alt="A t-shirt on a booth that reads 'Freedom to create'"/> 
 </a>
 <figcaption>...but what are you creating exactly?</figcaption>
-<figure>
+</figure>
 
 And that is the ultimate fraud: to make the world believe we are
 harmless little boys, so repressed that we can't communicate

take extra care of images in this image-full article
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 817626dd..2e6d97a5 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -8,7 +8,11 @@ figured it was as good a way as any to open the discussion regarding
 how free software communities seem to naturally evolved into corporate
 money-making machines with questionable ethics.
 
-[![A white man groomed by a white woman](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3129.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/0)
+<figure>
+<a href="https://photos.anarc.at/events/kubecon-eu-2018/#/0">
+<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3129.JPG" alt="A white male looking at his phone while a hair-dresser prepares him for a video shoot, with plants and audio-video equipment in the background" /></a>
+<figcaption>A white man groomed by a white woman</figcaption>
+</figure>
 
 Diversity and education
 -----------------------
@@ -24,7 +28,12 @@ course, there's real life out there, where women constitute basically
 half the population, of course. This says something about the actual
 effectiveness diversity efforts in our communities.
 
-[![4000 white men](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3307.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/24)
+<figure>
+<a href="https://photos.anarc.at/events/kubecon-eu-2018/#/24">
+<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3307.JPG" alt="a large conference room full of people that mostly look like white male, with a speaker on a large stage illuminated in white" />
+</a>
+<figcaption>4000 white men</figcaption>
+<figure>
 
 The truth is that contrary to programmer communities, "operations"
 knowledge ([sysadmin](https://en.wikipedia.org/wiki/System_administrator), [SRE](https://en.wikipedia.org/wiki/Site_Reliability_Engineering), [DevOps](https://en.wikipedia.org/wiki/DevOps), whatever it's called
@@ -40,7 +49,12 @@ useful there, but I acquired those *before* going to university: even
 there teachers expected students to learn programming languages (such
 as C!) in-between sessions.
 
-[![Diversity program](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3502.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/42)
+<figure>
+<a href="https://photos.anarc.at/events/kubecon-eu-2018/#/42">
+<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3502.JPG" alt="A bunch of white geeks hanging out with their phones next to a sign that says 'Thanks to our Diversity Scholarship Sponsors' with a bunch of corporate logos" />
+</a>
+<figcaption>Diversity program</figcaption>
+<figure>
 
 The real solutions to the lack of diversity in our communities not
 only comes from a change in culture, but also real investments in
@@ -95,7 +109,10 @@ place. Shouldn't we converge over selling *less* hammers? Making them
 more solid, reliable, so that they are passed down from generations
 instead of breaking and having to be replaced all the time?
 
-![Home Depot ecstasy](home-depot-speed.png)
+<figure>
+<img src="home-depot-speed.png" alt="Slide from Kearn's keynote that shows a women with perfect nail polish considering a selection of paint colors with the Home Depot logo and stats about 'speed' in their deployment " />
+<figcaption>Home Depot ecstasy</figcaption>
+</figure>
 
 We're solving a problem that wasn't there in some new absurd faith
 that code deployments will naturally make people happier, by making
@@ -106,7 +123,10 @@ military in existence would move state secrets into a private cloud,
 out of the control of any government. It's the name of the game, at
 KubeCon.
 
-![USAF saves (money)](usaf-cost-savings.png)
+<figure>
+<img src="usaf-cost-savings.png" alt="Picture of a jet fighter flying over clouds, the logo of the USAF and stats about the cost savings due their move to the cloud" />
+<figcaption>USAF saves (money)</figcaption>
+</figure>
 
 In his [keynote](https://kccnceu18.sched.com/event/Duok/keynote-cncf-20-20-vision-alexis-richardson-founder-ceo-weaveworks), Alexis Richardson, CEO of [Weaveworks](https://www.weave.works/),
 presented [the toaster project](http://www.thetoasterproject.org/) as an example of what *not* to
@@ -127,12 +147,15 @@ surveillance machine.
 Privilege
 ---------
 
-![The toaster experiment](toaster-project.jpg)
+<figure>
+<img src="toaster-project.jpg" alt="Photo of the Toaster Project book which shows a molten toster that looks like it came out of a H.P. Lovecraft novel" />
+<figcaption>
+"Left to his own devices he couldn’t build a toaster. He could just
+about make a sandwich and that was it." -- Mostly Harmless, Douglas
+Adams, 1992
+</figcaption>
+</figure>
 
-> "Left to his own devices he couldn’t build a toaster. He could just
-> about make a sandwich and that was it."
->
-> -- Mostly Harmless, Douglas Adams, 1992
 
 Staying in a hotel room for a week, all expenses paid, certainly puts
 things in perspective. Rarely have I felt more privileged in my
@@ -143,7 +166,12 @@ rock star agenda of this community. People get used to being served,
 both directly in their day to day lives, but also through the complex
 supply chain of the modern technology that is destroying the planet.
 
-[![An empty container at the IBM booth](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3484.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/37)
+<figure>
+<a href="https://photos.anarc.at/events/kubecon-eu-2018/#/37">
+<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3484.JPG" alt="An empty shipping container probably made of cardboard hanging over the IBM booth"/> 
+</a>
+<figcaption>Nothing is like corporate nothing.</figcaption>
+</figure>
 
 The nice little boxes and containers we call the cloud all abstract
 this away from us and those dependencies are actively encouraged in
@@ -186,12 +214,16 @@ resistance truly is futile, the ultimate neo-colonial scheme.
  [this one]: https://en.wikipedia.org/wiki/SKYNET_(surveillance_program)
  [that one]: https://en.wikipedia.org/wiki/Skynet_(satellite)
 
-> "We are the Borg. Your biological and technological distinctiveness
-> will be added to our own. Resistance is futile."
->
-> -- The Borg
-
-[![We are the Borg](https://upload.wikimedia.org/wikipedia/en/a/a1/Picard_as_Locutus.jpg)](https://en.wikipedia.org/wiki/File:Picard_as_Locutus.jpg)
+<figure>
+<a href="https://en.wikipedia.org/wiki/File:Picard_as_Locutus.jpg">
+<img
+src="https://upload.wikimedia.org/wikipedia/en/a/a1/Picard_as_Locutus.jpg"
+alt="Captain Jean-Luc Picard, played by Patrick Stewart, assimilated by the Borg as 'Locutus'" />
+</a>
+<figcaption> "We are the Borg. Your biological and technological
+distinctiveness will be added to our own. Resistance is
+futile."</figcaption>
+</figure>
 
 The "hackers" of our age are building this machine with conscious
 knowledge of the social and ethical implications of their work. At
@@ -229,7 +261,12 @@ companies are now [negotiating as equals](http://cphpost.dk/news/business/denmar
 The fraud
 ---------
 
-[!["Freedom to create"](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.jpg)](https://photos.anarc.at/events/kubecon-eu-2018/#/40)
+<figure>
+<a href="https://photos.anarc.at/events/kubecon-eu-2018/#/40">
+<img src="https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.jpg" alt="A t-shirt on a booth that reads 'Freedom to create'"/> 
+</a>
+<figcaption>...but what are you creating exactly?</figcaption>
+</figure>
 
 And that is the ultimate fraud: to make the world believe we are
 harmless little boys, so repressed that we can't communicate

fix title
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 12d2dd59..817626dd 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -1,4 +1,4 @@
-[[!meta title="Diversity, education, privilege and ethics in the new world"]]
+[[!meta title="Diversity, education, privilege and ethics in technology"]]
 
 This is a rant I wrote while attending [KubeCon Europe 2018](https://kccnceu18.sched.com/). I do
 not know how else to frame this deep discomfort I have with the way

creating tag page tag/reflexion
diff --git a/tag/reflexion.mdwn b/tag/reflexion.mdwn
new file mode 100644
index 00000000..9ccdf07c
--- /dev/null
+++ b/tag/reflexion.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged reflexion"]]
+
+[[!inline pages="tagged(reflexion)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/tech
diff --git a/tag/tech.mdwn b/tag/tech.mdwn
new file mode 100644
index 00000000..b2dc480f
--- /dev/null
+++ b/tag/tech.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged tech"]]
+
+[[!inline pages="tagged(tech)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/ethics
diff --git a/tag/ethics.mdwn b/tag/ethics.mdwn
new file mode 100644
index 00000000..a075bcdc
--- /dev/null
+++ b/tag/ethics.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged ethics"]]
+
+[[!inline pages="tagged(ethics)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/diversity
diff --git a/tag/diversity.mdwn b/tag/diversity.mdwn
new file mode 100644
index 00000000..9b1d3c78
--- /dev/null
+++ b/tag/diversity.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged diversity"]]
+
+[[!inline pages="tagged(diversity)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/education
diff --git a/tag/education.mdwn b/tag/education.mdwn
new file mode 100644
index 00000000..6c901cdf
--- /dev/null
+++ b/tag/education.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged education"]]
+
+[[!inline pages="tagged(education)" actions="no" archive="yes"
+feedshow=10]]

put online
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 337478e4..12d2dd59 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -248,8 +248,8 @@ the people, the machine is at the mercy of markets and powerful
 oligarchs.
 
 A recurring pattern at Kubernetes conferences is the [KubeCon
-chant](https://twitter.com/alstard/status/991615812397092865 ) where Kelsey Hightower (who I otherwise truly appreciate and
-respect) reluctantly engages the crowd in a pep chant:
+chant](https://twitter.com/alstard/status/991615812397092865 ) where [Kelsey Hightower](https://github.com/kelseyhightower/) reluctantly engages the crowd in
+a pep chant:
 
 > When I say 'Kube!', you say 'Con!'
 >
@@ -266,4 +266,4 @@ this will take it the right way, but I somehow doubt it. With chance,
 it might just become irrelevant and everything will fix itself, but
 somehow I fear things will get worse before they get better.
 
-[[!tag draft]]
+[[!tag debian-planet kubernetes containers reflexion politics tech diversity education ethics]]

fix another link
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 60a519cd..337478e4 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -229,7 +229,7 @@ companies are now [negotiating as equals](http://cphpost.dk/news/business/denmar
 The fraud
 ---------
 
-[![freedom to create](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/40)
+[!["Freedom to create"](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3496.jpg)](https://photos.anarc.at/events/kubecon-eu-2018/#/40)
 
 And that is the ultimate fraud: to make the world believe we are
 harmless little boys, so repressed that we can't communicate

set title
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 8621821f..60a519cd 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -1,3 +1,5 @@
+[[!meta title="Diversity, education, privilege and ethics in the new world"]]
+
 This is a rant I wrote while attending [KubeCon Europe 2018](https://kccnceu18.sched.com/). I do
 not know how else to frame this deep discomfort I have with the way
 one of the most cutting edge projects in my community is moving. I see

fix broken links
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
index 381a8413..8621821f 100644
--- a/blog/2018-05-26-kubecon-rant.mdwn
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -125,7 +125,7 @@ surveillance machine.
 Privilege
 ---------
 
-![The toaster experiment](kubecon-rant/toaster-project.jpg)
+![The toaster experiment](toaster-project.jpg)
 
 > "Left to his own devices he couldn’t build a toaster. He could just
 > about make a sandwich and that was it."
@@ -141,7 +141,7 @@ rock star agenda of this community. People get used to being served,
 both directly in their day to day lives, but also through the complex
 supply chain of the modern technology that is destroying the planet.
 
-![An empty container at the IBM booth](3488.JPG)
+[![An empty container at the IBM booth](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3484.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/37)
 
 The nice little boxes and containers we call the cloud all abstract
 this away from us and those dependencies are actively encouraged in
diff --git a/blog/2018-05-26-kubecon-rant/home-depo-speed.png b/blog/2018-05-26-kubecon-rant/home-depot-speed.png
similarity index 100%
rename from blog/2018-05-26-kubecon-rant/home-depo-speed.png
rename to blog/2018-05-26-kubecon-rant/home-depot-speed.png

review rant and rename
diff --git a/blog/2018-05-26-kubecon-rant.mdwn b/blog/2018-05-26-kubecon-rant.mdwn
new file mode 100644
index 00000000..381a8413
--- /dev/null
+++ b/blog/2018-05-26-kubecon-rant.mdwn
@@ -0,0 +1,267 @@
+This is a rant I wrote while attending [KubeCon Europe 2018](https://kccnceu18.sched.com/). I do
+not know how else to frame this deep discomfort I have with the way
+one of the most cutting edge projects in my community is moving. I see
+it as a symptom of so many things wrong in society at large and
+figured it was as good a way as any to open the discussion regarding
+how free software communities seem to naturally evolve into corporate
+money-making machines with questionable ethics.
+
+[![A white man groomed by a white woman](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3129.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/0)
+
+Diversity and education
+-----------------------
+
+There is often a great point made of diversity at KubeCon, and that is
+something I truly appreciate. It's one of the places where I have seen
+the largest efforts towards that goal; I was impressed by the efforts
+done in Austin, and mentioned it in [[my overview of that
+conference|blog/2017-12-13-kubecon-overview]] back then. Yet it is
+still one of the less diverse places I've ever participated in: in
+comparison, [Pycon](https://en.wikipedia.org/wiki/Python_Conference) "feels" more diverse, for example. And then, of
+course, there's real life out there, where women constitute basically
+half the population. This says something about the actual
+effectiveness of diversity efforts in our communities.
+
+[![4000 white men](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3307.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/24)
+
+The truth is that contrary to programmer communities, "operations"
+knowledge ([sysadmin](https://en.wikipedia.org/wiki/System_administrator), [SRE](https://en.wikipedia.org/wiki/Site_Reliability_Engineering), [DevOps](https://en.wikipedia.org/wiki/DevOps), whatever it's called
+these days) comes not from institutional education, but from
+self-learning. Even though I have years of university training, the
+day to day knowledge I need in my work as a sysadmin comes not from
+the university, but from late night experiments on my [[personal
+computer network|2012-11-01-my-short-computing-history/]]. This was
+first on the Macintosh, then on the FreeBSD source code of passed down
+as a magic word from an uncle and finally through Debian consecrated
+as the leftist's true computing way. Sure, my programming skills were
+useful there, but I acquired those *before* going to university: even
+there teachers expected students to learn programming languages (such
+as C!) in-between sessions.
+
+[![Diversity program](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3502.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/42)
+
+The real solutions to the lack of diversity in our communities not
+only come from a change in culture, but also from real investments in
+society at large. The mega-corporations subsidizing events like
+KubeCon make sure they get a lot of good press from those diversity
+programs. However, the money they spend on those is nothing compared
+to tax evasion in their home states. As an example, [Amazon recently
+put 7000 jobs on hold because of a tax the city of Seattle wanted to
+impose on corporations to help the homeless population](https://www.jwz.org/blog/2018/05/amazon-puts-7000-jobs-on-hold-because-of-a-tax-that-would-help-seattles-homeless-population/). Google,
+Facebook, Microsoft, and Apple all evade taxes like gangsters. This is
+important because society changes partly through education, and that
+costs money. Education is how more traditional [STEM](https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics) sectors like
+engineering and medicine have changed: women, minorities, and poorer
+populations were finally allowed into schools after the epic social
+struggles of the 1970s finally yielded more accessible education. The
+same way that culture changes are seeing a backlash, the tide is
+turning there as well and the trend is reversing towards more costly,
+less accessible education of course. But not everywhere. The impacts
+of education changes are long-lasting. By evading taxes, those
+companies are keeping the state from revenues that could level the
+playing field through affordable education.
+
+Hell, *any* education in the field would help. There is basically no
+sysadmin education curriculum right now. Sure, you can follow Cisco
+[CCNA](https://en.wikipedia.org/wiki/CCNA) or [MSCE](https://en.wikipedia.org/wiki/Microsoft_Certified_Professional) private trainings. But anyone who's been
+seriously involved in running any computing infrastructure knows those
+are a scam: that will tie you down in a proprietary universe (Cisco
+and Microsoft, respectively) and probably just to "remote hands
+monkey" positions and rarely to executive positions.
+
+Velocity
+--------
+
+Besides, providing an education curriculum would require the field to
+slow down so that knowledge would settle down and trickle into a
+curriculum. Configuration management is [pretty old](https://en.wikipedia.org/wiki/Configuration_management#History), but because
+the changes in tooling are fast, any curriculum built in the last
+decade (or even less) quickly becomes irrelevant. [Puppet](https://en.wikipedia.org/wiki/Puppet_%28software%29)
+publishes a new release [every 6 months](https://puppet.com/misc/puppet-enterprise-lifecycle), Kubernetes is barely 4
+years old now, and is changing rapidly with a [~3 month release
+schedule](https://gravitational.com/blog/kubernetes-release-cycle/).
+
+Here at KubeCon, Mark Zuckerberg's mantra of "move fast and break
+things" is everywhere. We call it "[velocity](https://www.cncf.io/blog/2017/06/05/30-highest-velocity-open-source-projects/)": where you are going
+does not matter as much as how fast you're going there. At one of the
+many [keynotes](https://kccnceu18.sched.com/event/E5wI/keynote-shaping-the-cloud-native-future-abby-kearns-executive-director-cloud-foundry-foundation-slides-attached), Abby Kearns from the [Cloud Foundry Foundation](https://www.cloudfoundry.org/)
+boasted at how Home Depot, in trying to sell more hammers than Amazon,
+is now deploying code to production multiple times a day. I am still
+unclear as to whether this made Home Depot *actually* sell more hammers,
+or if it's something that we should even care about in the first
+place. Shouldn't we converge over selling *less* hammers? Making them
+more solid, reliable, so that they are passed down through generations
+instead of breaking and having to be replaced all the time?
+
+![Home Depot ecstasy](home-depot-speed.png)
+
+We're solving a problem that wasn't there in some new absurd faith
+that code deployments will naturally make people happier, by making
+sure Home Depot sells more hammers. And that's after telling us that
+Cloud Foundry helped the USAF save 600M$ by moving their databases to
+the cloud. No one seems bothered by the idea that the most powerful
+military in existence would move state secrets into a private cloud,
+out of the control of any government. It's the name of the game, at
+KubeCon.
+
+![USAF saves (money)](usaf-cost-savings.png)
+
+In his [keynote](https://kccnceu18.sched.com/event/Duok/keynote-cncf-20-20-vision-alexis-richardson-founder-ceo-weaveworks), Alexis Richardson, CEO of [Weaveworks](https://www.weave.works/),
+presented [the toaster project](http://www.thetoasterproject.org/) as an example of what *not* to
+do. "He did not use any sourced components, everything was built from
+scratch, by hand", obviously missing the fact that toasters are
+*deliberately* *not* built from reusable parts, as part of the
+[planned obsolescence](https://en.wikipedia.org/wiki/Planned_obsolescence) design. The goal of the toaster experiment
+is also to show how fragile our civilization has become precisely
+*because* we depend on layers upon layers of parts. In this
+totalitarian view of the world, people are also "reusable" or, in that
+case "disposable components". Not just the white dudes in California,
+but also workers outsourced out of the USA decades ago; it depends on
+precious metals and the miners of Africa, the specialized labour of
+the factories and intricate knowledge of the factory workers in Asia,
+and the flooded forests of the first nations powering this terrifying
+surveillance machine.
+
+Privilege
+---------
+
+![The toaster experiment](kubecon-rant/toaster-project.jpg)
+
+> "Left to his own devices he couldn’t build a toaster. He could just
+> about make a sandwich and that was it."
+>
+> -- Mostly Harmless, Douglas Adams, 1992
+
+Staying in a hotel room for a week, all expenses paid, certainly puts
+things in perspective. Rarely have I felt more privileged in my
+entire life: someone else makes my food, makes my bed, and cleans up
+the toilet magically when I'm gone. For me, this is extraordinary, but
+for many people at KubeCon, it's routine: traveling is part of the
+rock star agenda of this community. People get used to being served,
+both directly in their day to day lives, but also through the complex
+supply chain of the modern technology that is destroying the planet.
+
+![An empty container at the IBM booth](3488.JPG)
+
+The nice little boxes and containers we call the cloud all abstract
+this away from us and those dependencies are actively encouraged in
+the community. We like containers here and their image is
+ubiquitous. We acknowledge that a single person cannot run a Kube shop
+because the knowledge is too broad to be possibly handled by a single
+person. While there are interesting collaborative and social ideas in
+that approach, I am deeply skeptical of its impact on civilization in
+the long run. We already created systems so complex that we don't
+truly know who hacked the Trump election or how. Many feel it was hacked, but
+it's really just a hunch: there were bots, maybe they were Russian, or
+maybe from [Cambridge](https://en.wikipedia.org/wiki/Cambridge_Analytica)? The [DNC emails](https://en.wikipedia.org/wiki/2016_Democratic_National_Committee_email_leak), was that really
+Wikileaks? Who knows! Never mind failing closed or open: the system
+has become so complex that we don't even know *how* we fail when we
+do. Even those in the highest positions of power seem [unable to
+protect themselves](https://www.bloomberg.com/features/2018-palantir-peter-thiel/); politics seem to have become a game of Russian
+roulette: we cock the bot, roll the secret algorithm, and see what
+dictator will shoot out.
+
+Ethics
+------
+
+All this is to build a new [Skynet][]; not [this one][] or [that
+one][], those already exist. I was able to pleasantly joke about the
+[AI takeover](https://en.wikipedia.org/wiki/AI_takeover) during breakfast with a random stranger without
+raising as much as an eyebrow: we know it will happen, oh well. I've
+skipped that track in my attendance, but multiple talks at KubeCon are
+about AI, [TensorFlow](https://en.wikipedia.org/wiki/TensorFlow) (it's open source!), self-driving cars, and
+removing humans from the equation as much as possible, as a [general
+principle](https://en.wikipedia.org/wiki/All_Watched_Over_by_Machines_of_Loving_Grace). Kubernetes is often shortened to "Kube", which I always
+think of as a reference to the [Star Trek Borg](https://en.wikipedia.org/wiki/Borg_(Star_Trek)) almighty ship,
+the "cube". This might actually make sense given that Kubernetes is an
+open source version of Google's internal software incidentally
+called... [Borg](https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/). To make such fleeting, tongue-in-cheek references
+to a totalitarian civilization is not harmless: it makes more
+acceptable the notion that AI domination is inescapable and that
+resistance truly is futile, the ultimate neo-colonial scheme.
+
+ [Skynet]: https://en.wikipedia.org/wiki/Skynet_(Terminator)
+ [this one]: https://en.wikipedia.org/wiki/SKYNET_(surveillance_program)
+ [that one]: https://en.wikipedia.org/wiki/Skynet_(satellite)
+
+> "We are the Borg. Your biological and technological distinctiveness
+> will be added to our own. Resistance is futile."
+>
+> -- The Borg
+
+[![We are the Borg](https://upload.wikimedia.org/wikipedia/en/a/a1/Picard_as_Locutus.jpg)](https://en.wikipedia.org/wiki/File:Picard_as_Locutus.jpg)
+
+The "hackers" of our age are building this machine with conscious

(Diff truncated)
article online
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index f4675f58..da54f883 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -102,7 +102,7 @@ entitlements do not specify explicit low-level mechanisms, the resulting
 image is portable to different runtimes without change. Such portability
 helps Kubernetes on non-Linux platforms do its job.
 
-Entitlements shift the responsibility for designing sandboxing
+Entitlements shift the responsibility for configuring sandboxing
 environments to image developers, but also empowers them to deliver
 security mechanisms directly to end users. Developers are the ones with
 the best knowledge about what their applications should or should not be
@@ -226,6 +226,8 @@ of the talk are available.
 \[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting
 my travel to the event.\]
 
+------------------------------------------------------------------------
+
 
 
 > *This article [first appeared][] in the [Linux Weekly News][].*

changes from corbet
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 184ce314..f4675f58 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -18,8 +18,8 @@ they are so hard to use, people often just turn the whole thing off. The
 goal of the proposal is to make those controls easier to understand and
 use; it is partly inspired by mobile apps on iOS and Android platforms,
 an idea that trickled back into Microsoft and Apple desktops. The time
-now seems ripe to improve the field of container security, which is
-in desperate need of simpler controls.
+seems ripe to improve the field of container security, which is in
+desperate need of simpler controls.
 
 The problem with container security
 -----------------------------------
@@ -36,21 +36,20 @@ of measures, some of which we have [previously
 covered](https://lwn.net/Articles/754433/). Cormack presented an
 overview of those mechanisms, including capabilities, seccomp, AppArmor,
 SELinux, namespaces, control groups — the list goes on. He showed how
-[`docker run
---help`](https://docs.docker.com/engine/reference/commandline/run/) has a
-"ridiculously large number of options"; there are around one hundred on
-my machine, with about fifteen just for security mechanisms. He said
-that "most developers don't know how to actually apply those mechanisms
-to make sure their containers are secure". In the best-case scenario,
-some people may know what the options are, but in most cases people
-don't actually understand each mechanism in detail.
+[`docker run --help`](https://docs.docker.com/engine/reference/commandline/run/)
+has a "ridiculously large number of options"; there are around one
+hundred on my machine, with about fifteen just for security mechanisms.
+He said that "most developers don't know how to actually apply those
+mechanisms to make sure their containers are secure". In the best-case
+scenario, some people may know what the options are, but in most cases
+people don't actually understand each mechanism in detail.
 
 He gave the example of capabilities; there are about forty possible
 values that can be provided for the `--cap-drop` option, each with its
 own meaning. He described some capabilities as "understandable", but
-said that others end up in overly broad boxes. Because of the kernel's
-representation, there can't be more than 64 capabilities, so a bunch of
-functionality was lumped together into `CAP_SYS_ADMIN`, he said.
+said that others end up in overly broad boxes. The kernel's data
+structure limits the system to a maximum of 64 capabilities, so a bunch
+of functionality was lumped together into `CAP_SYS_ADMIN`, he said.
 
 Cormack also talked about namespaces and seccomp. While there are fewer
 namespaces than capabilities, he said that "it's very unclear for a
@@ -78,12 +77,12 @@ Introducing entitlements
 There must be a better way. Eddequiouaq proposed this simple idea:
 "provide something humans can actually understand without diving into
 code or possibly even without reading documentation". The solution
-proposed by the Docker security team is "entitlements": the ability
-for users to choose simple permissions on the command
-line. Eddequiouaq said that application users and developers alike
-don't need to understand the low-level security mechanisms or how they
-interact within the kernel; "people don't care about that, they want
-to make sure their app is secure."
+proposed by the Docker security team is "entitlements": the ability for
+users to choose simple permissions on the command line. Eddequiouaq said
+that application users and developers alike don't need to understand the
+low-level security mechanisms or how they interact within the kernel;
+"people don't care about that, they want to make sure their app is
+secure."
 
 Entitlements divide resources into meaningful domains like "network",
 "security", or "host resources" (like devices). Behind the scenes,
@@ -91,19 +90,19 @@ Docker translates those into whatever security mechanisms are
 available.  This implies that the actual mechanism deployed will vary
 between runtimes, depending on the implementation. For example, a
 "confined" network access might mean a seccomp filter blocking all
-networking-related system calls except `socket(AF_UNIX|AF_LOCAL)` with
-dropped network capabilities. AppArmor will `deny network` on some
-platforms while SELinux would do similar enforcement on others.
+networking-related system calls except `socket(AF_UNIX|AF_LOCAL)` along
+with dropping network-related capabilities. AppArmor will `deny network`
+on some platforms while SELinux would do similar enforcement on others.
 
 Eddequiouaq said the complexity of implementing those mechanisms is the
 responsibility of platform developers. Image developers can ship
-entitlements lists alongside container images, with a regular
-`docker build`, and sign the whole bundle, with `docker trust`. Because
+entitlement lists along with container images created with a regular
+`docker build`, and sign the whole bundle with `docker trust`. Because
 entitlements do not specify explicit low-level mechanisms, the resulting
 image is portable to different runtimes without change. Such portability
 helps Kubernetes on non-Linux platforms do its job.
 
-Entitlements shift the responsibility of designing sandboxing
+Entitlements shift the responsibility for designing sandboxing
 environments to image developers, but also empowers them to deliver
 security mechanisms directly to end users. Developers are the ones with
 the best knowledge about what their applications should or should not be
@@ -135,9 +134,9 @@ platforms instead of opting out of security configurations when they get
 a "permission denied" error. Eddequiouaq said that Docker eventually
 wants to "ditch the `--privileged` flag because it is really a bad
 habit". Instead, applications should run with the least privileges they
-need. He said that "this is not the case; currently, everyone
-works with defaults that work with 95% of the applications out there."
-Those Docker defaults, he said, provide a "way too big attack surface".
+need. He said that "this is not the case; currently, everyone works with
+defaults that work with 95% of the applications out there." Those Docker
+defaults, he said, provide a "way too big attack surface".
 
 Eddequiouaq opened the door for developers to define custom entitlements
 because "it's hard to come up with a set that will cover all needs". One
@@ -168,7 +167,7 @@ are not *exactly* identical.
 Eddequiouaq said that entitlements could help share best security
 policies for a pod in Kubernetes. He proposed that such configuration
 would happen through the [`SecurityContext`
-object](https://lwn.net/SubscriberLink/754443/b25a4d2a687123b6/).
+object](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
 Another way would be an admission controller that would avoid conflicts
 between the entitlements in the image and existing `SecurityContext`
 profiles already configured in the cluster. There are two possible
@@ -224,6 +223,9 @@ A YouTube [video](https://www.youtube.com/watch?v=Jbqxsli2tRw) and
 \[PDF\]](https://schd.ws/hosted_files/kccnceu18/d1/Kubecon%20Entitlements.pdf)
 of the talk are available.
 
+\[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting
+my travel to the event.\]
+
 
 
 > *This article [first appeared][] in the [Linux Weekly News][].*
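
As an aside (not from the article itself), here is a rough sketch of the
low-level flags a user currently has to assemble by hand for a single
container, the kind of complexity the entitlements proposal wants to hide
behind names like `network.admin`; the profile paths and the image are
placeholders:

    # Sketch of the status quo: compose low-level security options manually.
    # Drop all capabilities, add back only what is needed, and load custom
    # seccomp and AppArmor profiles. Paths and image are illustrative only.
    docker run -d --read-only \
        --cap-drop ALL --cap-add NET_BIND_SERVICE \
        --security-opt seccomp=./seccomp.json \
        --security-opt apparmor=docker-default \
        nginx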

fixes from jake
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 4dc5422f..184ce314 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -16,9 +16,9 @@ containerized applications. Containers depend on a large set of
 intricate security primitives that can have weird interactions. Because
 they are so hard to use, people often just turn the whole thing off. The
 goal of the proposal is to make those controls easier to understand and
-use and is partly inspired by mobile apps on iOS and Android platforms,
+use; it is partly inspired by mobile apps on iOS and Android platforms,
 an idea that trickled back into Microsoft and Apple desktops. The time
-now seems now ripe to improve the field of container security, which is
+now seems ripe to improve the field of container security, which is
 in desperate need of simpler controls.
 
 The problem with container security
@@ -31,13 +31,13 @@ for users.
 
 [![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
 
-"Container security" is a catch phrase that actually includes all sorts
+"Container security" is a catchphrase that actually includes all sorts
 of measures, some of which we have [previously
 covered](https://lwn.net/Articles/754433/). Cormack presented an
 overview of those mechanisms, including capabilities, seccomp, AppArmor,
 SELinux, namespaces, control groups — the list goes on. He showed how
-[docker run
---help](https://docs.docker.com/engine/reference/commandline/run/) has a
+[`docker run
+--help`](https://docs.docker.com/engine/reference/commandline/run/) has a
 "ridiculously large number of options"; there are around one hundred on
 my machine, with about fifteen just for security mechanisms. He said
 that "most developers don't know how to actually apply those mechanisms
@@ -49,7 +49,7 @@ He gave the example of capabilities; there are about forty possible
 values that can be provided for the `--cap-drop` option, each with its
 own meaning. He described some capabilities as "understandable", but
 said that others end up in overly broad boxes. Because of the kernel's
-data structure there can't be more than 64 capabilities, so a bunch of
+representation, there can't be more than 64 capabilities, so a bunch of
 functionality was lumped together into `CAP_SYS_ADMIN`, he said.
 
 Cormack also talked about namespaces and seccomp. While there are fewer
@@ -67,7 +67,7 @@ policies for their application by hand, it's a real mess and makes their
 heads explode. So instead developers run their containers in
 `--privileged` mode. It works, but it disables all the nice security
 mechanisms that the container abstraction provides. This is why
-"containers do not contain", as Dan Walsh now famously
+"containers do not contain", as Dan Walsh famously
 [quipped](https://blog.docker.com/2014/07/new-dockercon-video-docker-security-renamed-from-docker-and-selinux/).
 
 Introducing entitlements
@@ -75,15 +75,15 @@ Introducing entitlements
 
 [![Nassim Eddequiouaq](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3381.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/34)
 
-There must be a better way. Eddequiouaq proposed this simple idea: "to
-provide something humans can actually understand without diving into
+There must be a better way. Eddequiouaq proposed this simple idea:
+"provide something humans can actually understand without diving into
 code or possibly even without reading documentation". The solution
-proposed by the Docker security team is "entitlements": the ability for
-users to choose simple permissions on the command line. Eddequiouaq said
-that app users and developers alike don't need to understand the
-low-level security mechanisms or how they interact within the kernel;
-"people don't care about that, they want to make sure their app is
-secure."
+proposed by the Docker security team is "entitlements": the ability
+for users to choose simple permissions on the command
+line. Eddequiouaq said that application users and developers alike
+don't need to understand the low-level security mechanisms or how they
+interact within the kernel; "people don't care about that, they want
+to make sure their app is secure."
 
 Entitlements divide resources into meaningful domains like "network",
 "security", or "host resources" (like devices). Behind the scenes,
@@ -97,7 +97,7 @@ platforms while SELinux would do similar enforcement on others.
 
 Eddequiouaq said the complexity of implementing those mechanisms is the
 responsibility of platform developers. Image developers can ship
-entitlements lists along container images, with a regular
+entitlements lists alongside container images, with a regular
 `docker build`, and sign the whole bundle, with `docker trust`. Because
 entitlements do not specify explicit low-level mechanisms, the resulting
 image is portable to different runtimes without change. Such portability
@@ -115,7 +115,7 @@ Eddequiouaq gave a demo of the community's nemesis: Docker inside Docker
 (DinD). He picked that use case because it requires a lot of privileges,
 which usually means using the dreaded `--privileged` flag. With the
 entitlements patch, he was able to run DinD with `network.admin`,
-`security.admin` and `host.devices.admin`, which *looks* like
+`security.admin`, and `host.devices.admin`, which *looks* like
 `--privileged`, but actually means some protections are still in place.
 According to Eddequiouaq, "everything works and we didn't have to
 disable all the seccomp and AppArmor profiles". He also gave a demo of
@@ -135,7 +135,7 @@ platforms instead of opting out of security configurations when they get
 a "permission denied" error. Eddequiouaq said that Docker eventually
 wants to "ditch the `--privileged` flag because it is really a bad
 habit". Instead, applications should run with the least privileges they
-need. He said that "this is not the case \[as\] currently, everyone
+need. He said that "this is not the case; currently, everyone
 works with defaults that work with 95% of the applications out there."
 Those Docker defaults, he said, provide a "way too big attack surface".
 
@@ -161,7 +161,7 @@ Kubernetes has the same usability issues as Docker so the ultimate goal
 is to get entitlements working in Kubernetes runtimes directly. Indeed,
 its `PodSecurityPolicy` maps (almost) one-to-one with the Docker
 security flags. But as we have [previously
-discussed](https://lwn.net/Articles/754443/), another challenge in
+reported](https://lwn.net/Articles/754443/), another challenge in
 Kubernetes security is that the security models of Kubernetes and Docker
 are not *exactly* identical.
 
@@ -198,7 +198,7 @@ that seccomp was such a "pain to work with to do complicated policies".
 He said that having [eBPF seccomp
 filters](https://lwn.net/Articles/747229/) would make it easier to deal
 with conflicts between policies and also mentioned the work done on the
-[Checkmate](https://lwn.net/Articles/696344/) and
+[Checmate](https://lwn.net/Articles/696344/) and
 [Landlock](https://lwn.net/Articles/703876/) security modules as
 interesting avenues to explore. It seems that none of those kernel
 mechanisms are ready for prime time, at least not to the point that
@@ -219,7 +219,7 @@ recently](https://www.wired.com/story/cryptojacking-tesla-amazon-cloud/).
 Hopefully having such easier and cleaner mechanisms will help users,
 developers, and administrators alike.
 
-A Youtube [video](https://www.youtube.com/watch?v=Jbqxsli2tRw) and
+A YouTube [video](https://www.youtube.com/watch?v=Jbqxsli2tRw) and
 [slides
 \[PDF\]](https://schd.ws/hosted_files/kccnceu18/d1/Kubecon%20Entitlements.pdf)
 of the talk are available.
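
To make the Docker-in-Docker example above concrete, this is roughly what
the "dreaded" invocation looks like today; the container name and image
tag are illustrative, and the entitlement names in the comment are the
ones quoted from the talk:

    # Status quo: DinD normally requires --privileged, which turns off most
    # of the sandboxing the platform provides.
    docker run --privileged --name dind -d docker:dind
    # The demo's point: the same need could be expressed as a few named
    # entitlements (network.admin, security.admin, host.devices.admin)
    # while keeping the default seccomp and AppArmor profiles in place.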

clarify
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index f09c2dce..4dc5422f 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-05-22T00:00:00+0000"]]
-[[!meta updated="2018-05-22T18:21:40-0400"]]
+[[!meta updated="2018-05-23T09:32:11-0400"]]
 
 [[!toc levels=2]]
 
@@ -95,13 +95,13 @@ networking-related system calls except `socket(AF_UNIX|AF_LOCAL)` with
 dropped network capabilities. AppArmor will `deny network` on some
 platforms while SELinux would do similar enforcement on others.
 
-Eddequiouaq said the complexity of implementing those mechanisms is
-the responsibility of platform developers. Image developers can ship
-entitlements lists along container images, with a regular `docker
-build`, and sign the whole bundle, with `docker trust`. Because
-entitlements do not specify explicit low-level mechanisms, the
-resulting image is portable to different runtimes without change. Such
-portability helps Kubernetes on non-Linux platforms do its job.
+Eddequiouaq said the complexity of implementing those mechanisms is the
+responsibility of platform developers. Image developers can ship
+entitlements lists along container images, with a regular
+`docker build`, and sign the whole bundle, with `docker trust`. Because
+entitlements do not specify explicit low-level mechanisms, the resulting
+image is portable to different runtimes without change. Such portability
+helps Kubernetes on non-Linux platforms do its job.
 
 Entitlements shift the responsibility of designing sandboxing
 environments to image developers, but also empowers them to deliver
@@ -111,22 +111,20 @@ doing. Image end-users, in turn, benefit from verifiable security
 properties delivered by the bundles and the expertise of image
 developers when they `docker pull` and `run` those images.
 
-Eddequiouaq gave a demo of the community's nemesis: Docker inside
-Docker (DinD). He picked that use case because it requires a lot of
-privileges, which usually means using the dreaded `--privileged`
-flag. With the entitlements patch, he was able to run DinD with
-`network.admin`, `security.admin` and `host.devices.admin`, which
-*looks* like `--privileged`, but actually means some protections are
-still in place.  According to Eddequiouaq, "everything works and we
-didn't have to disable all the seccomp and AppArmor profiles". He also
-gave a demo of how to build an image and demonstrated how `docker
-inspect` shows the entitlements bundled inside the image. With such an
-image, `docker run` starts a DinD image without any special flags,
-which means that suddenly images can elevate their own privileges
-without the caller specifying anything on the Docker command
-line. This means that runtimes will require a way to specify they
-trust the content publisher before the extra permissions will be
-granted to the image.
+Eddequiouaq gave a demo of the community's nemesis: Docker inside Docker
+(DinD). He picked that use case because it requires a lot of privileges,
+which usually means using the dreaded `--privileged` flag. With the
+entitlements patch, he was able to run DinD with `network.admin`,
+`security.admin` and `host.devices.admin`, which *looks* like
+`--privileged`, but actually means some protections are still in place.
+According to Eddequiouaq, "everything works and we didn't have to
+disable all the seccomp and AppArmor profiles". He also gave a demo of
+how to build an image and demonstrated how `docker inspect` shows the
+entitlements bundled inside the image. With such an image, `docker run`
+starts a DinD image without any special flags. That requires a way to
+trust the content publisher because suddenly images can elevate their
+own privileges without the caller specifying anything on the Docker
+command line.
 
 Goals and future
 ----------------
@@ -169,19 +167,20 @@ are not *exactly* identical.
 
 Eddequiouaq said that entitlements could help share best security
 policies for a pod in Kubernetes. He proposed that such configuration
-would happen through the `SecurityContext` parameter. Another way would
-be an admission controller that would avoid conflicts between the
-entitlements in the image and existing `SecurityContext` profiles
-already configured in the cluster. There are two possible approaches in
-that case: the rules from the entitlements could expand the existing
-configuration or restrict it where the existing configuration becomes a
-default. The problem here is that the pod's `SecurityContext` already
-provides a widely deployed way to configure security mechanisms, even if
-it's not portable or easy to share, so the proposal shouldn't break
-existing configurations. There is work in progress in Docker to allow
-inheriting entitlements within a Dockerfile. Eddequiouaq proposed that
-Kubernetes should implement a simple mechanism to inherit entitlements
-from images in the admission controller.
+would happen through the [`SecurityContext`
+object](https://lwn.net/SubscriberLink/754443/b25a4d2a687123b6/).
+Another way would be an admission controller that would avoid conflicts
+between the entitlements in the image and existing `SecurityContext`
+profiles already configured in the cluster. There are two possible
+approaches in that case: the rules from the entitlements could expand
+the existing configuration or restrict it where the existing
+configuration becomes a default. The problem here is that the pod's
+`SecurityContext` already provides a widely deployed way to configure
+security mechanisms, even if it's not portable or easy to share, so the
+proposal shouldn't break existing configurations. There is work in
+progress in Docker to allow inheriting entitlements within a Dockerfile.
+Eddequiouaq proposed that Kubernetes should implement a simple mechanism
+to inherit entitlements from images in the admission controller.
 
 The Docker security team wants to create a "widely adopted standard"
 supported by Docker swarm, Kubernetes, or any container scheduler. But
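
For readers less familiar with the Kubernetes side discussed above, here
is a minimal sketch (not from the talk) of the kind of hand-written
`securityContext` settings that image-shipped entitlements would let
operators avoid writing themselves; the pod name and image are
placeholders:

    # Sketch only: low-level, per-container security settings in a pod spec.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: entitlements-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
    EOF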

progress update for backups
diff --git a/services/backup.mdwn b/services/backup.mdwn
index 4dc06209..6b878b45 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -303,7 +303,8 @@ documentation.
  3. resync everything again (in progress)
 
  4. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/) (blocker: [error
-    while setting up gcrypt remote](https://git-annex.branchable.com/bugs/gcrypt_repository_not_found/))
+    while setting up gcrypt remote](https://git-annex.branchable.com/bugs/gcrypt_repository_not_found/), fixed by removing the
+    `push.sign` option, sent patch to spwhitton, so done)
 
  5. restricted shell, see [git-annex-shell](https://git-annex.branchable.com/git-annex-shell/):
 
@@ -373,7 +374,8 @@ Parallel transfers also don't have progress information. It really
 feels like encryption is a second-class citizen here. I also feel it
 will be rather difficult to reconstruct this repository from scratch,
 and an attempt will need to be made before we feel confident of our
-restore capacities.
+restore capacities. Yet the restore was tested below and seems to work
+so we're going ahead with the approach.
 
 ### Encrypted repos restore procedure
 

how to refresh
diff --git a/services/mail/syncmaildir.mdwn b/services/mail/syncmaildir.mdwn
index 4177b2d1..31fbcb3b 100644
--- a/services/mail/syncmaildir.mdwn
+++ b/services/mail/syncmaildir.mdwn
@@ -367,6 +367,9 @@ good.
 It would be nice if `notmuch insert` could be used to deliver the
 emails locally instead of having to rescan the whole database.
 
+How do we refresh a session? In OfflineIMAP, we used to `SIGUSR`, but
+it's unclear how that works in SMD. Maybe [through the FIFOs](https://github.com/gares/syncmaildir/issues/9)?
+
 Issue summary
 =============
 
@@ -380,5 +383,8 @@ above, in chronological order:
  * [should ignore symlinks in "mailboxes"](https://github.com/gares/syncmaildir/issues/5)
  * [smd-pull/push --dry-run way too verbose](https://github.com/gares/syncmaildir/issues/6)
  * [Maildir/INBOX exception](https://github.com/gares/syncmaildir/issues/7)
+ * [full resync fails](https://github.com/gares/syncmaildir/issues/8) (closed, PEBKAC)
+ * [how do FIFOs work?](https://github.com/gares/syncmaildir/issues/9)
 
-Trivia: all of the 7 first issues reported against SMD were from me.
+Trivia: at the time of writing, all of the currently reported issues
+against SMD are the ones above, reported by yours truly.
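
For reference, the OfflineIMAP trick alluded to above is just a signal
sent to the running process; whether SMD has an equivalent is exactly the
open question tracked in the FIFO issue:

    # OfflineIMAP: SIGUSR1 forces an immediate resync of all accounts in a
    # running instance. No known SMD equivalent at the time of writing.
    pkill -USR1 -x offlineimap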

nicer page name, as that is not just about migrating anymore
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 4177b2d1..f78a3f51 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -1,384 +1,2 @@
-[[!meta title="syncmaildir (SMD) configuration"]]
-
-I tried to follow the official procedure to migrate from OfflienIMAP
-to SMD. I hit some difficulties, which I documented in upstream
-issues. What follows is the detailed test procedure I followed to test
-the synchronization and notes about the process.
-
-[[!toc]]
-
-OfflineIMAP migration
-=====================
-
-This procedure was attempted to migrate my OfflineIMAP mailbox to SMD,
-but ultimately failed for reasons explained below.
-
- 1. run `smd-check-conf` to create a template configuration in
-    `.smd/config.default` and configure it with:
-
-        SERVERNAME=smd-server-anarcat
-        CLIENTNAME=curie-anarcat
-        MAILBOX_LOCAL=Maildir-smd
-        MAILBOX_REMOTE=Maildir-smd
-        TRANSLATOR_RL="smd-translate -m oimap-dovecot -d RL default"
-        TRANSLATOR_LR="smd-translate -m oimap-dovecot -d LR default"
-        EXCLUDE="$MAILBOX_LOCAL/.notmuch/hooks/* $MAILBOX_LOCAL/.notmuch/xapian/*"
-
- 2. authenticate remote server:
-
-        ssh imap.anarc.at true
-
- 3. created a `.ssh/config` entry
-
-        # wrapper for smd
-        Host smd-server-anarcat
-            Hostname imap.anarc.at
-            User anarcat
-            BatchMode yes
-            IdentitiesOnly yes
-            Compression yes
-
-    This will be useful later to configure the restricted shell
-    account. A quick overview of those options:
-    
-     * `Host`: a unique alias that is unlikely to be reused outside of
-       that configuration
-     * `Hostname`: make sure we connect to the right host regardless
-       of the alias defined in `Host`
-     * `User`: same, but for the user
-     * `BatchMode`: do not prompt so fails correctly if public key is
-       missing
-     * `IdentitiesOnly`: same, but do not look for external crypto
-       tokens like a Yubikey
-     * `Compression`: SMD does not do compression by default, so
-       delegate that to `SSH`
-
- 4. create the test maildirs, takes about 2 minutes, on both the
-    client and the server:
- 
-        server$ cp -a Maildir/ Maildir-smd/
-        client$ cp -a Maildir/Anarcat/ Maildir-smd/
-
- 5. rename the `INBOX` folder on the client. the problem here is that
-     the `INBOX` folder exists locally (thanks to offlineimap) and not
-     remotely (thanks to dovecot). this was reported in [bug #7](https://github.com/gares/syncmaildir/issues/7)
-     and it seems the workaround might be, on the client:
-     
-        client$ mv Maildir-smd/INBOX/{cur,new,tmp} Maildir-smd/ && rmdir Maildir-smd/INBOX/
-
- 6. run `smd-check-conf` repeatedly until it stops complaining and
-    looks sane. steps taken to cleanup remote directory:
-
-    * removed top-level stray folders
-    * moved out the Koumbit subdirectory that I couldn't make smd
-      ignore. nothing seemed to work: not only did the folder not get
-      ignored, the translation layer would fail to convert back and
-      forth, because this was not really a Dovecot folder. i tried:
-
-        EXCLUDE_REMOTE='Maildir/Koumbit Maildir/Koumbit* Maildir/Koumbit/* Maildir/Koumbit.INBOX.Archives.2012/ Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*'
-
-      filed as [bug #4](https://github.com/gares/syncmaildir/issues/4). The `.notmuch` folder ignores are
-      necessary because smd crashes on symlinks ([bug #5](https://github.com/gares/syncmaildir/issues/5)).
-
-    * the above fails with remote folder as `Maildir-smd`. just
-      removed the folders for now, but they are important and
-      shouldn't be completely destroyed!
-      
-          server$ mkdir Maildir-smd-notmuch && mv Maildir-smd/.notmuch/{hooks,xapian,muchsync} Maildir-smd-notmuch
-
- 7. when the config looks sane, the next step is to [convert the
-    folder away from OfflineIMAP idiosyncracies](https://github.com/gares/syncmaildir#migration-from-offlineimap), particularly
-    removing the `X-OfflineIMAP` header. there I have found that the
-    suggested commandline:
-
-        find Mail -type f -exec sed -i '/^X-OfflineIMAP/d' {} \;
-
-    was problematic: it could [corrupt email bodies](https://github.com/gares/syncmaildir/issues/1) as it works on
-    complete messages, not just the headers. I wrote a generic [header
-    stripping script](https://gitlab.com/anarcat/scripts/blob/master/strip_header) to workaround that issue. to call it, use
-    this, which takes about three minutes, on both local and remote
-    servers (because yes, the remote server also has OfflineIMAP
-    headers somehow):
-
-        server$ ~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log
-        client$ ~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log
-
-    contributed upstream as [PR #3](https://github.com/gares/syncmaildir/pull/3).
-
- 8. then the files need to be renamed to please SMD, using, which
-    takes about 3 minutes and generates ~140k renames (basically all
-    files get renamed):
-
-        client$ smd-uniform-names -v
-
- 9. actually rename the files, using the script created, takes about 2
-    minutes:
-
-        client$ sh -x smd-rename.sh 2>&1 | pv -l -s $(wc -l smd-rename.sh | cut -f1 -d ' ') > log-rename
-
- 10. first dry-run pull, takes about 4 minutes. about 80k emails are
-     missing, probably because SMD considers all folders, even if not
-     subscribed ([bug #2](https://github.com/gares/syncmaildir/issues/2)). all filenames to transfer are also
-     dumped to stdout, ouch! ([bug #6](https://github.com/gares/syncmaildir/issues/6)). at least logs are in
-     `~/.smd/log/client.default.log`
-
-         client$ smd-pull --dry-run
-
-     update: after a full rerun, it turns out only 9k emails are
-     missing. after running the strip-header on both sides, only
-     ~600. but with s/^cp/^mv/, back to 1400. grml. after *another*
-     full rerun, the numbers are 401 new mails on pull.
-     
-     to get the count:
-     
-         grep stats::new-mails  /home/anarcat/.smd/log/client.default.log
-
-     ... grouped per folder:
-     
-         grep stats::mail-transferre /home/anarcat/.smd/log/client.default.log | sed -e 's/ , /\n/g' | sed 's#/cur/.*##' |  sort | uniq -c | sort -n
-
- 11. first dry-run push - problem: 15k mails missing from remote?
-     maybe because we're in dry-run mode? need to backup remote and
-     test again. nope. there are genuine diffs there, e.g. git-annex
-     folder totally different. maybe not subscribed?
-
-        client$ smd-push --dry-run
-    
-     result: 591 *more* duplicate emails.
- 
- 12. pull again: 490 mails deleted? push again, no change. wtf...
-
- 13. create email remotely and locally, go to 10
-
- 14. run `smd-loop` and hook into startup scripts? (TODO)
-
- 15. create restricted shell (TODO)
- 
- 16. call `notmuch new` in `~/.smd/hooks/post-pull.d/` (TODO)
-
-Clearing the slate can be done by running this command on both ends:
-
-    \rm -r .smd/workarea/ .smd/*.db.txt Maildir-smd/
-
-The migration did not work very well: as documented above, lots of
-problems with exclude patterns, weird error messages, large dumps on
-the console, and scripts I had to rewrite. I also end up with
-duplicate emails in the process, something I generally try to
-avoid. Even if it's about 500 emails over ~200 000, it's still
-annoying.
-
-I found it was better to start off with a clean slate and just copy
-all files as is to start with. The problem then of course is the
-directory layout is completely changed and is now incompatible with
-OfflineIMAP forever. But that is inevitable: the second we rename
-files and unmangle the headers to remove OfflineIMAP specific stuff,
-the folder cannot be reused, so it's unclear what the benefit of
-migrating from OfflineIMAP is over just using rsync to have a clean
-mirror to start with, avoiding all the messy rewrite rules logic.
-
-Full synchronization
-====================
-
-This is an alternative procedure from the above that just copies the
-files over and starts from a clean slate. Considering we lose
-compatibility with OfflineIMAP anyways, it seems like a much simpler
-procedure at no extra cost (except we need to copy files over).
-
-The layout of the files *will* change to something a little more
-obscure which requires some fixes in notmuch tagging scripts and my
-notmuch-emacs config, but that will actually simplify things as we
-reduce the deltas between machines. It also requires a rescan of
-notmuch, which takes a long time (30-60 minutes), but that's a
-one-time cost only without data loss.
-
- 1. configure SSH using the snippet from step 3 in the above procedure
-    and confirm logging in works:

(Diff truncated)
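
(The migration notes removed above point out that the upstream-suggested
sed command can corrupt message bodies because it edits whole files. A
header-only variant, restricting the deletion to lines before the first
blank line, is sketched below; it is only an illustration, not the header
stripping script mentioned in the notes, and the maildir path is a
placeholder.)

    # Delete X-OfflineIMAP headers only within the header block (before the
    # first empty line), leaving message bodies untouched.
    find Maildir-smd -type f -print0 |
        xargs -0 sed -i '1,/^$/{/^X-OfflineIMAP/d}'
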
finalize smd documentation
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index aaa2a891..4177b2d1 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -1,4 +1,4 @@
-[[!meta title="Migrating from OfflineIMAP to syncmaildir (SMD)"]]
+[[!meta title="syncmaildir (SMD) configuration"]]
 
 I tried to follow the official procedure to migrate from OfflineIMAP
 to SMD. I hit some difficulties, which I documented in upstream
@@ -7,8 +7,11 @@ the synchronization and notes about the process.
 
 [[!toc]]
 
-Migration procedure
-===================
+OfflineIMAP migration
+=====================
+
+This procedure was attempted to migrate my OfflineIMAP mailbox to SMD,
+but ultimately failed for reasons explained below.
 
  1. run `smd-check-conf` to create a template configuration in
     `.smd/config.default` and configure it with:
@@ -29,13 +32,26 @@ Migration procedure
 
         # wrapper for smd
         Host smd-server-anarcat
-            BatchMode yes
-            Compression yes
             Hostname imap.anarc.at
             User anarcat
+            BatchMode yes
+            IdentitiesOnly yes
+            Compression yes
 
     This will be useful later to configure the restricted shell
-    account.
+    account. A quick overview of those options:
+    
+     * `Host`: a unique alias that is unlikely to be reused outside of
+       that configuration
+     * `Hostname`: make sure we connect to the right host regardless
+       of the alias defined in `Host`
+     * `User`: same, but for the user
+     * `BatchMode`: do not prompt for a password, so the connection
+       fails cleanly if the public key is missing
+     * `IdentitiesOnly`: only use the configured identity files, and do
+       not look for external crypto tokens like a Yubikey
+     * `Compression`: SMD does not do compression by default, so
+       delegate that to `SSH`
 
  4. create the test maildirs, takes about 2 minutes, on both the
     client and the server:
@@ -144,41 +160,24 @@ Clearing the slate can be done by running this command on both ends:
 
     \rm -r .smd/workarea/ .smd/*.db.txt Maildir-smd/
 
-Observations
-============
-
-The migration was not exactly perfect: as documented above, lots of
+The migration did not work very well: as documented above, lots of
 problems with exclude patterns, weird error messages, large dumps on
 the console, and scripts I had to rewrite. I also end up with
 duplicate emails in the process, something I generally try to
 avoid. Even if it's about 500 emails over ~200 000, it's still
 annoying.
 
-It might be better to start off with a clean slate and just rsync all
-files. The problem then of course is the directory layout is
-completely changed and is now incompatible with OfflineIMAP
-forever. Of course, this is inevitable: the second we rename files and
-unmangle the headers to remove OfflineIMAP specific stuff, the folder
-cannot be reused, so it's unclear what the benefit of migrating from
-OfflineIMAP is over just using rsync to have a clean mirror to start
-with, avoiding all the messy rewrite rules logic.
-
-Open questions
-==============
-
-How should SMD be started in a session? User level systemd service?
-There is an "applet" that can be used, but that could be annoying. How
-else should errors be reported? It does look simple enough and
-non-intrusive: by default it doesn't notify for new mail, which is
-good.
-
-How should a clean `rsync` based bootstrap be performed?
-
-It would be nice if `notmuch insert` could be used to deliver the
-emails locally instead of having to rescan the whole database.
+I found it was better to start off with a clean slate and just copy
+all files as is to start with. The problem then of course is the
+directory layout is completely changed and is now incompatible with
+OfflineIMAP forever. But that is inevitable: the second we rename
+files and unmangle the headers to remove OfflineIMAP specific stuff,
+the folder cannot be reused, so it's unclear what the benefit of
+migrating from OfflineIMAP is over just using rsync to have a clean
+mirror to start with, avoiding all the messy rewrite rules logic.
 
-Clean-slate procedure
-=====================
+Full synchronization
+====================
 
 This is an alternative procedure from the above that just copies the
 files over and starts from a clean slate. Considering we lose
@@ -186,17 +185,22 @@ compatibility with OfflineIMAP anyways, it seems like a much simpler
 procedure at no extra cost (except we need to copy files over).
 
 The layout of the files *will* change to something a little more
-obscure which will require some fixes in notmuch tagging scripts, but
-that will actually simplify things as we reduce the deltas between the
-machines.
+obscure which requires some fixes in notmuch tagging scripts and my
+notmuch-emacs config, but that will actually simplify things as we
+reduce the deltas between machines. It also requires a rescan of
+notmuch, which takes a long time (30-60 minutes), but that's a
+one-time cost only without data loss.
 
- 1. run step 2 and 3 in first procedure (SSH configuration)
+ 1. configure SSH using the snippet from step 3 in the above procedure
+    and confirm logging in works:
 
- 2. create a copy on remote server:
+        ssh smd-server-anarcat
+
+ 2. create a copy on remote server, takes about two minutes:
 
         ssh anarc.at cp -a Maildir Maildir-smd
 
- 3. strip offlineimap headers on remote:
+ 3. strip offlineimap headers on remote, takes about 3-4 minutes?
  
         ssh anarc.at "~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log"
 
@@ -204,14 +208,15 @@ machines.
         ssh anarc.at "tar czf - Maildir-smd/" | pv -s 7G | tar xfz -
  
- 5. cleanup cruft:
+ 5. cleanup IMAP server cruft copied from the remote folder:
  
         ( cd Maildir-smd
         find \( -name :list -o -name courierimapkeywords -o -name courierimapuiddb -o -name courierimapacl \) -a -delete
         find \( -name maildirfolder -o -name 'dovecot.index*' -o -name 'dovecot-uidlist' -o -name 'dovecot-keywords' \) -a -delete
         \rm dovecot-uidvalidity* dovecot.mailbox.log subscriptions )
 
- 6. replace notmuch metadata and do a backup
+ 6. clear out notmuch metadata from remote and keep a dump of our
+    current tags
  
         ( cd Maildir-smd
         \rm .notmuch/dump-201*
@@ -219,21 +224,24 @@ machines.
         cp -a Maildir/.notmuch/{xapian,hooks} Maildir-smd/.notmuch/
         notmuch dump > Maildir-smd/.notmuch/dump
 
- 7. we go live. do one last offlienimap run, stop offlineimap, stop
+ 7. *we go live*. do one last offlineimap run, stop offlineimap, stop
     sending email locally, stop whatever writes to `~/Maildir/`. on
     the server, this means stopping dovecot and postfix as well:
     
-        sudo systemctl stop dovecot postfix
-        mv Maildir Maildir.orig && mv Maildir-smd Maildir
+        killall -HUP offlineimap ; sleep 60 ; killall offlineimap
+        ssh anarc.at sudo systemctl stop dovecot postfix
+        ssh anarc.at "mv Maildir Maildir.orig && mv Maildir-smd Maildir"
 
- 8. put the right folder in place, change the path to the offlineimap
-    folder in `.offlineimaprc` in case it gets started by mistake.
+ 8. swap the old OfflineIMAP folder with the new folder.
  
         mv Maildir Maildir-offlineimap
         mv Maildir-smd Maildir
 
+    also change the path to the OfflineIMAP folder in `.offlineimaprc`
+    in case it gets started by mistake.
+
  9. change folders to point to real folders, change the translators,
-    and fix the exclude patterns:
+    and fix the exclude patterns in `~/.smd/config.default`:
  
         SERVERNAME=smd-server-anarcat
         CLIENTNAME=curie-anarcat
@@ -250,7 +258,10 @@ machines.
      smd on the `Maildir-smd` folder directly, before the rename. it
      might have been keeping state in `~/.smd/workarea` that was
      confusing things so instead we made this direct procedure. the
-     problem was reported upstream in [bug #8](https://github.com/gares/syncmaildir/issues/8).
+     problem was reported upstream in [bug
+     #8](https://github.com/gares/syncmaildir/issues/8), but it turns
+     out the bug was my mistake, as I wasn't renaming the folder on
+     the server side.
 
  11. test push, same:
  
@@ -275,15 +286,24 @@ machines.
  14. send myself an email, which should create an email locally and
      remotely, run pull/push again

(Diff truncated)
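
The restricted shell account mentioned above is still a TODO. A
conventional way to build it is a forced command in the server's
`authorized_keys`; the sketch below is only an assumption of how that
could look (the wrapper path, key comment and the exact `smd-server`
invocation are guesses, so verify with `smd-pull --dry-run` against the
restricted account before relying on it):

    #!/bin/sh
    # hypothetical ~/bin/smd-shell on the server, referenced from
    # ~/.ssh/authorized_keys with something like:
    #   command="bin/smd-shell",no-port-forwarding,no-pty ssh-ed25519 AAAA... smd@curie
    # only let that key run smd-server, refuse anything else
    case "$SSH_ORIGINAL_COMMAND" in
        smd-server*) exec $SSH_ORIGINAL_COMMAND ;;
        *) echo "smd-shell: refusing: $SSH_ORIGINAL_COMMAND" >&2; exit 1 ;;
    esac
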
more issues identified by corbet
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index cb615cec..f09c2dce 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -95,13 +95,13 @@ networking-related system calls except `socket(AF_UNIX|AF_LOCAL)` with
 dropped network capabilities. AppArmor will `deny network` on some
 platforms while SELinux would do similar enforcement on others.
 
-Eddequiouaq said the complexity of those mechanisms is the
-responsibility of platform developers. Image developers can ship
-entitlements lists along container images, with a regular
-`docker build`, and sign the whole bundle, with `docker trust`. Because
-entitlements do not specify explicit low-level mechanisms, the resulting
-image is portable to different runtimes without change. Such portability
-helps Kubernetes on non-Linux platforms do its job.
+Eddequiouaq said the complexity of implementing those mechanisms is
+the responsibility of platform developers. Image developers can ship
+entitlements lists along container images, with a regular `docker
+build`, and sign the whole bundle, with `docker trust`. Because
+entitlements do not specify explicit low-level mechanisms, the
+resulting image is portable to different runtimes without change. Such
+portability helps Kubernetes on non-Linux platforms do its job.
 
 Entitlements shift the responsibility of designing sandboxing
 environments to image developers, but also empowers them to deliver
@@ -111,20 +111,22 @@ doing. Image end-users, in turn, benefit from verifiable security
 properties delivered by the bundles and the expertise of image
 developers when they `docker pull` and `run` those images.
 
-Eddequiouaq gave a demo of the community's nemesis: Docker inside Docker
-(DinD). He picked that use case because it requires a lot of privileges,
-which usually means using the dreaded `--privileged` flag. With the
-entitlements patch, he was able to run DinD with `network.admin`,
-`security.admin` and `host.devices.admin`, which *looks* like
-`--privileged`, but actually means some protections are still in place.
-According to Eddequiouaq, "everything works and we didn't have to
-disable all the seccomp and AppArmor profiles". He also gave a demo of
-how to build an image and demonstrated how `docker inspect` shows the
-entitlements bundled inside the image. With such an image, `docker run`
-starts a DinD image without any special flags. That requires a way to
-trust the content publisher because suddenly images can elevate their
-own privileges without the caller specifying anything on the Docker
-command line.
+Eddequiouaq gave a demo of the community's nemesis: Docker inside
+Docker (DinD). He picked that use case because it requires a lot of
+privileges, which usually means using the dreaded `--privileged`
+flag. With the entitlements patch, he was able to run DinD with
+`network.admin`, `security.admin` and `host.devices.admin`, which
+*looks* like `--privileged`, but actually means some protections are
+still in place.  According to Eddequiouaq, "everything works and we
+didn't have to disable all the seccomp and AppArmor profiles". He also
+gave a demo of how to build an image and demonstrated how `docker
+inspect` shows the entitlements bundled inside the image. With such an
+image, `docker run` starts a DinD image without any special flags,
+which means that images can suddenly elevate their own privileges
+without the caller specifying anything on the Docker command
+line. Runtimes will therefore require a way to specify that the
+content publisher is trusted before the extra permissions are
+granted to the image.
 
 Goals and future
 ----------------

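As an aside to the article above (this example is mine, not from the
talk, and uses only standard Docker flags; `my-profile.json` and the
`nginx` image are placeholders), the status quo it criticizes looks
roughly like this on the command line, with every mechanism wired up by
hand:

    # lock a container down manually: drop every capability, add back the
    # one the application needs, load a hand-written seccomp profile and
    # refuse privilege escalation; entitlements would collapse all of this
    # into a couple of high-level labels like "network.admin"
    docker run --rm \
        --cap-drop ALL --cap-add NET_BIND_SERVICE \
        --security-opt seccomp=my-profile.json \
        --security-opt no-new-privileges \
        --read-only \
        nginx
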
entitlements in progress
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 80fc1818..cb615cec 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -1,63 +1,74 @@
-Entitlements: Understandable Container Security Controls
-========================================================
-
-During [KubeCon + CloudNativeCon Europe 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/), Justin Cormack and
-Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level) simpler security parameters in
-containers ecosystems. Containers, as much as they [exist at all
-within the kernel](https://lwn.net/Articles/740621/) depend on a large set of intricate security
-primitives that can have weird interactions. Because they are so hard
-to use, people often just turn the whole thing off. The goal of the
-proposal is to make those controls easier to understand and use and is
-partly inspired by mobile apps on iOS and Android platforms, an idea
-that trickled back in Microsoft and Apple desktops. The time now seems
-now ripe for improving the field of container security that
-desperately needs simpler controls.
-
-[![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
+[[!meta title="Easier container security with entitlements"]]
+\[LWN subscriber-only content\]
+-------------------------------
+
+[[!meta date="2018-05-22T00:00:00+0000"]]
+[[!meta updated="2018-05-22T18:21:40-0400"]]
+
+[[!toc levels=2]]
+
+During [KubeCon + CloudNativeCon Europe
+2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/),
+Justin Cormack and Nassim Eddequiouaq
+[presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level)
+a proposal to simplify the setting of security parameters for
+containerized applications. Containers depend on a large set of
+intricate security primitives that can have weird interactions. Because
+they are so hard to use, people often just turn the whole thing off. The
+goal of the proposal is to make those controls easier to understand and
+use and is partly inspired by mobile apps on iOS and Android platforms,
+an idea that trickled back into Microsoft and Apple desktops. The time
+now seems ripe to improve the field of container security, which is
+in desperate need of simpler controls.
 
 The problem with container security
 -----------------------------------
 
 Cormack first stated that container security is too complicated. His
 slides stated bluntly that "unusable security is not security" and he
-pleaded for simpler container security mechanisms with clear
-guarantees for users.
-
-"Container security" is a catch phrase that actually includes all
-sorts of measures, some of which we have [previously
-covered](https://lwn.net/Articles/754433/). Cormack presented an overview of those mechanisms:
-capabilities, seccomp, AppArmor, SELinux, namespaces, cgroups, the
-list goes on. He showed how [docker run --help](https://docs.docker.com/engine/reference/commandline/run/) has a "ridiculously
-large number of options"; a hundred, on my machine, with about fifteen
-just for security mechanisms. He said that "most developers don't know
-how to actually apply those mechanisms to make sure their containers
-are secure". In the best-case scenario, some people may know what the
-options are, but in most cases people don't actually understand each
-mechanism in detail.
+pleaded for simpler container security mechanisms with clear guarantees
+for users.
+
+[![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
+
+"Container security" is a catch phrase that actually includes all sorts
+of measures, some of which we have [previously
+covered](https://lwn.net/Articles/754433/). Cormack presented an
+overview of those mechanisms, including capabilities, seccomp, AppArmor,
+SELinux, namespaces, control groups — the list goes on. He showed how
+[docker run
+--help](https://docs.docker.com/engine/reference/commandline/run/) has a
+"ridiculously large number of options"; there are around one hundred on
+my machine, with about fifteen just for security mechanisms. He said
+that "most developers don't know how to actually apply those mechanisms
+to make sure their containers are secure". In the best-case scenario,
+some people may know what the options are, but in most cases people
+don't actually understand each mechanism in detail.
 
 He gave the example of capabilities; there are about forty possible
-values, just for that one `--cap-drop` option, each with its own
-meaning. He described some capabilities as "understandable", but said
-that others end up in overly broad boxes. Because of the kernel's data
-structure, there can't be more than 64 capabilities, so a bunch of
-functionality was lumped together into `CAP_SYS_ADMIN`.
-
-Cormack also talked about namespaces and seccomp. While there are
-fewer namespaces than capabilities, he said that "it's very unclear
-for a general user what their security properties are". For example,
-"some combinations of capabilities and namespaces will let you escape
-from a container, and other ones don't". He also described seccomp as
-a "long JSON file" as that's the way Kubernetes configures it. Even
-though he said those files could "usefully be even more complicated"
-and said that the files are "very difficult to write".
-
-Cormack stopped his enumeration there, but the same applies to the
-other mechanisms. He said that while developers could sit down and
-write those policies for their application by hand, it's a real mess
-and makes their heads explode. So instead developers run their
-containers in `--privileged` mode. It works, but it disables all the
-nice security mechanisms containers provides. This is why "containers
-do not contain", as Dan Walsh now famously [quipped](https://blog.docker.com/2014/07/new-dockercon-video-docker-security-renamed-from-docker-and-selinux/).
+values that can be provided for the `--cap-drop` option, each with its
+own meaning. He described some capabilities as "understandable", but
+said that others end up in overly broad boxes. Because of the kernel's
+data structure, there can't be more than 64 capabilities, so a bunch of
+functionality was lumped together into `CAP_SYS_ADMIN`, he said.
+
+Cormack also talked about namespaces and seccomp. While there are fewer
+namespaces than capabilities, he said that "it's very unclear for a
+general user what their security properties are". For example, "some
+combinations of capabilities and namespaces will let you escape from a
+container, and other ones don't". He also described seccomp as a "long
+JSON file" as that's the way Kubernetes configures it. Even though he
+said those files could "usefully be even more complicated" and said that
+the files are "very difficult to write".
+
+Cormack stopped his enumeration there, but the same applies to the other
+mechanisms. He said that while developers could sit down and write those
+policies for their application by hand, it's a real mess and makes their
+heads explode. So instead developers run their containers in
+`--privileged` mode. It works, but it disables all the nice security
+mechanisms that the container abstraction provides. This is why
+"containers do not contain", as Dan Walsh now famously
+[quipped](https://blog.docker.com/2014/07/new-dockercon-video-docker-security-renamed-from-docker-and-selinux/).
 
 Introducing entitlements
 ------------------------
@@ -67,76 +78,73 @@ Introducing entitlements
 There must be a better way. Eddequiouaq proposed this simple idea: "to
 provide something humans can actually understand without diving into
 code or possibly even without reading documentation". The solution
-proposed by the Docker security team is "entitlements": the ability
-for users to choose simple permissions on the command line.
-Eddequiouaq said that app users and developers alike don't need to
-understand the low-level security mechanisms or how they interact
-within the kernel; "people don't care about that, they want to make
-sure their app is secure."
+proposed by the Docker security team is "entitlements": the ability for
+users to choose simple permissions on the command line. Eddequiouaq said
+that app users and developers alike don't need to understand the
+low-level security mechanisms or how they interact within the kernel;
+"people don't care about that, they want to make sure their app is
+secure."
 
 Entitlements divide resources into meaningful domains like "network",
 "security", or "host resources" (like devices). Behind the scenes,
-Docker translates those into whatever security mechanisms that is
-available. This implies that the actual mechanism deployed will vary
-between runtimes, depending on the implementation.  For example, a
+Docker translates those into whatever security mechanisms are
+available.  This implies that the actual mechanism deployed will vary
+between runtimes, depending on the implementation. For example, a
 "confined" network access might mean a seccomp filter blocking all
-networking-related system calls except `socket(AF_UNIX | AF_LOCAL)`
-with dropped network capabilities. AppArmor will `deny network` on
-some platforms while SELinux would do similar enforcement on others.
+networking-related system calls except `socket(AF_UNIX|AF_LOCAL)` with
+dropped network capabilities. AppArmor will `deny network` on some
+platforms while SELinux would do similar enforcement on others.
 
 Eddequiouaq said the complexity of those mechanisms is the
 responsibility of platform developers. Image developers can ship
-entitlements lists along container images, with a regular `docker
-build`, and sign the whole bundle, with `docker trust`. Because
-entitlement do not specify explicit low-level mechanisms, the
-resulting image is portable to different runtimes without change. Such
-portability helps Kubernetes non-Linux platforms do their job. Indeed,
-the [previous discussion on the topic]([Tim Allclair's presentation](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google)) pointed out how this would
-be useful to the [SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows) people.
+entitlements lists along container images, with a regular
+`docker build`, and sign the whole bundle, with `docker trust`. Because
+entitlements do not specify explicit low-level mechanisms, the resulting
+image is portable to different runtimes without change. Such portability
+helps Kubernetes on non-Linux platforms do its job.
 
 Entitlements shift the responsibility of designing sandboxing
 environments to image developers, but also empowers them to deliver
-security mechanisms directly to end users. Developers are the ones
-with the best knowledge about what their applications should or should
-not be doing. Image end-users, in turn, benefit from verifiable
-security properties delivered by the bundles and the expertise of
-image developers when they `docker pull` and `run` those images.
-
-Eddequiouaq gave the demo of the community's nemesis: Docker inside
-Docker (DinD). He picked that use case because it requires a lot of
-privileges, which usually means using the dreaded `--privileged`
-flag. With the entitlements patch, he was able to run DinD with
-`network.admin`, `security.admin` and `host.devices.admin`, which
-*looks* like `--privileged`, but actually means some protections are
-still in place. According to Eddequiouaq , "everything works and we
-didn't have to disable all the seccomp and apparmor profiles". He also
-gave a demo of how to build an image and demonstrated how `docker
-inspect` shows the entitlements bundled inside the image. With such an
-image, `docker run` starts a DinD image without any special flag. That
-requires a way to trust the content publisher because suddenly
-images can elevate their own privileges without the caller specifying

(Diff truncated)
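
For reference alongside the DinD demo described above (again plain
Docker as it exists today, not the entitlements prototype; the
container name is arbitrary), the usual workaround the talk wants to
retire looks like this:

    # the "dreaded" flag: hand the container nearly everything, because
    # DinD needs to manage containers of its own
    docker run --privileged --name dind -d docker:dind
    # with the entitlements prototype, the demo's equivalent used the
    # labels network.admin, security.admin and host.devices.admin
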
fixes after corbet's feedback
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 660d89b3..80fc1818 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -2,13 +2,16 @@ Entitlements: Understandable Container Security Controls
 ========================================================
 
 During [KubeCon + CloudNativeCon Europe 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/), Justin Cormack and
-Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level) a possible simplification security
-parameters in containers ecosystems, with the goal of making those
-controls easier to understand. The proposal is partly inspired by
-mobile apps on iOS and Android platforms, an idea that trickled
-back in Microsoft and Apple desktops. The time now seems now ripe for
-improving the field of container security that desperately needs
-simpler controls.
+Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level) simpler security parameters in
+containers ecosystems. Containers, as much as they [exist at all
+within the kernel](https://lwn.net/Articles/740621/) depend on a large set of intricate security
+primitives that can have weird interactions. Because they are so hard
+to use, people often just turn the whole thing off. The goal of the
+proposal is to make those controls easier to understand and use and is
+partly inspired by mobile apps on iOS and Android platforms, an idea
+that trickled back in Microsoft and Apple desktops. The time now seems
+ripe for improving the field of container security that
+desperately needs simpler controls.
 
 [![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
 
@@ -39,13 +42,14 @@ that others end up in overly broad boxes. Because of the kernel's data
 structure, there can't be more than 64 capabilities, so a bunch of
 functionality was lumped together into `CAP_SYS_ADMIN`.
 
-Cormack also talked about namespaces and seccomp. While there are less
-namespaces than capabilities, he said that "it's very unclear for a
-general user what their security properties are". For example, "some
-combinations of capabilities and namespaces will let you escape from a
-container, and other ones don't". As for seccomp, he described it as a
-"long JSON file" that could "usefully be even more complicated" and
-said that the files are "very difficult to write".
+Cormack also talked about namespaces and seccomp. While there are
+fewer namespaces than capabilities, he said that "it's very unclear
+for a general user what their security properties are". For example,
+"some combinations of capabilities and namespaces will let you escape
+from a container, and other ones don't". He also described seccomp as
+a "long JSON file" as that's the way Kubernetes configures it. Even
+though he said those files could "usefully be even more complicated"
+and said that the files are "very difficult to write".
 
 Cormack stopped his enumeration there, but the same applies to the
 other mechanisms. He said that while developers could sit down and
@@ -65,14 +69,15 @@ provide something humans can actually understand without diving into
 code or possibly even without reading documentation". The solution
 proposed by the Docker security team is "entitlements": the ability
 for users to choose simple permissions on the command line.
-Eddequiouaq said that users don't need to understand the low-level
-security mechanisms or how they interact within the kernel; "people
-don't care about that, they want to make sure their app is secure."
+Eddequiouaq said that app users and developers alike don't need to
+understand the low-level security mechanisms or how they interact
+within the kernel; "people don't care about that, they want to make
+sure their app is secure."
 
-Entitlements divide resources in meaningful domains like "network",
+Entitlements divide resources into meaningful domains like "network",
 "security", or "host resources" (like devices). Behind the scenes,
-Docker translate those into whatever security mechanisms
-available. This implies the actual mechanism deployed will vary
+Docker translates those into whatever security mechanisms that is
+available. This implies that the actual mechanism deployed will vary
 between runtimes, depending on the implementation.  For example, a
 "confined" network access might mean a seccomp filter blocking all
 networking-related system calls except `socket(AF_UNIX | AF_LOCAL)`
@@ -121,7 +126,7 @@ by the platforms instead of opting out of security configurations when
 they get a "permission denied" error. Eddequiouaq said that Docker
 eventually wants to "ditch the `--privileged` flag because it is
 really a bad habit". Instead, applications should run with the
-smallest privileges it needs. He said that "this is not the case [as]
+smallest privileges they need. He said that "this is not the case [as]
 currently, everyone works with defaults that work with 95% of the
 applications out there." Those Docker defaults, he said, provide a
 "way too big attack surface". 
@@ -139,7 +144,7 @@ something that's actually available on phones, where an app can only
 talk with a specific set of services. But that is also relevant to
 containers in Kubernetes clusters as administrators often need to
 restrict network access with more granularity than the
-"open/filter/close" options. An example of such policy could to allow
+"open/filter/close" options. An example of such policy could allow
 the "web" container to talk with the "database" container, although it
 might be difficult to specify such high-level policies in practice.
 
@@ -151,8 +156,8 @@ with the Docker security flags. But as we have [previously
 discussed](https://lwn.net/Articles/754443/), another challenge in Kubernetes security is that the
 security models of Kubernetes and Docker are not *exactly* identical.
 
-Eddequiouaq said that entitlements could help with share best security
-policies for a pod in Kubernetes. He proposed that such user input
+Eddequiouaq said that entitlements could help share best security
+policies for a pod in Kubernetes. He proposed that such configuration
 would happen through the `SecurityContext`. Another way would be an
 admission controller that would avoid conflicts between the
 entitlements in the image and existing `SecurityContext` profiles
@@ -171,15 +176,26 @@ controller.
 The Docker security team wants to create a "widely adopted standard"
 supported by Docker swarm, Kubernetes, or any container scheduler.
 But it's still unclear how deep into the Kubernetes stack entitlements
-belong. In the teams' current implementation, Docker translates
+belong. In the team's current implementation, Docker translates
 entitlements into the security mechanisms right before calling its
 runtime ([containerd](https://containerd.io/)), but it might be possible to push the
 entitlements concept straight into the runtime itself, as it knows
-best how the platform operates. Eddequiouaq said the proposal was open
-to changes and discussion so this is all work in progress at this
-stage. The next steps are to make a proposal to the Kubernetes
-community before working on an actual implementation outside of
-Docker.
+best how the platform operates. 
+
+Some grumpy kernel gurus might also notice fundamental similarities
+between this and other mechanisms such as OpenBSD's `pledge()`, which
+made me wonder if entitlements belong in user-space in the first
+place. Cormack observed that seccomp was such a "pain to work with to
+do complicated policies". He said that having [eBPF seccomp
+filters](https://lwn.net/Articles/747229/) would make it easier to deal with conflicts between
+policies and also mentioned the work done on the [Checkmate](https://lwn.net/Articles/696344/) and
+[Landlock](https://lwn.net/Articles/703876/) security modules as interesting avenues to explore. It
+seems that none of those kernel mechanisms are ready for prime time,
+at least not enough that Docker can use it in production. Eddequiouaq
+said the proposal was open to changes and discussion so this is all
+work in progress at this stage. The next steps are to make a proposal
+to the Kubernetes community before working on an actual implementation
+outside of Docker.
 
 I have found the core idea of protecting users from all the
 complicated stuff in container security interesting. It is a recurring

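The "long JSON file" Cormack refers to is the seccomp profile format
that Docker and Kubernetes consume. As a purely illustrative sketch
(deliberately tiny, far too restrictive to run a real container, and
with an arbitrary file name), the shape of such a file is:

    # write a toy seccomp profile just to show the shape of the file; a
    # usable one (like Docker's default profile) allows several hundred
    # system calls, which is exactly why these files are hard to write
    cat > toy-seccomp.json <<'EOF'
    {
      "defaultAction": "SCMP_ACT_ERRNO",
      "syscalls": [
        { "names": ["read", "write", "exit_group", "futex"],
          "action": "SCMP_ACT_ALLOW" }
      ]
    }
    EOF
    # it would then be loaded with:
    #   docker run --security-opt seccomp=toy-seccomp.json ...
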
update procedure to remove more cruft and cuter notmuch hook
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 976b35c5..aaa2a891 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -207,7 +207,8 @@ machines.
  5. cleanup cruft:
  
         ( cd Maildir-smd
-        find \( -name 'dovecot.index*' -o -name 'dovecot-uidlist' -o -name 'dovecot-keywords' \) -a -delete
+        find \( -name :list -o -name courierimapkeywords -o -name courierimapuiddb -o -name courierimapacl \) -a -delete
+        find \( -name maildirfolder -o -name 'dovecot.index*' -o -name 'dovecot-uidlist' -o -name 'dovecot-keywords' \) -a -delete
         \rm dovecot-uidvalidity* dovecot.mailbox.log subscriptions )
 
  6. replace notmuch metadata and do a backup
@@ -274,7 +275,11 @@ machines.
  14. send myself an email, which should create an email locally and
      remotely, run pull/push again
 
- 15. hook `notmuch new` in `~/.smd/hooks/post-pull.d/` (TODO)
+ 15. hook `notmuch new` in `~/.smd/hooks/post-pull.d/`, using this
+     simple executable (a text file with only that line, made
+     executable):
+     
+        #!/usr/bin/notmuch new
 
  16. hook `smd-loop` *somewhere*
 

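To actually install the one-line hook from step 15 above, something
like the following should do, assuming smd runs every executable in
that directory as the `post-pull.d` name suggests (the `notmuch-new`
file name is an arbitrary choice):

    mkdir -p ~/.smd/hooks/post-pull.d
    printf '#!/usr/bin/notmuch new\n' > ~/.smd/hooks/post-pull.d/notmuch-new
    chmod +x ~/.smd/hooks/post-pull.d/notmuch-new
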
rewrite sync procedure to include real downtime and full sync
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 5b83eeb3..976b35c5 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -190,8 +190,7 @@ obscure which will require some fixes in notmuch tagging scripts, but
 that will actually simplify things as we reduce the deltas between the
 machines.
 
- 1. run step 1-3 in first procedure (configuration, one-time, but
-    remove translators)
+ 1. run step 2 and 3 in first procedure (SSH configuration)
 
  2. create a copy on remote server:
 
@@ -201,81 +200,85 @@ machines.
  
         ssh anarc.at "~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log"
 
- 4. synchronize the Maildir folder:
+ 4. synchronize the Maildir folder; this takes about 20 minutes:
+
+        ssh anarc.at "tar czf - Maildir-smd/" | pv -s 7G | tar xfz -
  
-        ssh anarc.at "tar cf - Maildir-smd/ | pv -s 13G | gzip -c" | tar xf -
-
- 5. run `smd-check-conf` (step 5 above) to see if things are sane
-
- 6. cleanup cruft:
+ 5. cleanup cruft:
  
         ( cd Maildir-smd
         find \( -name 'dovecot.index*' -o -name 'dovecot-uidlist' -o -name 'dovecot-keywords' \) -a -delete
-        rm dovecot-uidvalidity* dovecot.mailbox.log subscriptions )
+        \rm dovecot-uidvalidity* dovecot.mailbox.log subscriptions )
 
- 7. replace notmuch metadata:
+ 6. replace notmuch metadata and do a backup
  
-        rm Maildir-smd/.notmuch/dump-201*
-        \rm -r Maildir-smd/.notmuch/{muchsync,xapian,hooks}/
+        ( cd Maildir-smd
+        \rm .notmuch/dump-201*
+        \rm -r .notmuch/{muchsync,xapian,hooks}/ )
         cp -a Maildir/.notmuch/{xapian,hooks} Maildir-smd/.notmuch/
+        notmuch dump > Maildir-smd/.notmuch/dump
 
- 8. run a test pull, should be no change:
- 
-        smd-pull --dry-run
-
- 9. test push, same:
- 
-        smd-push --dry-run
+ 7. we go live. do one last offlineimap run, stop offlineimap, stop
+    sending email locally, stop whatever writes to `~/Maildir/`. on
+    the server, this means stopping dovecot and postfix as well:
+    
+        sudo systemctl stop dovecot postfix
+        mv Maildir Maildir.orig && mv Maildir-smd Maildir
 
- 10. that works, so we go live. do one last offlienimap run, stop
-     offlineimap, stop sending email locally, stop whatever writes to
-     `~/Maildir/`
- 
- 11. put the right folder in place, change the path to the offlineimap
-     folder in `.offlineimaprc` in case it gets started by mistake.
+ 8. put the right folder in place, change the path to the offlineimap
+    folder in `.offlineimaprc` in case it gets started by mistake.
  
         mv Maildir Maildir-offlineimap
         mv Maildir-smd Maildir
 
- 12. change folders to point to real folders, change the translators,
-     and fix the exclude patterns:
+ 9. change folders to point to real folders, change the translators,
+    and fix the exclude patterns:
  
+        SERVERNAME=smd-server-anarcat
+        CLIENTNAME=curie-anarcat
         MAILBOX=Maildir
         EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
         TRANSLATOR_LR="smd-translate -m move -d LR default"
         TRANSLATOR_RL="smd-translate -m move -d RL default"
 
- 13. test push/pull again, only newer folders on remote should be
-     pulled, no change should have happened locally
+ 10. run a test pull, should be no change:
  
         smd-pull --dry-run
+
+     we had problems at this step previously because we were running
+     smd on the `Maildir-smd` folder directly, before the rename. it
+     might have been keeping state in `~/.smd/workarea` that was
+     confusing things so instead we made this direct procedure. the
+     problem was reported upstream in [bug #8](https://github.com/gares/syncmaildir/issues/8).
+
+ 11. test push, same:
+ 
         smd-push --dry-run
 
-     the above somewhat fails with:
+ 12. run `notmuch new`. this involves first fixing the `notmuch-tag`
+     and `notmuch-purge` scripts to remove the host-specific exception,
+     then dropping all the `unread`, `flagged` and `inbox` tags that
+     will be re-imported by mistake, then restoring from the backup:
      
-        smd-server: ERROR: Client aborted, removing /home/anarcat/.smd/curie-anarcat__Maildir.db.txt.new and /home/anarcat/.smd/curie-anarcat__Maildir.db.txt.mtime.new
-        smd-client: ERROR: Failed to add Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS since a file with the same name
-        smd-client: ERROR: exists but its content is different.
-        smd-client: ERROR: To fix this problem you should rename Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS
-        smd-client: ERROR: Executing `cd; mv -n "Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS" "Maildir/.bresil/cur/1527001064.1.localhost"` should work.
-        default: smd-client@localhost: TAGS: error::context(mail-addition) probable-cause(concurrent-mailbox-edit) human-intervention(necessary) suggested-actions(run(mv -n "/home/anarcat/.smd/workarea/Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS" "/home/anarcat/.smd/workarea/Maildir/.bresil/tmp/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS"))
-
-     i have tried the above `mv` command (the first one), only to get
-     hit with a similar error on a different message on the same
-     mailbox. i suspect it will bug me like that for ever one of the
-     187k emails, so there's something definitely wrong with
-     this. also notice how there are two possible `mv` commands to run
-     here, which are slightly different, and it's unclear which one to
-     run.
-
- 14. run `notmuch new`
-
- 15. run the real pull/push
+        notmuch new
+        notmuch tag -inbox -flagged -unread
+        pv ~/Maildir/.notmuch/dump | notmuch restore
+
+    running `notmuch new` the first time takes a loong time, maybe as
+    long as if it was a clean database. might be worth not copying the
+    `xapian` folder after all and just starting from scratch, as long as
+    we restore the previous dump.
+
+ 13. run the real pull/push
  
- 16. send myself an email, which should create an email locally and
+ 14. send myself an email, which should create an email locally and
      remotely, run pull/push again
 
+ 15. hook `notmuch new` in `~/.smd/hooks/post-pull.d/` (TODO)
+
+ 16. hook `smd-loop` *somewhere*
 
+ 17. create restricted shell
 
 Performance comparison
 ======================

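Step 4's tar-over-SSH pipe works, but since the document itself notes
that a clean rsync mirror would do the job, a resumable alternative
could look like this (standard rsync flags; the trailing slashes
matter):

    # can be restarted safely if interrupted, unlike the one-shot tar pipe;
    # --info=progress2 prints an overall progress estimate
    rsync -a --delete --info=progress2 anarc.at:Maildir-smd/ ~/Maildir-smd/
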
full sync attempt, fails
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index ba957ab7..5b83eeb3 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -207,7 +207,75 @@ machines.
 
  5. run `smd-check-conf` (step 5 above) to see if things are sane
 
- 6. run `smd-pull`, `smd-push`, etc. step 10 and following above.
+ 6. cleanup cruft:
+ 
+        ( cd Maildir-smd
+        find \( -name 'dovecot.index*' -o -name 'dovecot-uidlist' -o -name 'dovecot-keywords' \) -a -delete
+        rm dovecot-uidvalidity* dovecot.mailbox.log subscriptions )
+
+ 7. replace notmuch metadata:
+ 
+        rm Maildir-smd/.notmuch/dump-201*
+        \rm -r Maildir-smd/.notmuch/{muchsync,xapian,hooks}/
+        cp -a Maildir/.notmuch/{xapian,hooks} Maildir-smd/.notmuch/
+
+ 8. run a test pull, should be no change:
+ 
+        smd-pull --dry-run
+
+ 9. test push, same:
+ 
+        smd-push --dry-run
+
+ 10. that works, so we go live. do one last offlineimap run, stop
+     offlineimap, stop sending email locally, stop whatever writes to
+     `~/Maildir/`
+ 
+ 11. put the right folder in place, change the path to the offlineimap
+     folder in `.offlineimaprc` in case it gets started by mistake.
+ 
+        mv Maildir Maildir-offlineimap
+        mv Maildir-smd Maildir
+
+ 12. change folders to point to real folders, change the translators,
+     and fix the exclude patterns:
+ 
+        MAILBOX=Maildir
+        EXCLUDE="Maildir/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
+        TRANSLATOR_LR="smd-translate -m move -d LR default"
+        TRANSLATOR_RL="smd-translate -m move -d RL default"
+
+ 13. test push/pull again, only newer folders on remote should be
+     pulled, no change should have happened locally
+ 
+        smd-pull --dry-run
+        smd-push --dry-run
+
+     the above somewhat fails with:
+     
+        smd-server: ERROR: Client aborted, removing /home/anarcat/.smd/curie-anarcat__Maildir.db.txt.new and /home/anarcat/.smd/curie-anarcat__Maildir.db.txt.mtime.new
+        smd-client: ERROR: Failed to add Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS since a file with the same name
+        smd-client: ERROR: exists but its content is different.
+        smd-client: ERROR: To fix this problem you should rename Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS
+        smd-client: ERROR: Executing `cd; mv -n "Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS" "Maildir/.bresil/cur/1527001064.1.localhost"` should work.
+        default: smd-client@localhost: TAGS: error::context(mail-addition) probable-cause(concurrent-mailbox-edit) human-intervention(necessary) suggested-actions(run(mv -n "/home/anarcat/.smd/workarea/Maildir/.bresil/cur/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS" "/home/anarcat/.smd/workarea/Maildir/.bresil/tmp/1190657273_35.6763.mumia,U=10,FMD5=dbed73e537122b2e2505089b78e7b27c:2,FS"))
+
+     i have tried the above `mv` command (the first one), only to get
+     hit with a similar error on a different message on the same
+     mailbox. i suspect it will bug me like that for every one of the
+     187k emails, so there's something definitely wrong with
+     this. also notice how there are two possible `mv` commands to run
+     here, which are slightly different, and it's unclear which one to
+     run.
+
+ 14. run `notmuch new`
+
+ 15. run the real pull/push
+ 
+ 16. send myself an email, which should create an email locally and
+     remotely, run pull/push again
+
+
 
 Performance comparison
 ======================

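Before going live with a procedure like the one above, a cheap sanity
check is to count message files on both ends and compare; the numbers
should land around the ~187k used for the `pv` estimates (paths are the
ones from this procedure):

    # count actual messages, ignoring notmuch/dovecot metadata files
    ssh anarc.at 'find Maildir-smd \( -path "*/cur/*" -o -path "*/new/*" \) -type f | wc -l'
    find ~/Maildir-smd \( -path "*/cur/*" -o -path "*/new/*" \) -type f | wc -l
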
final first draft
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index f01ee9eb..660d89b3 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -2,41 +2,42 @@ Entitlements: Understandable Container Security Controls
 ========================================================
 
 During [KubeCon + CloudNativeCon Europe 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/), Justin Cormack and
-Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level) a new proposal to simplify security
-in containers ecosystems, with the goal of making security controls
-understandable. The proposal is partly inspired by mobile app security
-on iOS and Android platforms. That idea also trickled back in
-Microsoft and Apple desktops and seems now ripe for improving the
-field of container security that desperately needs simpler controls.
+Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level) a possible simplification of security
+parameters in containers ecosystems, with the goal of making those
+controls easier to understand. The proposal is partly inspired by
+mobile apps on iOS and Android platforms, an idea that trickled
+back in Microsoft and Apple desktops. The time now seems ripe for
+improving the field of container security that desperately needs
+simpler controls.
 
 [![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
 
 The problem with container security
 -----------------------------------
 
-Cormack first stated that container security is too complicated and
-hard to figure out. His slides stated bluntly that "unusable security
-is not security". He argued that container security mechanisms should
-be simple with clear guarantees for users.
-
-The problem is that container security includes all sorts of measures,
-some of which we have [previously covered](https://lwn.net/Articles/754433/). Cormack presented an
-overview of those mechanisms: capabilities, seccomp, AppArmor,
-SELinux, namespaces, cgroups, the list goes on. He showed how the
-[docker run](https://docs.docker.com/engine/reference/commandline/run/) help has a "ridiculously large number of options" with
-about fifteen just for security mechanisms. He said that "most
-developers don't know how to actually apply those mechanisms to make
-sure their containers are secure". In the best-case scenario, some
-people may know what the options are, but in most cases people don't
-actually understand each mechanism in detail.
-
-He gave the example of capabilities, which, remember, are not enabled
-but *dropped* with `docker run --cap-drop`. There are about 40
-possible values for that one specific option. He described some
-capabilities as "understandable", but said that others end up in
-overly broad boxes. Because of the kernel's data structure, there
-can't be more than 64 capabilities, which made a bunch of
-functionality lumped together into `CAP_SYS_ADMIN`.
+Cormack first stated that container security is too complicated. His
+slides stated bluntly that "unusable security is not security" and he
+pleaded for simpler container security mechanisms with clear
+guarantees for users.
+
+"Container security" is a catch phrase that actually includes all
+sorts of measures, some of which we have [previously
+covered](https://lwn.net/Articles/754433/). Cormack presented an overview of those mechanisms:
+capabilities, seccomp, AppArmor, SELinux, namespaces, cgroups, the
+list goes on. He showed how [docker run --help](https://docs.docker.com/engine/reference/commandline/run/) has a "ridiculously
+large number of options"; a hundred, on my machine, with about fifteen
+just for security mechanisms. He said that "most developers don't know
+how to actually apply those mechanisms to make sure their containers
+are secure". In the best-case scenario, some people may know what the
+options are, but in most cases people don't actually understand each
+mechanism in detail.
+
+He gave the example of capabilities; there are about forty possible
+values, just for that one `--cap-drop` option, each with its own
+meaning. He described some capabilities as "understandable", but said
+that others end up in overly broad boxes. Because of the kernel's data
+structure, there can't be more than 64 capabilities, so a bunch of
+functionality was lumped together into `CAP_SYS_ADMIN`.
 
 Cormack also talked about namespaces and seccomp. While there are less
 namespaces than capabilities, he said that "it's very unclear for a
@@ -46,13 +47,13 @@ container, and other ones don't". As for seccomp, he described it as a
 "long JSON file" that could "usefully be even more complicated" and
 said that the files are "very difficult to write".
 
-Cormack stopped the demonstration there, but the same applies to other
-mechanisms. The point is that while developers could sit down and
+Cormack stopped his enumeration there, but the same applies to the
+other mechanisms. He said that while developers could sit down and
 write those policies for their application by hand, it's a real mess
-and will make your head explode. So instead developers run their
-containers in `--privileged` mode because it works, but that disables
-all the nice security mechanisms. This is why "containers do not
-contain", as Dan Walsh [quipped](https://blog.docker.com/2014/07/new-dockercon-video-docker-security-renamed-from-docker-and-selinux/).
+and makes their heads explode. So instead developers run their
+containers in `--privileged` mode. It works, but it disables all the
+nice security mechanisms containers provides. This is why "containers
+do not contain", as Dan Walsh now famously [quipped](https://blog.docker.com/2014/07/new-dockercon-video-docker-security-renamed-from-docker-and-selinux/).
 
 Introducing entitlements
 ------------------------
@@ -65,12 +66,12 @@ code or possibly even without reading documentation". The solution
 proposed by the Docker security team is "entitlements": the ability
 for users to choose simple permissions on the command line.
 Eddequiouaq said that users don't need to understand the low-level
-security mechanisms or how they interact within the kernel: "people
+security mechanisms or how they interact within the kernel; "people
 don't care about that, they want to make sure their app is secure."
 
-The resources are divided in meaningful domains, for example
-"network", "security", or "host resources" (like devices). Behind the
-scenes, the runtime translate those into the security mechanisms
+Entitlements divide resources in meaningful domains like "network",
+"security", or "host resources" (like devices). Behind the scenes,
+Docker translate those into whatever security mechanisms
 available. This implies the actual mechanism deployed will vary
 between runtimes, depending on the implementation.  For example, a
 "confined" network access might mean a seccomp filter blocking all
@@ -79,104 +80,110 @@ with dropped network capabilities. AppArmor will `deny network` on
 some platforms while SELinux would do similar enforcement on others.
 
 Eddequiouaq said the complexity of those mechanisms is the
-responsibility of platform developers and {abstracted away} from
-end-users to make their lives easier. But entitlements are also
-designed to ship in container images (with a regular `docker build`)
-and image developers can sign the whole bundle (with `docker
-trust`). Because entitlement do not specify explicit low-level
-mechanisms, the resulting security profile is portable to different
-runtimes without change as well. Such an abstractions are also a key
-part in the work to port Kubernetes to non-Linux platforms, something
-the [previous discussion on the topic]([Tim Allclair's presentation](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google)) said would be useful to the
-[SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows) people.
-
-Entitlements shifts the responsibility of designing sandboxing
+responsibility of platform developers. Image developers can ship
+entitlements lists along container images, with a regular `docker
+build`, and sign the whole bundle, with `docker trust`. Because
+entitlement do not specify explicit low-level mechanisms, the
+resulting image is portable to different runtimes without change. Such
+portability helps Kubernetes non-Linux platforms do their job. Indeed,
+the [previous discussion on the topic]([Tim Allclair's presentation](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google)) pointed out how this would
+be useful to the [SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows) people.
+
+Entitlements shift the responsibility of designing sandboxing
 environments to image developers, but also empowers them to deliver
 security mechanisms directly to end users. Developers are the ones
 with the best knowledge about what their applications should or should
 not be doing. Image end-users, in turn, benefit from verifiable
 security properties delivered by the bundles and the expertise of
-image developers when they `docker pull` and run those images.
+image developers when they `docker pull` and `run` those images.
+
+Eddequiouaq gave the demo of the community's nemesis: Docker inside
+Docker (DinD). He picked that use case because it requires a lot of
+privileges, which usually means using the dreaded `--privileged`
+flag. With the entitlements patch, he was able to run DinD with
+`network.admin`, `security.admin` and `host.devices.admin`, which
+*looks* like `--privileged`, but actually means some protections are
+still in place. According to Eddequiouaq, "everything works and we
+didn't have to disable all the seccomp and apparmor profiles". He also
+gave a demo of how to build an image and demonstrated how `docker
+inspect` shows the entitlements bundled inside the image. With such an
+image, `docker run` starts a DinD image without any special flag. That
+requires a way to trust the content publisher because suddenly
+images can elevate their own privileges without the caller specifying
+anything on the Docker command line.
 
 Goals and future
 ----------------
 
-The specification {aims to} provide the best user experience possible,
+The specification aims to provide the best user experience possible,
 so that people actually start using the security mechanisms provided
 by the platforms instead of opting out of security configurations when
-they get a "permission denied" error. The goal is also to create a
-"widely adopted standard" supported by Docker swarm, Kubernetes, or
-any container scheduler. Eddequiouaq said that Docker eventually wants
-to "ditch the `--privileged` flag because it is really a bad
-habit". Instead, applications should run with the smallest privileges
-it needs. He said that "this is not the case [as] currently, everyone
-works with defaults that work with 95% of the applications out there."
-Those Docker defaults, he said, provide a "way too big attack
-surface". He also opened the possibility for developers to define
-custom entitlements because "it's hard to come up with a set that will
-cover all needs".
+they get a "permission denied" error. Eddequiouaq said that Docker
+eventually wants to "ditch the `--privileged` flag because it is
+really a bad habit". Instead, applications should run with the
+smallest privileges it needs. He said that "this is not the case [as]
+currently, everyone works with defaults that work with 95% of the
+applications out there." Those Docker defaults, he said, provide a
+"way too big attack surface". 
+
+Eddequiouaq opened the door for developers to define custom
+entitlements because "it's hard to come up with a set that will cover
+all needs". One way the team thought of dealing with that uncertainty
+is to have versions of the specification but it is unclear how that
+would work in practice. Would the version be in the entitlement labels
+(e.g. `network-v1.admin`), or out of band?
 
 Another feature proposed is the control of API access and
 service-to-service communication in the security profile. This is
 something that's actually available on phones, where an app can only
-talk with a specific set of services. {This also applies to the
-container world as allowing network access is one thing, but

(Diff truncated)
second pass on first draft
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index f26a21f3..f01ee9eb 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -1,12 +1,13 @@
 Entitlements: Understandable Container Security Controls
 ========================================================
 
-During {KubeCon}, Justin Cormack and Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level)
-a new proposal to simplify security in containers ecosystems, with the
-goal of making security controls understandable. The proposal is
-clearly inspired by mobile app security on iOS and Android platforms,
-but that has also trickled back in Microsoft and Apple desktops as
-well.
+During [KubeCon + CloudNativeCon Europe 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/), Justin Cormack and
+Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level) a new proposal to simplify security
+in containers ecosystems, with the goal of making security controls
+understandable. The proposal is partly inspired by mobile app security
+on iOS and Android platforms. That idea has also trickled back into
+Microsoft and Apple desktops and now seems ripe for improving the
+field of container security, which desperately needs simpler controls.
 
 [![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
 
@@ -14,44 +15,44 @@ The problem with container security
 -----------------------------------
 
 Cormack first stated that container security is too complicated and
-hard to figure out, stating the motto "unusable security is not
-security". Security, instead, should be simple with a clear set of
-guarantees for the users.
+hard to figure out. His slides stated bluntly that "unusable security
+is not security". He argued that container security mechanisms should
+be simple with clear guarantees for users.
 
 The problem is that container security includes all sorts of measures,
-some of which we have [previously covered][{}]. Cormack presented what
-seemed like an endless list of such mechanisms: capabilities, seccomp,
-AppArmor, SELinux, namespaces, cgroups... Some people may know what
-they are from the documentation, but it takes a lot of effort to
-actually understand in detail what each does. The result is that most
+some of which we have [previously covered](https://lwn.net/Articles/754433/). Cormack presented an
+overview of those mechanisms: capabilities, seccomp, AppArmor,
+SELinux, namespaces, cgroups, the list goes on. He showed how the
+[docker run](https://docs.docker.com/engine/reference/commandline/run/) help has a "ridiculously large number of options" with
+about fifteen just for security mechanisms. He said that "most
 developers don't know how to actually apply those mechanisms to make
-sure their containers are secure. As an example, he said the [docker
-run](https://docs.docker.com/engine/reference/commandline/run/) help has a "ridiculously large number of options" with about
-fifteen relevant to security. So right off the bat, there is a wall of
-possible mechanisms to figure out.
-
-But even individual mechanisms are hard to understand and
-configure. He gave the example of capabilities {`docker run
---cap-drop`}: there are about 40 possible values for that one specific
-option. He described some capabilities as "understandable", but said
-that others end up in overly broad boxes. For example, since there
-can't be more than 64 (because of the kernel's data structure),
-everything else is lumped into `CAP_SYS_ADMIN`.
+sure their containers are secure". In the best-case scenario, some
+people may know what the options are, but in most cases people don't
+actually understand each mechanism in detail.
+
+He gave the example of capabilities, which, remember, are not enabled
+but *dropped* with `docker run --cap-drop`. There are about 40
+possible values for that one specific option. He described some
+capabilities as "understandable", but said that others end up in
+overly broad boxes. Because of the kernel's data structure, there
+can't be more than 64 capabilities, which means a lot of
+functionality gets lumped together into `CAP_SYS_ADMIN`.
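+
+For instance, this is roughly what the standard Docker flags look like
+today (shown only to illustrate the verbosity the talk complains
+about):
+
+    # drop every capability, then add back the single one a web server
+    # needs to bind to a privileged port
+    docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx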
 
 Cormack also talked about namespaces and seccomp. While there are fewer
 namespaces than capabilities, he said that "it's very unclear for a
-general user what the security properties are". For example, "some
+general user what their security properties are". For example, "some
 combinations of capabilities and namespaces will let you escape from a
-container, and other ones don't". As for seccomp, he described it as
-an "extremely long JSON file" that could "usefully be even more
-complicated" and said that the files are "very difficult to write".
-
-The list of mechanisms that make up containers in the Linux kernel
-could go on and on. The point is that while developers could sit down
-and write those policies for their application by hand, it's a real
-mess and makes their head explode. So instead developers run their
-containers in `--privileged` mode because it works, but then all those
-nice security mechanisms are thrown out the window.
+container, and other ones don't". As for seccomp, he described it as a
+"long JSON file" that could "usefully be even more complicated" and
+said that the files are "very difficult to write".
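+
+To get a sense of the scale, one can download Docker's default seccomp
+profile from the moby repository and count its lines (the exact number
+varies between versions):
+
+    curl -s https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json | wc -l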
+
+Cormack stopped the demonstration there, but the same applies to other
+mechanisms. The point is that while developers could sit down and
+write those policies for their application by hand, it's a real mess
+and will make your head explode. So instead developers run their
+containers in `--privileged` mode because it works, but that disables
+all the nice security mechanisms. This is why "containers do not
+contain", as Dan Walsh [quipped](https://blog.docker.com/2014/07/new-dockercon-video-docker-security-renamed-from-docker-and-selinux/).
 
 Introducing entitlements
 ------------------------
@@ -61,62 +62,59 @@ Introducing entitlements
 There must be a better way. Eddequiouaq proposed this simple idea: "to
 provide something humans can actually understand without diving into
 code or possibly even without reading documentation". The solution
-proposed are "entitlements": the ability for users to choose simple
-permissions on the command line. "Users don't need to understand the
-low-level security mechanisms or how they interact within the
-kernel"{check}, Eddequiouaq said. "People don't care about that, they
-want to make sure their app is secure."
-
-The resources are divided in meaningful domains, for example if the
-"network" should be confined, proxied, or with full access. Similar
-properties are defined for "security" or "host resources" (like
-devices). Behind the scenes, the runtime translate those into the
-security mechanisms available. This implies the actual mechanism
-deployed will vary between runtimes, depending on the implementation.
-For example, a confined network access might mean a seccomp filter
-blocking all networking-related system calls except `socket(AF_UNIX |
-AF_LOCAL)` with dropped network capabilities. AppArmor will be used to
-`deny network` on some platforms while SELinux would be used to do
-similar enforcement on others.
-
-Eddequiouaq said the complexity of those mechanisms is responsibility
-of platform developers and should be abstracted away for
-end-users. This makes it easier for users but also enables application
-developers to translate their knowledge of their application in a
-entitlement list bundled with the image. Entitlements are designed to
-be baked into container images by `docker build` and the whole thing
-can be signed by the image developers (with `docker trust`). Because
-entitlement do not include low-level specifications like AppArmor, the
-resulting security profile is portable to different backends without
-change as well. Such an abstracted mechanism would also be a key step
-in supporting non-Linux runtimes as well. {check SIG Windows}
-
-{This shifts the responsibility of designing sandboxing environments
-to image developers, but also empowers them to deliver security
-mechanisms to the end user. They are the ones with the most knowledge
-about what their applications should or should not be doing and should
-be allowed to specify that as part of their images. Image end-users,
-in turn, benefit from verifiable security properties delivered by the
-bundles and the expertise of image developers when they `docker pull`
-and run those images.}
+proposed by the Docker security team is "entitlements": the ability
+for users to choose simple permissions on the command line.
+Eddequiouaq said that users don't need to understand the low-level
+security mechanisms or how they interact within the kernel: "people
+don't care about that, they want to make sure their app is secure."
+
+The resources are divided into meaningful domains, for example
+"network", "security", or "host resources" (like devices). Behind the
+scenes, the runtime translates those into the security mechanisms
+available. This implies the actual mechanism deployed will vary
+between runtimes, depending on the implementation. For example, a
+"confined" network access might mean a seccomp filter blocking all
+networking-related system calls except `socket(AF_UNIX | AF_LOCAL)`
+with dropped network capabilities. AppArmor will `deny network` on
+some platforms while SELinux would do similar enforcement on others.
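+
+As an illustration (not how entitlements are implemented, just a rough
+approximation with today's standard Docker flags, where the profile
+file and image names are placeholders), a "confined" network could be
+emulated along these lines:
+
+    # drop the network-related capabilities and apply a custom seccomp profile
+    docker run --cap-drop=NET_ADMIN --cap-drop=NET_RAW \
+               --security-opt seccomp=confined-network.json \
+               myapp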
+
+Eddequiouaq said the complexity of those mechanisms is the
+responsibility of platform developers and should be abstracted away
+from end-users to make their lives easier. But entitlements are also
+designed to ship in container images (with a regular `docker build`)
+and image developers can sign the whole bundle (with `docker
+trust`). Because entitlements do not specify explicit low-level
+mechanisms, the resulting security profile is portable to different
+runtimes without change as well. Such abstractions are also a key
+part of the work to port Kubernetes to non-Linux platforms, something
+the [previous discussion on the topic](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google) said would be useful to the
+[SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows) people.
+
+Entitlements shift the responsibility of designing sandboxing
+environments to image developers, but also empowers them to deliver
+security mechanisms directly to end users. Developers are the ones
+with the best knowledge about what their applications should or should
+not be doing. Image end-users, in turn, benefit from verifiable
+security properties delivered by the bundles and the expertise of
+image developers when they `docker pull` and run those images.
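+
+A sketch of the intended workflow, using existing Docker commands (how
+the entitlements themselves get embedded in the image is still an open
+part of the proposal; only the build, signing, and inspection commands
+below are standard, and the image name is a placeholder):
+
+    # image developer: build and sign the image (Docker Content Trust)
+    docker build -t example.com/myapp:1.0 .
+    docker trust sign example.com/myapp:1.0
+
+    # end user: pull the image and look at what it declares
+    docker pull example.com/myapp:1.0
+    docker inspect example.com/myapp:1.0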
 
 Goals and future
 ----------------
 
-The specification aims to provide the best user experience possible,
+The specification {aims to} provide the best user experience possible,
 so that people actually start using the security mechanisms provided
 by the platforms instead of opting out of security configurations when
 they get a "permission denied" error. The goal is also to create a
-"widely adopted standard" that would be supported by all platforms
-like Kubernetes, Docker swarm or any container schedulers. Eddequiouaq
-said that Docker eventually wants to "ditch the `--privileged` flag
-because it is really a bad habit". Instead, applications should run
-with the smallest privileges it needs. He said that "this is not the
-case [as] currently, everyone works with defaults that work with 95%
-of the applications out there." Those Docker defaults, he said,
-provide a "way too big attack surface". He also opened the possibility
-for developers to define custom entitlements because "it's hard to
-come up with a set that will cover all needs"{check}.
+"widely adopted standard" supported by Docker swarm, Kubernetes, or
+any container scheduler. Eddequiouaq said that Docker eventually wants
+to "ditch the `--privileged` flag because it is really a bad
+habit". Instead, applications should run with the smallest privileges

(Diff truncated)
finish first draft, remove notes from previous KC
diff --git a/blog/container-isolation.mdwn b/blog/container-isolation.mdwn
deleted file mode 100644
index fb0ed8e1..00000000
--- a/blog/container-isolation.mdwn
+++ /dev/null
@@ -1,95 +0,0 @@
-containers isolation
---------------------
-
-Tim Allclair. works on the node team, focus on security
-
-lead a discussion on container isolation. just ask a bunch of
-question.
-
-two big security areas
-
-usability of isolation mechs. namespaces, caps, seccomp, apparmor,
-selinux... to do maximum security you need to be an expert in the
-linux kernel or you need to run something on top.
-
-proposal from Nas and the secteam at docker for something like
-entitlements. inspired by the permissions model on apps and
-android. higher level abstractions. proxy network traffic, local
-storage, etc. to get portability. SIG windows.
-
-libentitlement under the moby project in progress
-
-pushed into the runtimes? cri-o gets list of entitlements and then
-does its own thing and then containerd does its own thing... or do we
-resolve in the kubelet? or in the control plane... prolly a bad idea
-because mixed setups will have problems.
-
-what should the list be, and how does that map to the lower-level. 
-
-is this feasible? are there enough high order bits to fit workloads
-into those buckets or is there a smooth spectrum of what apps need. do
-the defaults work now no? --privilege
-
-selinux is the same. setenforce=0. introspection: show which command
-to run to allow this instead of disabling everything.
-
-most commons ones should be portable across platforms. we won't get
-the set right the first time.. 
-
-example? docker proposal: https://github.com/moby/moby/issues/32801
-
-will be more coarse than previous ones. you may give out more
-permissions.
-
-QoS impl. in k8s is coarsed-grain makes it pretty easy to get things
-right, instead of fiddling around.
-
-this could be integrated in RBAC to have mixed node envs. will need
-support from scheduler
-
-acess devices to get accelerators will require some of this for some
-devices... ?
-
-next steps: proposal around this in k8s libentitlement already being
-dev'd, planned to be integrated into docker. come up with a common
-model to get the same abstractions across platforms.
-
-https://github.com/moby/libentitlement
-
-secure containers: stronger isolation than what the linux kernel
-offers. or not.
-
-sandboxing untrusted code: blackbox legacy app that i don't trust to
-limit compromise or multitenant: multiple mutually untrusted users.
-
-right level of abstractions
-
-
-what are we talking about?
-
-using the term sandbox instead of hypervisor or vm - we may not want
-to resrtct ourselves
-
-having a hypervisor sandbox may mean giving more privilege at the
-linux level.
-
-we also need to consider resource-isolation, as a reliability problem
-local DOS issues are out of the threat model. adding it to the
-sandboxing model would add it to the thread model. fork bomb was the
-first example. it is in the roadmap.
-
-k8s uses a per-pod policy for separation. pod-specific model also
-models serverless models as well and fits better with the networking
-layer
-
-at the container level allows more fine-grained model. example:
-trusted istio proxy or monitoring agents that has a secret to talk
-with a server.
-
-you could do both - have the sandbox at the pod boundary and have the
-container refine it. and ther could be a use case for this. 
-
-do we want to choose implementation or meaning? aka "VM" vs
-"Kata". maybe entitlements shuld define what the final level is.
-
-try and get out a proposal on this. 
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 95541ae3..f26a21f3 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -1,107 +1,97 @@
 Entitlements: Understandable Container Security Controls
 ========================================================
 
-{write lead}
-
-https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level
-
-https://www.youtube.com/watch?v=Jbqxsli2tRw
-
-overlaps:
-https://kccnceu18.sched.com/event/Dqvz/horizontal-pod-autoscaler-reloaded-scale-on-custom-metrics-maciej-pytel-google-solly-ross-red-hat-intermediate-skill-level
-
-notes from last kubecon relevant here:
-
- * [Docker proposal](https://github.com/moby/moby/issues/32801) from
-   april 2017
- * [libentitlement](https://github.com/moby/libentitlement) is a
-   thing, but no dev since december
-
-Justin Cormack, Nassim Eddequiouaq, from Docker security dept.
+During {KubeCon}, Justin Cormack and Nassim Eddequiouaq [presented](https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level)
+a new proposal to simplify security in containers ecosystems, with the
+goal of making security controls understandable. The proposal is
+clearly inspired by mobile app security on iOS and Android platforms,
+but that has also trickled back in Microsoft and Apple desktops as
+well.
 
 [![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
 
-{intro}
-
-{With the goal of making security controls understandable.}
-
 The problem with container security
 -----------------------------------
 
-Cormack started by saying that container security is often seen as a
-"complicated subject" but that should be simple with a set of
-garantees. Kubernetes should respect the motto: "unusable security is
-not security".
-
-Container security includes all sorts of measures, some of which we
-have [previously covered][{}]. Cormack presented what seemed likle an
-endless list of measures: capabilities, seccomp, AppArmor, SELinux,
-namespaces, cgroups... He said that some people may know what they are
-from the documentation, but that it takes a lot of effort to actually
-understand in detail what they do. The result is that most developers
-don't know how to actually apply those mechanisms to make sure their
-containers are secure. He gave the example of the [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
-help page that has a "ridiculously large number of options" with about
-fifteen of those relevant to security configuration.
-
-Individual mechanisms are hard to understand and configuration. He
-gave the example of capabilities that are controlled by `docker run
---cap-drop` : there are about 40 possible values for that one specific
-option. Since there can't be more than 64 (because of the kernel's
-data structure), everything else is lumped into `CAP_SYS_ADMIN` that
-allows everything. He described some capabilities as understandable,
-but said that others end up in overly broad boxes. The kernel has less
-namespaces (7) than capabilities, yet he said that "it's very unclear
-for a general user what the security properties are". For example,
-"some combinations of capabilities and namespaces will let you escape
-from a container, and other ones don't". He described seccomp as an
-"extremely long JSON file" that could "usefully be even more
-complicated". Those files are obviously "very difficult to write".
-
-The list of such parameters to feed into the kernel could go on. The
-point is that while developers could sit down and write those policies
-for their application by hand, it's a real mess and makes their head
-explode. So instead developers run their containers in `--privileged`
-mode because it works, but then all those nice security mechanisms are
-thrown out the window.
+Cormack first stated that container security is too complicated and
+hard to figure out, stating the motto "unusable security is not
+security". Security, instead, should be simple with a clear set of
+guarantees for the users.
+
+The problem is that container security includes all sorts of measures,
+some of which we have [previously covered][{}]. Cormack presented what
+seemed like an endless list of such mechanisms: capabilities, seccomp,
+AppArmor, SELinux, namespaces, cgroups... Some people may know what
+they are from the documentation, but it takes a lot of effort to
+actually understand in detail what each does. The result is that most
+developers don't know how to actually apply those mechanisms to make
+sure their containers are secure. As an example, he said the [docker
+run](https://docs.docker.com/engine/reference/commandline/run/) help has a "ridiculously large number of options" with about
+fifteen relevant to security. So right off the bat, there is a wall of
+possible mechanisms to figure out.
+
+But even individual mechanisms are hard to understand and
+configure. He gave the example of capabilities {`docker run
+--cap-drop`}: there are about 40 possible values for that one specific
+option. He described some capabilities as "understandable", but said
+that others end up in overly broad boxes. For example, since there

(Diff truncated)
tweaks
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 8df27bf7..95541ae3 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -1,6 +1,8 @@
 Entitlements: Understandable Container Security Controls
 ========================================================
 
+{write lead}
+
 https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level
 
 https://www.youtube.com/watch?v=Jbqxsli2tRw
@@ -177,14 +179,16 @@ somehow it gets punted forward in his priorities.
 {entitlements will need to be versionned because we won't get it right
 the first time.}
 
-OPA worth looking at. nice way of writing policies, but still need way
-to implement them... not reasonable to write about that level. you
-*can* write seccomp policies in OPA directly, but i wouldn't recommend
-it.
+{conclusion}
 
 Notes
 =====
 
+{[OPA](https://www.openpolicyagent.org/) worth looking at. nice way of writing policies, but still need way
+to implement them... not reasonable to write about that level. you
+*can* write seccomp policies in OPA directly, but i wouldn't recommend
+it.}
+
 {Eddequiouaq compared the approach to the one used by Apple or
 Microsoft for their applications and said we should do the same with
 our containers.}
@@ -201,6 +205,6 @@ the admission controller {?}}
 {gvisor is emulation based, does not let you do higher privilege
 stuff.}
 
-{q: followup work on [Tim Allclair's presentation](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google).nassim was
+{q: followup work on [Tim Allclair's presentation](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google)? nassim was
 supposed to present but couldn't make it.}
 

barely finish first draft
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 1d6044d2..8df27bf7 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -8,7 +8,6 @@ https://www.youtube.com/watch?v=Jbqxsli2tRw
 overlaps:
 https://kccnceu18.sched.com/event/Dqvz/horizontal-pod-autoscaler-reloaded-scale-on-custom-metrics-maciej-pytel-google-solly-ross-red-hat-intermediate-skill-level
 
-
 notes from last kubecon relevant here:
 
  * [Docker proposal](https://github.com/moby/moby/issues/32801) from
@@ -135,66 +134,54 @@ services. This also applies to the container world as allowing network
 access is one thing, but restricting it to a set of service is also a
 very valid use case and an interesting security property to have.
 
-16m22, keep going.
-
- * still an open discussion
-
-demo on docker because that's where we had to implement it first
-anyways. we want to implement our nemesis: docker in docker, which
-needs a lot of privileges. this translates into net_admin, security
-admin an host.devices=admin, instead of --privileged. everything
-works - we didn't have to disable all the seccomp/apparmor profiles.
-
-docker inspect shows the entitlements
-
-then run works without any options
-
-what about k8s? same usability issues, PodSecurityPolicy map 1:1 to
-docker issues. no way to share best security policies for a pod.
-
-user input would happen through securitycontext or admission
-controller, which would avoid conflicts with existing seccomp [?]
-profiles. not sure if we want to have the rules additive or
-restrictive.  we don't want to break the SecurityContext format. 
-
-simple mech to inherit entitlements from images, WIP in docker
-builds. in Dockerfile you can inherit, but you should be able to do
-that in the admission controller
-
-translation is done before handing over to containerd
-
-seccomp is a real pain to work w/ . 
-
-landlock and checmate progressed recently to improve ebpd...
-
-gvisor is emulation based, does not let you do higher privilege
-stuff.
+{16m22.. Eddequiouaq said the proposal was open to changes and those
+possible improvements were completely open to discussion.}
+
+He {who} gave the demo of the "nemesis": Docker inside Docker, a good
+example because it requires a lot of privileges. In entitlements, this
+translates into `net=admin`, `security=admin` and
+`host.devices=admin`. According to {who}, "everything works, we didn't
+have to disable all the seccomp/apparmor profiles" so no more
+`--privileged`.
+
+{docker inspect shows the entitlements
+
+then run works without any options}
+
+Entitlements were first implemented in Docker as this is a
+pre-requisite to get it working in Kubernetes, even though there are
+now [other runtimes]({}). Kubernetes has the same usability issues as
+Docker, however, so ultimately, the goal is to get that proposal
+working there as well. The `PodSecurityPolicy` maps (almost)
+one-to-one with the issues and there is currently no good way to share
+best security policies for a pod. One of the issues to implement this
+is exactly that the security models of Kubernetes and Docker are not
+*exactly* identical, as we've [previously discussed][{}]. This means
+entitlements will be {are?} easier to implement in [Docker swarm][{}]
+as it obviously uses the Docker semantics.
+
+The speakers {who?} proposed that user input would happen through the
+`SecurityContext` or admission controller, which would avoid conflicts
+with existing seccomp {?} profiles. He is not sure whether the rules
+are additive or restrictive: "we don't want to break the existing
+`SecurityContext` {format?}." A simple mechanism to inherit
+entitlements from images is available in docker build as a work in
+progress.
+
+Again, the core idea here is to protect users from all the complicated
+stuff: right now Docker does the translation before passing the
+profiles to the runtime ([containerd][{}], in this case) but this
+really belongs in the CRI. Cormack wants to see this standardized but
+somehow it gets punted forward in his priorities. 
+
+{entitlements will need to be versioned because we won't get it right
+the first time.}
 
 OPA worth looking at. nice way of writing policies, but still need way
 to implement them... not reasonable to write about that level. you
 *can* write seccomp policies in OPA directly, but i wouldn't recommend
 it.
 
-we want to get this standardized, it can be in the admission
-controller first, but it really belongs in CRI. there's weird stuff in
-the spec right now, it would be nice to fix it there. Cormack's job,
-but gets punted often... 
-
-
-
-entitlements will need to be versionned because we won't get it right
-the first time.
-
-q: followup work on Tim Allclair's presentation?
-
-re:
-https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google
-
-yes, nassim was supposed to present but couldn't make it
-
-easier to make it with swarm, possible with kubernetes, but the model
-doesn't map directly, only 95%... 
-
 Notes
 =====
 
@@ -204,3 +191,16 @@ our containers.}
 
 q: can we review the profile?
 
+{in Dockerfile you can inherit, but you should be able to do that in
+the admission controller {?}}
+
+{seccomp is a real pain to work w/ . }
+
+{landlock and checmate progressed recently to improve eBPF...}
+
+{gvisor is emulation based, does not let you do higher privilege
+stuff.}
+
+{q: followup work on [Tim Allclair's presentation](https://kccncna17.sched.com/event/D2Xz/sig-node-deep-dive-hosted-by-dawn-chen-google).nassim was
+supposed to present but couldn't make it.}
+

add documentation on encrypted remotes
diff --git a/services/backup.mdwn b/services/backup.mdwn
index ae4d62f4..4dc06209 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -13,7 +13,7 @@ drive.
 
 Most backups are performed with [borg](http://borgbackup.rtfd.org/) but some offsite backups are
 still done with [bup](https://bup.github.io/) for historical reasons but may be migrated to
-another storage system.
+another storage system, see below for progress.
 
 Backup storage
 --------------
@@ -258,7 +258,10 @@ controller that didn't support the 8TB drive (it was detected as 2TB)
 so I had to connect it in my workstation instead (an Intel NUC, which
 meant a tangled mess).
 
-Remaining work on borg:
+All this needs to be documented better and merged with the above
+documentation.
+
+### Remaining work on borg
 
 1. decide what to do with `/var/log` (currently excluded because we
     want lower retention on those)
@@ -266,7 +269,7 @@ Remaining work on borg:
  2. prune policies, skipped for now because [incompatible with
     append-only]( https://github.com/borgbackup/borg/issues/2251)
 
- 3. enable crypto:
+ 3. automate crypto:
  
     a. change passphrase
     b. include it in script here
@@ -290,7 +293,7 @@ Remaining work on borg:
 
  7. document this in the borg documentation itself?
 
-Remaining work on git-annex:
+### Remaining work on git-annex
 
  1. switch git-annex remotes and borg repo to remote server when drive
     is installed (done)
@@ -314,6 +317,8 @@ Remaining work on git-annex:
  4. make repositories made [append-only](https://git-annex.branchable.com/todo/append-only_mode/?updated), not currently supported
     by git-annex
 
+### Random git-annex docs
+
 This is how the git-annex repositories were setup at first:
 
     for r in  audiobooks books espresso incoming mp3 playlists podcast roms video; do 
@@ -325,7 +330,83 @@ This is how the git-annex repositories were setup at first:
         git -C /srv/$r annex sync --content
     done
 
-References:
+### Encrypted remotes
+
+To set up the encrypted remotes for the pictures, first the git-annex
+objects:
+
+    Photos$ git annex initremote offsite-annex type=rsync rsyncurl=user@example.net:/srv/Photos.annex/ encryption=hybrid keyid=8DC901CE64146C048AD50FBB792152527B75921E
+    Photos$ git annex sync --content offsite-annex
+
+Then the git objects themselves:
+
+    Photos$ git remote add offsite-git gcrypt::rsync://user@example.net:/srv/Photos.git/
+    Photos$ git annex sync offsite-git
+
+It is still unclear to me why those need to be separate. I first tried
+as a single repo with encryption, as [documented on the website](https://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/)
+but it turns out this has significant performance problems,
+e.g. [gcrypt remote: every sync uploads huge manifest](http://git-annex.branchable.com/bugs/gcrypt_remote__58___every_sync_uploads_huge_manifest/). So
+spwhitton suggested the above approach of splitting the repositories
+in two.
+
+What I don't understand is why git-annex can't simply encrypt the
+blobs and pass them down its regular remote structures like bare git
+repositories. Using rsync creates unnecessary overhead and complex
+URLs. The user interface on transfers is also far from intuitive:
+
+    $ git annex sync --content offsite-annex
+    commit
+    Sur la branche master
+    rien à valider, la copie de travail est propre
+    ok
+    copy 1969/12/31/20120415_009.mp4 (checking offsite-annex...) (to offsite-annex...)
+    sending incremental file list
+    840/
+    840/562/
+    840/562/GPGHMACSHA1--71995881d2ebb35a364558125d30999cf1c956d5/
+    840/562/GPGHMACSHA1--71995881d2ebb35a364558125d30999cf1c956d5/GPGHMACSHA1--71995881d2ebb35a364558125d30999cf1c956d5
+          1,289,927 100%  199.82MB/s    0:00:00 (xfr#1, to-chk=0/5)
+    ok
+
+Parallel transfers also don't have progress information. It really
+feels like encryption is a second-class citizen here. I also feel it
+will be rather difficult to reconstruct this repository from scratch,
+and an attempt will need to be made before we feel confident of our
+restore capacities.
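+
+In the meantime, a cheap, non-destructive sanity check of what the
+encrypted remote actually holds (assuming the remote is configured as
+above) is:
+
+    git annex fsck --from offsite-annex --fast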
+
+### Encrypted repos restore procedure
+
+To access the files in a bare-metal restore, first the OpenPGP keyring
+needs to be extracted from somewhere of course. Then a blank git
+repository is created:
+
+    git init Photos
+
+And the first git remote is added and fetched:
+
+    git remote add origin gcrypt::rsync://user@example.net:/srv/Photos.git/
+    git fetch origin
+    git merge origin/master
+
+Then the object store is added and fetched:
+
+    git annex enableremote offsite-annex type=rsync rsyncurl=user@example.net:/srv/Photos.annex/ encryption=hybrid keyid=8DC901CE64146C048AD50FBB792152527B75921E
+    git annex get --from offsite-annex
+
+The first line is critical: it uses `enableremote` because `initremote`
+might create a new encryption key instead of reusing the existing one.
+
+### References
+
+Borg:
 
  * [borg append-only-mode notes](https://borgbackup.readthedocs.io/en/stable/usage/notes.html#append-only-mode)
  * script inspired by [the borg local deployment notes](https://borgbackup.readthedocs.io/en/stable/deployment/automated-local.html)
+
+Once we figure out git-annex, the following pages need to be updated:
+
+ * [encryption](https://git-annex.branchable.com/encryption/)
+ * [tips](https://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/)
+ * [gcrypt remote](https://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/)
+

start working on article 4
diff --git a/blog/container-isolation.mdwn b/blog/container-isolation.mdwn
index 5c0dc951..fb0ed8e1 100644
--- a/blog/container-isolation.mdwn
+++ b/blog/container-isolation.mdwn
@@ -1,7 +1,7 @@
 containers isolation
 --------------------
 
-tim auclair. works on the node team, forcus on security
+Tim Allclair. works on the node team, focus on security
 
 lead a discussion on container isolation. just ask a bunch of
 question.
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 83ab3380..1d6044d2 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -8,107 +8,134 @@ https://www.youtube.com/watch?v=Jbqxsli2tRw
 overlaps:
 https://kccnceu18.sched.com/event/Dqvz/horizontal-pod-autoscaler-reloaded-scale-on-custom-metrics-maciej-pytel-google-solly-ross-red-hat-intermediate-skill-level
 
-Justin Cormack, Nassim Eddequiouaq, Docker
 
-nassim: file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3380.JPG
-
-justin: file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3389.JPG
-
-making security controls understandable
-
-complicated subject
-
-should be simple with a set of garantees
-
-"unusable security is not security"
-
-all sorts of measures, we may know what they are from the
-documentation, may understand what they do. it's a lot of work to
-understand in detail what they do. people don't know how to configure
-it
-
-`docker run` has a ridiculous number of security-related options.
-
-capabilities example: there are 40, there can't be more than 64 so
-everything is lumped into SYS_ADMIN. some of them are understandable
-but others are in to too large boxes. `--cap-drop`
-
-namespaces `--pid=host` unclear what the security properties are, some
-combinations with caps will let you escape, some won't
-
-seccomp is an extremeely long json file. very difficult to write
-
-apparmor on ubuntu/debian distros, yet another totally different
-languages
-
-the idea is to do this in a better way
-
-something humans can understand possibly without even reading
-documentation
-
-very simple set of permissions on the commandline. user focuses on
-what makes sense at their level, they don't need to understand the
-interactions in kernel components or how it works. people don't care
-about that, they want to make sure their app is secure.
-
-we divided resources in domains that make sense:
-
- * network confined, user, proxy, full admin
- * security
- * host resources like host.devices
-
-behind the scenes, the platform handles all the translations into
-apparmor, seccomp, capabilities, MaskedPaths. abstracting the complex
-stuff is not the user's role.
-
-e.g. a confined network disallow all network operations *except*
-`AF_UNIX | AF_LOCAL` for example. this depends on what is supported by
-the platform.
-
-we shold leverage the knowledge the devs have on their apps so they
-can pass along info to the ops people that will run that in prod
-
-docker build (or other image builders) can take the image and
-entitlements should create the image and a security profile that is
-independent from the underlying machine
-
-then when you trust sign, the bundle is delivered as such to the end
-user. e.g. if i want nginx with the best sandboxing environment, nginx
-people are the best people to do that.
-
-mac/windows do that with their apps, we should do the same with our
-containers
-
-pull/run workflow does not change, but benefits automatically from the
-entitlements and get the expertise from the content publishers
-
-q: can we review the profile?
-
-goals:
-
- * best UX possible. people use the sec options and we want the
-   end-user not opt out of security config if they get perm denied
-
- * empower people who write the apps themselves. they have the
-   knowledge, so they need to be able to easily allow/deny permissions
-
- * create a standard that would be supported by most platforms, k8s,
-   swarm, other schedulers
-
- * deprecate the --privileged flag
- 
- * smallest privileges the app needs. this is not the case right now,
-   everyone works with defaults that work with 95% of the apps out
-   there. we give way too big attack surface. you allow syscalls that
-   are not necessary in the application
-
-stretch goals:
-
- * allow people to set custom entitlements. to be discussed
- 
- * API access control and service-to-service communication
-   control. this contain should only be able to talk to those kind of
-   services. e.g. on iOS this app can only talk to those daemons
+notes from last kubecon relevant here:
+
+ * [Docker proposal](https://github.com/moby/moby/issues/32801) from
+   april 2017
+ * [libentitlement](https://github.com/moby/libentitlement) is a
+   thing, but no dev since december
+
+Justin Cormack, Nassim Eddequiouaq, from Docker security dept.
+
+[![Justin Cormack](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3389.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/35)
+
+{intro}
+
+{With the goal of making security controls understandable.}
+
+The problem with container security
+-----------------------------------
+
+Cormack started by saying that container security is often seen as a
+"complicated subject" but that should be simple with a set of
+guarantees. Kubernetes should respect the motto: "unusable security is
+not security".
+
+Container security includes all sorts of measures, some of which we
+have [previously covered][{}]. Cormack presented what seemed like an
+endless list of measures: capabilities, seccomp, AppArmor, SELinux,
+namespaces, cgroups... He said that some people may know what they are
+from the documentation, but that it takes a lot of effort to actually
+understand in detail what they do. The result is that most developers
+don't know how to actually apply those mechanisms to make sure their
+containers are secure. He gave the example of the [`docker run`](https://docs.docker.com/engine/reference/commandline/run/)
+help page that has a "ridiculously large number of options" with about
+fifteen of those relevant to security configuration.
+
+Individual mechanisms are hard to understand and configure. He
+gave the example of capabilities that are controlled by `docker run
+--cap-drop`: there are about 40 possible values for that one specific
+option. Since there can't be more than 64 (because of the kernel's
+data structure), everything else is lumped into `CAP_SYS_ADMIN` that
+allows everything. He described some capabilities as understandable,
+but said that others end up in overly broad boxes. The kernel has fewer
+namespaces (7) than capabilities, yet he said that "it's very unclear
+for a general user what the security properties are". For example,
+"some combinations of capabilities and namespaces will let you escape
+from a container, and other ones don't". He described seccomp as an
+"extremely long JSON file" that could "usefully be even more
+complicated". Those files are obviously "very difficult to write".
+
+The list of such parameters to feed into the kernel could go on. The
+point is that while developers could sit down and write those policies
+for their application by hand, it's a real mess and makes their head
+explode. So instead developers run their containers in `--privileged`
+mode because it works, but then all those nice security mechanisms are
+thrown out the window.
+
+Introducing entitlements
+------------------------
+
+[![Nassim Eddequiouaq](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3381.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/34)
+
+There must be a better way. Eddequiouaq explained the idea is "to
+provide something humans can actually understand without diving into
+code or possibly even without reading documentation". The solution
+proposed is Docker {and hopefully Kubernetes} "entitlements": the
+ability for users to choose from a very simple set of permissions on
+the commandline. The point of the abstraction is that users don't need
+to understand the low-level security mechanisms or their interactions
+in the kernel. Eddequiouaq said: "people don't care about that, they
+want to make sure their app is secure."
+
+The resources are divided into meaningful domains, for example if the
+network should be confined, proxied, or with full access. Similar
+properties are defined for "security" or "host resources" (like device
+files). {Behind the scenes, the platform (e.g. Docker or Kubernetes
+runtimes) translate those into the security mechanisms available. This
+means the actual effect will vary between platforms, depending on the
+supported security mechanisms.} For example, a confined network access
+might mean a seccomp filter blocking all networking-related system

(Diff truncated)
add another backup todo
diff --git a/services/backup.mdwn b/services/backup.mdwn
index bf873e71..ae4d62f4 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -299,9 +299,18 @@ Remaining work on git-annex:
 
  3. resync everything again (in progress)
 
- 3. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/) (blocker: [error
+ 4. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/) (blocker: [error
     while setting up gcrypt remote](https://git-annex.branchable.com/bugs/gcrypt_repository_not_found/))
 
+ 5. restricted shell, see [git-annex-shell](https://git-annex.branchable.com/git-annex-shell/):
+
+        command="GIT_ANNEX_SHELL_LIMITED=true git-annex-shell -c \"$SSH_ORIGINAL_COMMAND\"",restrict ssh-rsa AAAAB3NzaC1y[...] user@example.com
+
+    `GIT_ANNEX_SHELL_DIRECTORY` would be useful, but we have multiple
+    repositories we want to allow, and that, if I read
+    `CmdLine.GitAnnexShell.Checks.checkDirectory` correctly, is not
+    pattern-based but an exact match (using `equalFilePath`).
+
 4. make repositories [append-only](https://git-annex.branchable.com/todo/append-only_mode/?updated), not currently supported
     by git-annex
 

update status with git-annex
diff --git a/services/backup.mdwn b/services/backup.mdwn
index 262780b3..bf873e71 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -295,13 +295,15 @@ Remaining work on git-annex:
  1. switch git-annex remotes and borg repo to remote server when drive
     is installed (done)
 
- 2. resync all repos and enable in script
+ 2. enable sync in script (done)
 
- 2. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/)
+ 3. resync everything again (in progress)
 
- 3. make repositories made "append-only" somehow. right now there's
-    only [this post](http://git-annex.branchable.com/todo/git-hook_to_sanity-check_git-annex_branch_pushes/) which discusses the possibility, and a
-    proof-of-concept hook for the [Isuma project](https://isuma-media-players.readthedocs.io/en/latest/design.html#security-issues)
+ 3. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/) (blocker: [error
+    while setting up gcrypt remote](https://git-annex.branchable.com/bugs/gcrypt_repository_not_found/))
+
+ 4. make repositories [append-only](https://git-annex.branchable.com/todo/append-only_mode/?updated), not currently supported
+    by git-annex
 
 This is how the git-annex repositories were setup at first:
 

update status
diff --git a/services/backup.mdwn b/services/backup.mdwn
index 5a4e4638..262780b3 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -292,7 +292,10 @@ Remaining work on borg:
 
 Remaining work on git-annex:
 
- 1. switch git-annex remotes and borg repo to remote server when drive is installed
+ 1. switch git-annex remotes and borg repo to remote server when drive
+    is installed (done)
+
+ 2. resync all repos and enable in script
 
  2. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/)
 

move notes from offsite script to docs
diff --git a/services/backup.mdwn b/services/backup.mdwn
index 86141ba8..5a4e4638 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -235,3 +235,83 @@ Web: install apache2 + restore wiki.
    console [1.25 GiB RAM, 15 GiB Disk](https://billing.prgmr.com/index.php/order/main/packages/xen/?group_id=10)
 
  * Gandi: 4$/mth 256MiB RAM, 3GB disk
+
+Offsite procedures
+------------------
+
+A new offsite backup system was created. Previously, it was a manual
+process: bring the drives back to the server, pop them in a SATA
+enclosure, start the backup script by hand, wait, return the drives to
+the offsite location. This "pull" configuration had the advantage of
+being resilient against an attacker wanting to destroy all data, but
+the manual process meant the backups were never done as often as they
+should have.
+
+A new design based on borg and git-annex assumes a remote server
+online that receives the backups (a "push" configuration). The goal is
+to setup the backup in "append-only" mode so that an attacker is
+limited in its capacity to destroy stuff on the server.
+
+A first sync was done locally to bootstrap the dataset. This was
+harder than expected because the external enclosure had an older SATA
+controller that didn't support the 8TB drive (it was detected as 2TB)
+so I had to connect it in my workstation instead (an Intel NUC, which
+meant a tangled mess).
+
+Remaining work on borg:
+
+ 1. decide what to do with `/var/log` (currently excluded because we
+    want lower retention on those)
+
+ 2. prune policies, skipped for now because [incompatible with
+    append-only]( https://github.com/borgbackup/borg/issues/2251)
+
+ 3. enable crypto:
+ 
+    a. change passphrase
+    b. include it in script here
+    c. include a GnuPG symmetric encrypted copy of the pass on the offsite disk
+
+    Note: this approach should work, but needs a full shell when the
+    key is changed, so it is fundamentally [incompatible with
+    restricted shell provider](https://github.com/borgbackup/borg/issues/3585#issuecomment-362110929)
+
+ 4. set append-only mode on server:
+
+        borg config PATH append_only 1
+
+ 5. somehow make sure the server is called with the following (see the
+    sketch after this list):
+
+        borg serve --append-only
+
+ 5. test full run again
+
+ 6. switch to restricted shell, remove on-disk SSH keys from `authorized_keys`
+
+ 7. document this in the borg documentation itself?
+
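+Regarding items 5 and 6 above, a minimal sketch of how the server side
+could enforce append-only mode through a forced command in
+`authorized_keys` (path and key are placeholders):
+
+    command="borg serve --append-only --restrict-to-path /srv/backup",restrict ssh-ed25519 AAAA... backup@client
+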
+Remaining work on git-annex:
+
+ 1. switch git-annex remotes and borg repo to remote server when drive is installed
+
+ 2. add Photos repo with [git-annex encryption](http://git-annex.branchable.com/tips/fully_encrypted_git_repositories_with_gcrypt/)
+
+ 3. make repositories made "append-only" somehow. right now there's
+    only [this post](http://git-annex.branchable.com/todo/git-hook_to_sanity-check_git-annex_branch_pushes/) which discusses the possibility, and a
+    proof-of-concept hook for the [Isuma project](https://isuma-media-players.readthedocs.io/en/latest/design.html#security-issues)
+
+This is how the git-annex repositories were setup at first:
+
+    for r in  audiobooks books espresso incoming mp3 playlists podcast roms video; do 
+        git init /mnt/$r
+        git -C /srv/$r remote add offsite /mnt/$r
+        git -C /srv/$r annex sync
+        git -C /srv/$r annex wanted offsite standard
+        git -C /srv/$r annex group offsite backup
+        git -C /srv/$r annex sync --content
+    done
+
+References:
+
+ * [borg append-only-mode notes](https://borgbackup.readthedocs.io/en/stable/usage/notes.html#append-only-mode)
+ * script inspired by [the borg local deployment notes](https://borgbackup.readthedocs.io/en/stable/deployment/automated-local.html)

another smd migration procedure
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 8b75814a..ba957ab7 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -43,14 +43,14 @@ Migration procedure
         server$ cp -a Maildir/ Maildir-smd/
         client$ cp -a Maildir/Anarcat/ Maildir-smd/
 
- 7. rename the `INBOX` folder on the client. the problem here is that
+ 5. rename the `INBOX` folder on the client. the problem here is that
      the `INBOX` folder exists locally (thanks to offlineimap) and not
      remotely (thanks to dovecot). this was reported in [bug #7](https://github.com/gares/syncmaildir/issues/7)
      and it seems the workaround might be, on the client:
      
         client$ mv Maildir-smd/INBOX/{cur,new,tmp} Maildir-smd/ && rmdir Maildir-smd/INBOX/
 
- 5. run `smd-check-conf` repeatedly until it stops complaining and
+ 6. run `smd-check-conf` repeatedly until it stops complaining and
     looks sane. steps taken to cleanup remote directory:
 
     * removed top-level stray folders
@@ -70,7 +70,7 @@ Migration procedure
       
           server$ mkdir Maildir-smd-notmuch && mv Maildir-smd/.notmuch/{hooks,xapian,muchsync} Maildir-smd-notmuch
 
- 6. when the config looks sane, the next step is to [convert the
+ 7. when the config looks sane, the next step is to [convert the
     folder away from OfflineIMAP idiosyncracies](https://github.com/gares/syncmaildir#migration-from-offlineimap), particularly
     removing the `X-OfflineIMAP` header. there I have found that the
     suggested commandline:
@@ -177,6 +177,38 @@ How should a clean `rsync` based bootstrap be performed?
 It would be nice if `notmuch insert` could be used to deliver the
 emails locally instead of having to rescan the whole database.
 
+Clean-slate procedure
+=====================
+
+This is an alternative to the above procedure that just copies the
+files over and starts from a clean slate. Considering we lose
+compatibility with OfflineIMAP anyway, it seems like a much simpler
+procedure at no extra cost (except we need to copy files over).
+
+The layout of the files *will* change to something a little more
+obscure, which will require some fixes in notmuch tagging scripts, but
+that will actually simplify things as we reduce the deltas between the
+machines.
+
+ 1. run step 1-3 in first procedure (configuration, one-time, but
+    remove translators)
+
+ 2. create a copy on remote server:
+
+        ssh anarc.at cp -a Maildir Maildir-smd
+
+ 3. strip offlineimap headers on remote:
+ 
+        ssh anarc.at "~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log"
+
+ 4. synchronize the Maildir folder:
+ 
+        ssh anarc.at "tar cf - Maildir-smd/ | pv -s 13G | gzip -c" | tar xf -
+
+ 5. run `smd-check-conf` (step 5 above) to see if things are sane
+
+ 6. run `smd-pull`, `smd-push`, etc. step 10 and following above.
+
 Performance comparison
 ======================
 

relink the architecture diagram
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index aed0041f..68e53dd1 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -64,6 +64,8 @@ or [Google's
 Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md)
 to store samples, which made deploying an HPA challenging.
 
+![Architecture diagram](https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/instrumentation/monitoring_architecture.png)
+
 In late 2016, the "autoscaling special interest group" ([SIG
 autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling))
 decided that the pipeline needed a redesign that would allow scaling

link to local image
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 087dda5e..aed0041f 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -30,14 +30,14 @@ implements the standardized APIs.
 The old and new autoscalers
 ---------------------------
 
+[![Frederic Branczyk](https://photos.anarc.at/events/kubecon-eu-2018/DSCF3333.JPG)](https://photos.anarc.at/events/kubecon-eu-2018/#/27)
+
 Branczyk first covered the history of the autoscaler architecture and
 how it has evolved through time. Kubernetes, since version 1.2, features
 a horizontal pod autoscaler (HPA), which dynamically allocates resources
 depending on the detected workload. When the load becomes too high, the
 HPA increases the number of
 [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/)
-[![\[Fréderic
-Branczyk\]](https://static.lwn.net/images/conf/2018/kubecon/FredericBranczyk-sm.jpg "Fréderic Branczyk")](https://lwn.net/Articles/754155/)
 replicas and, when the load goes down again, it removes superfluous
 copies. In the old HPA, a component called
 [Heapster](https://github.com/kubernetes/heapster) would pull usage

last autoscaling corrections, article online
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 7463f27c..087dda5e 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -1,4 +1,4 @@
-[[!meta title="Autoscaling Kubernetes workloads"]]
+[[!meta title="Autoscaling for Kubernetes workloads"]]
 \[LWN subscriber-only content\]
 -------------------------------
 
@@ -13,19 +13,19 @@ variable demands placed on the system. Actually implementing that
 scaling can be a challenge, though. During [KubeCon + CloudNativeCon
 Europe
 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/),
-Frederic Branczyk from CoreOS (now part of RedHat) held a packed
+Frederic Branczyk from CoreOS (now part of Red Hat) held a packed
 [session](https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level)
 to introduce a standard and officially recommended way to scale
 workloads automatically in [Kubernetes](https://kubernetes.io/)
 clusters.
 
-Kubernetes has had an autoscaler since the earlier days but only
-recently did the community implement a more flexible and extensible
-mechanism to make decisions on when to add more resources to fulfill
-workload requirements. The new API integrates not only the
-[Prometheus](https://prometheus.io) project popular in Kubernetes
-deployments, but also any arbitrary monitoring vendor that implements
-the standardized APIs.
+Kubernetes has had an autoscaler since the early days, but only recently
+did the community implement a more flexible and extensible mechanism to
+make decisions on when to add more resources to fulfill workload
+requirements. The new API integrates not only the
+[Prometheus](https://prometheus.io) project, which is popular in
+Kubernetes deployments, but also any arbitrary monitoring system that
+implements the standardized APIs.
 
 The old and new autoscalers
 ---------------------------
@@ -46,7 +46,7 @@ monitoring daemon and the HPA controller would then scale workloads up
 or down based on those metrics.
 
 Unfortunately, the controller would only make decisions based on CPU
-utilization, even if Heapster provides [other
+utilization, even though Heapster provides [other
 metrics](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md)
 like disk, memory, or network usage. According to Branczyk, while in
 theory any workload can be converted to a CPU-bound problem, this is an
@@ -69,12 +69,12 @@ autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling
 decided that the pipeline needed a redesign that would allow scaling
 based on arbitrary metrics from external monitoring systems. The result
 is that Kubernetes 1.6 shipped with a new API specification defining how
-the autoscaler integrates with those systems. To avoid mistakes made in
-Heapster, the API is only a specification without an implementation, so
-development is not slowed down by a new component. This shifts
-responsibility of maintenance to the monitoring vendors: instead of
-"dumping" their glue code in Heapster, vendors now have to maintain
-their own adapter conforming to a well-defined API to get
+the autoscaler integrates with those systems. Having learned from the
+Heapster experience, the developers specified the new API, but did not
+implement it for any specific system. This shifts responsibility of
+maintenance to the monitoring vendors: instead of "dumping" their glue
+code in Heapster, vendors now have to maintain their own adapter
+conforming to a well-defined API to get
 [certified](https://github.com/cncf/k8s-conformance).
 
 The new specification defines core metrics like CPU, memory, and disk
@@ -82,7 +82,7 @@ usage. Kubernetes provides a canonical implementation of those metrics
 through the [metrics
 server](https://github.com/kubernetes-incubator/metrics-server), a
 stripped down version of Heapster. The metrics server provides the core
-metrics required by Kubernetes so that scheduling, autoscaling and
+metrics required by Kubernetes so that scheduling, autoscaling, and
 things like `kubectl top` work out of the box. This means that any
 Kubernetes 1.8 cluster now supports autoscaling using those metrics out
 of the box: for example
@@ -140,19 +140,19 @@ The ultimate goal of the new API, however, is to support arbitrary
 metrics, through the [custom metrics
 API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics).
 This behaves like the core metrics, except that Kubernetes does not ship
-or define a set of custom metrics directly which is where vendors like
+or define a set of custom metrics directly, which is where systems like
 Prometheus come in. Branczyk demonstrated the
 [k8s-prometheus-adapter](https://github.com/directXMan12/k8s-prometheus-adapter),
 which connects any Prometheus metric to the Kubernetes HPA, allowing the
 autoscaler to add new pods to reduce request latency, for example. Those
-metrics are bound to Kubernetes objects (e.g. pod, node, etc) but an
+metrics are bound to Kubernetes objects (e.g. pod, node, etc.) but an
 "external metrics API" was also introduced in the last two months to
 allow arbitrary metrics to influence autoscaling. This could allow
 Kubernetes to scale up a workload to deal with a larger load on an
 external message broker service, for example.
 
 Here is an example of the custom metrics API pulling metrics from
-Prometheus to make sure that each pod handles at most 200 requests per
+Prometheus to make sure that each pod handles around 200 requests per
 second:
 
           metrics:
@@ -161,8 +161,8 @@ second:
               metricName: http_requests
               targetAverageValue: 200
 
-Here `http_requests` is a metrics exposed by the Prometheus server which
-looks at how many requests each pod is processing. To avoid putting to
+Here `http_requests` is a metric exposed by the Prometheus server which
+looks at how many requests each pod is processing. To avoid putting too
 much load on each pod, the HPA will then ensure that this number will be
 around a target value by spawning or killing pods as appropriate.
 
@@ -177,10 +177,10 @@ way in another group ([SIG
 instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation))
 to finish moving away from the older design.
 
-Another thing community is looking into is vertical scaling. Horizontal
-scaling is fine for certain workloads, like caching servers or
-application frontends, but database servers, most notably, are harder to
-scale by just adding more replicas: in this case what an autoscaler
+Another thing the community is looking into is vertical scaling.
+Horizontal scaling is fine for certain workloads, like caching servers
+or application frontends, but database servers, most notably, are harder
+to scale by just adding more replicas; in this case what an autoscaler
 should do is increase the *size* of the replicas instead of their
 numbers. Kubernetes supports this through the [vertical pod
 autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler)
@@ -195,9 +195,9 @@ workloads where there is only a single pod or a fixed number of pods
 like
 [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
 
-Branczyk ventured on a set of predictions for other improvements that
-could come down the pipeline. One issue he identified is that while, the
-HPA and VPA can scale *pods*, there is different [Cluster
+Branczyk gave a set of predictions for other improvements that could
+come down the pipeline. One issue he identified is that, while the HPA
+and VPA can scale *pods*, there is a different [Cluster
 Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
 (CA) that manages *nodes*, which are the actual machines running the
 pods. The CA allows a cluster to move pods between the nodes to remove
@@ -210,21 +210,21 @@ functionality: scaling a workload by giving it more resources.
 
 Another hope is that
 [OpenMetrics](https://github.com/RichiH/OpenMetrics/) will emerge as a
-standard for metrics across vendors: this seems to be well under way
-with Kubernetes already using the Prometheus library, which serves as a
-basis for the standard, and with commercial vendors like [Datadog
+standard for metrics across vendors. This process seems to be well under
+way with Kubernetes already using the Prometheus library, which serves
+as a basis for the standard, and with commercial vendors like [Datadog
 supporting the Prometheus
 API](https://www.datadoghq.com/blog/monitor-prometheus-metrics/) as
 well. Another area of possible standardization is the
 [gRPC](https://grpc.io/) protocol used in some Kubernetes clusters to
 communicate between microservices. Those endpoints can now expose
 metrics through "interceptors" that get executed before the request is
-passed to the application. One of those interceptor is the
+passed to the application. One of those interceptors is the
 [go-grpc-prometheus
 adapter](https://github.com/grpc-ecosystem/go-grpc-prometheus), which
 enables Prometheus to scrape metrics from any gRPC-enabled service. The
 ultimate goal is to have standard metrics deployed across an entire
-cluster, allowing the creation of reusable dashboards, alerts and
+cluster, allowing the creation of reusable dashboards, alerts, and
 autoscaling mechanisms in a uniform system.
 
 Conclusion
@@ -245,9 +245,8 @@ subsystems that Kubernetes has become.
 
 A [video of the talk](https://www.youtube.com/watch?v=VQFoc0ukCvI) and
 [slides](https://github.com/brancz/slides/blob/master/cloudnativecon-2018-copenhagen-autoscaling-with-prometheus/autoscale-your-kubernetes-workload-with-prometheus.pdf)
-\[PDF\] are available. As members of the related [SIG
-autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling),
-Marcin Wielgus and Solly Ross presented an
+\[PDF\] are available. SIG autoscaling members Marcin Wielgus and Solly
+Ross presented an
 [introduction](https://kccnceu18.sched.com/event/DrnS/sig-autoscaling-intro-marcin-wielgus-google-solly-ross-red-hat-any-skill-level)
 ([video](https://www.youtube.com/watch?v=oJyjW8Vz314)) and [deep
 dive](https://kccnceu18.sched.com/event/DroC/sig-autoscaling-deep-dive-marcin-wielgus-google-solly-ross-red-hat-intermediate-skill-leve)
@@ -258,6 +257,8 @@ Kubernetes autoscaling.
 \[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting
 my travel to the event.\]
 
+------------------------------------------------------------------------
+
 
 
 > *This article [first appeared][] in the [Linux Weekly News][].*
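
For my own reference, here is a minimal sketch (mine, not from the talk or
the article) of what a complete HPA manifest using that custom metrics
snippet could look like, assuming the k8s-prometheus-adapter is already
deployed and that a hypothetical `hello` Deployment exposes an
`http_requests` metric to Prometheus:

    # sketch only: apply a custom-metrics HPA and inspect it afterwards
    kubectl apply -f - <<'EOF'
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: hello-hpa-requests
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: hello
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metricName: http_requests
          targetAverageValue: 200
    EOF
    # show the current metric value and replica count as seen by the HPA
    kubectl get hpa hello-hpa-requests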

add short issue summary
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 07168ea5..8b75814a 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -226,3 +226,19 @@ A fairer loop would be based on `sleep 20` and would match better with
 the general OfflineIMAP loop frequency (`-k
 Account_Anarcat:autorefresh=0.1` means 6 seconds plus the ~13 seconds
 run time). Such a loop converges to about 5% extra CPU usage.
+
+Issue summary
+=============
+
+This is a summary of the issues reported upstream, already mentioned
+above, in chronological order:
+
+ * [offlineimap migration script might corrupt messages](https://github.com/gares/syncmaildir/issues/1) (fixed)
+ * [excluding subscribed folders](https://github.com/gares/syncmaildir/issues/2)
+ * [safely strip offlineimap headers](https://github.com/gares/syncmaildir/pull/3) (merged)
+ * [fails to exclude remote folder](https://github.com/gares/syncmaildir/issues/4)
+ * [should ignore symlinks in "mailboxes"](https://github.com/gares/syncmaildir/issues/5)
+ * [smd-pull/push --dry-run way too verbose](https://github.com/gares/syncmaildir/issues/6)
+ * [Maildir/INBOX exception](https://github.com/gares/syncmaildir/issues/7)
+
+Trivia: the first seven issues reported against SMD were all filed by me.

small style fixes
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 4941b8ac..07168ea5 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -1,13 +1,14 @@
-Migrating from OfflineIMAP to syncmaildir (SMD)
-===============================================
+[[!meta title="Migrating from OfflineIMAP to syncmaildir (SMD)"]]
 
 I tried to follow the official procedure to migrate from OfflineIMAP
 to SMD. I hit some difficulties, which I documented in upstream
 issues. What follows is the detailed test procedure I followed to test
-the synchronization.
+the synchronization and notes about the process.
+
+[[!toc]]
 
 Migration procedure
--------------------
+===================
 
  1. run `smd-check-conf` to create a template configuration in
     `.smd/config.default` and configure it with:
@@ -144,7 +145,7 @@ Clearing the slate can be done by running this command on both ends:
     \rm -r .smd/workarea/ .smd/*.db.txt Maildir-smd/
 
 Observations
-------------
+============
 
 The migration was not exactly perfect: as documented above, lots of
 problems with exclude patterns, weird error messages, large dumps on
@@ -163,7 +164,7 @@ OfflineIMAP is over just using rsync to have a clean mirror to start
 with, avoiding all the messy rewrite rules logic.
 
 Open questions
---------------
+==============
 
 How should SMD be started in a session? User level systemd service?
 There is an "applet" that can be used, but that could be annoying. How
@@ -177,7 +178,7 @@ It would be nice if `notmuch insert` could be used to deliver the
 emails locally instead of having to rescan the whole database.
 
 Performance comparison
-----------------------
+======================
 
 OfflineIMAP is definitely slower. Here is a single run, although with
 a password input prompt that I estimate takes about one second:

performance analysis, open questions, almost complete procedure
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 1088a434..4941b8ac 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -1,8 +1,13 @@
 Migrating from OfflineIMAP to syncmaildir (SMD)
 ===============================================
 
-This incomplete procedure is the process I followed to migrate from
-OfflienIMAP to SMD. It is still incomplete.
+I tried to follow the official procedure to migrate from OfflineIMAP
+to SMD. I hit some difficulties, which I documented in upstream
+issues. What follows is the detailed test procedure I followed to test
+the synchronization.
+
+Migration procedure
+-------------------
 
  1. run `smd-check-conf` to create a template configuration in
     `.smd/config.default` and configure it with:
@@ -126,14 +131,97 @@ OfflienIMAP to SMD. It is still incomplete.
  
  12. pull again: 490 mails deleted? push again, no change. wtf...
 
- 13. create email remotely and locally, go to 5
-
- 15. if all looks well, pull then push
+ 13. create email remotely and locally, go to 10
 
- 16. if all looks well, smd-loop
+ 14. run `smd-loop` and hook into startup scripts? (TODO)
 
- 17. create restricted shell
+ 15. create restricted shell (TODO)
+ 
+ 16. call `notmuch new` in `~/.smd/hooks/post-pull.d/` (TODO)
 
-Clearing the slate:
+Clearing the slate can be done by running this command on both ends:
 
     \rm -r .smd/workarea/ .smd/*.db.txt Maildir-smd/
+
+Observations
+------------
+
+The migration was not exactly perfect: as documented above, lots of
+problems with exclude patterns, weird error messages, large dumps on
+the console, and scripts I had to rewrite. I also end up with
+duplicate emails in the process, something I generally try to
+avoid. Even if it's about 500 emails over ~200 000, it's still
+annoying.
+
+It might be better to start off with a clean slate and just rsync all
+files. The problem then, of course, is that the directory layout is
+completely changed and is now incompatible with OfflineIMAP
+forever. Of course, this is inevitable: the second we rename files and
+unmangle the headers to remove OfflineIMAP specific stuff, the folder
+cannot be reused, so it's unclear what the benefit of migrating from
+OfflineIMAP is over just using rsync to have a clean mirror to start
+with, avoiding all the messy rewrite rules logic.
+
+Open questions
+--------------
+
+How should SMD be started in a session? User level systemd service?
+There is an "applet" that can be used, but that could be annoying. How
+else should errors be reported? It does look simple enough and
+non-intrusive: by default it doesn't notify for new mail, which is
+good.
+
+How should a clean `rsync` based bootstrap be performed?
+
+It would be nice if `notmuch insert` could be used to deliver the
+emails locally instead of having to rescan the whole database.
+
+Performance comparison
+----------------------
+
+OfflineIMAP is definitely slower. Here is a single run, although with
+a password input prompt that I estimate takes about one second:
+
+    *** Finished account 'Anarcat' in 0:13
+    11.46user 2.96system 0:13.39elapsed 107%CPU (0avgtext+0avgdata 84760maxresident)k
+    0inputs+112outputs (0major+51610minor)pagefaults 0swaps
+
+That is with `postsynchook` disabled and running in "once" mode (`-k
+Account_Anarcat:postsynchook=true -o`); the hook (notmuch) takes about
+3 seconds on its own.
+
+SMD, on a snapshot of that mailbox about an hour old (so essentially
+the same):
+
+    $ time sh -c "smd-pull ; smd-push"
+    1.25user 0.82system 0:06.88elapsed 30%CPU (0avgtext+0avgdata 49180maxresident)k
+    0inputs+115240outputs (0major+89930minor)pagefaults 0swaps
+
+So OfflineIMAP is at least two times slower, 8 full seconds
+overall. It definitely feels slower and clunkier. Even better, to
+fetch new mail we actually only need `pull`, which takes even less
+time:
+
+    $ time smd-pull
+    0.29user 0.04system 0:03.90elapsed 8%CPU (0avgtext+0avgdata 31480maxresident)k
+    0inputs+57592outputs (0major+5400minor)pagefaults 0swaps
+
+Five times faster! And notice the much lower CPU usage.
+
+Server-side usage is harder to diagnose, but I couldn't "see" smd on
+the server side during `smd-pull` at all: maybe it just popped in and
+out of existence without me noticing with `top(1)`. But I definitely
+noticed Dovecot's `imap` process during the OfflineIMAP
+run. Circumstantial evidence from Prometheus monitoring shows an 18%
+CPU usage of Dovecot during fetches on marcos.
+
+Running `smd-pull; smd-push` on a `sleep 1` busy loop does make
+`mddiff`, `lua5.1` and `xdelta` show up in `top` eventually, and CPU
+usage *does* seem higher than OfflineIMAP then (22%) - but it's a bit
+of an unfair comparison, because the updates *are* running much
+faster.
+
+A fairer loop would be based on `sleep 20` and would match better with
+the general OfflineIMAP loop frequency (`-k
+Account_Anarcat:autorefresh=0.1` means 6 seconds plus the ~13 seconds
+run time). Such a loop converges to about 5% extra CPU usage.
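
A minimal sketch of such a loop (my own, not something SMD ships) could be
as simple as:

    #!/bin/sh
    # hypothetical polling loop: roughly matches the OfflineIMAP refresh
    # frequency discussed above, without the cost of a 1-second busy loop
    while true; do
        smd-pull && smd-push
        sleep 20
    done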

update instructions with latest test
diff --git a/services/mail/offlineimap2smd.mdwn b/services/mail/offlineimap2smd.mdwn
index 851a9147..1088a434 100644
--- a/services/mail/offlineimap2smd.mdwn
+++ b/services/mail/offlineimap2smd.mdwn
@@ -9,11 +9,11 @@ OfflienIMAP to SMD. It is still incomplete.
 
         SERVERNAME=smd-server-anarcat
         CLIENTNAME=curie-anarcat
-        MAILBOX_LOCAL=Maildir-smd/
-        MAILBOX_REMOTE=Maildir/
+        MAILBOX_LOCAL=Maildir-smd
+        MAILBOX_REMOTE=Maildir-smd
         TRANSLATOR_RL="smd-translate -m oimap-dovecot -d RL default"
         TRANSLATOR_LR="smd-translate -m oimap-dovecot -d LR default"
-        EXCLUDE="$MAILBOX_LOCAL/.notmuch/hooks/* Maildir/.notmuch/xapian/*"
+        EXCLUDE="$MAILBOX_LOCAL/.notmuch/hooks/* $MAILBOX_LOCAL/.notmuch/xapian/*"
 
  2. authenticate remote server:
 
@@ -31,9 +31,18 @@ OfflienIMAP to SMD. It is still incomplete.
     This will be useful later to configure the restricted shell
     account.
 
- 4. create the test maildir, takes about 30 seconds:
+ 4. create the test maildirs, takes about 2 minutes, on both the
+    client and the server:
  
-        cp -a Maildir/Anarcat/ Maildir-smd/
+        server$ cp -a Maildir/ Maildir-smd/
+        client$ cp -a Maildir/Anarcat/ Maildir-smd/
+
+ 7. rename the `INBOX` folder on the client. the problem here is that
+     the `INBOX` folder exists locally (thanks to offlineimap) and not
+     remotely (thanks to dovecot). this was reported in [bug #7](https://github.com/gares/syncmaildir/issues/7)
+     and it seems the workaround might be, on the client:
+     
+        client$ mv Maildir-smd/INBOX/{cur,new,tmp} Maildir-smd/ && rmdir Maildir-smd/INBOX/
 
  5. run `smd-check-conf` repeatedly until it stops complaining and
     looks sane. steps taken to cleanup remote directory:
@@ -49,6 +58,12 @@ OfflienIMAP to SMD. It is still incomplete.
       filed as [bug #4](https://github.com/gares/syncmaildir/issues/4). The `.notmuch` folder ignores are
       necessary because smd crashes on symlinks ([bug #5](https://github.com/gares/syncmaildir/issues/5)).
 
+    * the above fails with remote folder as `Maildir-smd`. just
+      removed the folders for now, but they are important and
+      shouldn't be completely destroyed!
+      
+          server$ mkdir Maildir-smd-notmuch && mv Maildir-smd/.notmuch/{hooks,xapian,muchsync} Maildir-smd-notmuch
+
  6. when the config looks sane, the next step is to [convert the
     folder away from OfflineIMAP idiosyncrasies](https://github.com/gares/syncmaildir#migration-from-offlineimap), particularly
     removing the `X-OfflineIMAP` header. there I have found that the
@@ -59,44 +74,66 @@ OfflienIMAP to SMD. It is still incomplete.
     was problematic: it could [corrupt email bodies](https://github.com/gares/syncmaildir/issues/1) as it works on
     complete messages, not just the headers. I wrote a generic [header
     stripping script](https://gitlab.com/anarcat/scripts/blob/master/strip_header) to work around that issue. to call it, use
-    this, which takes about a minute:
+    this, which takes about three minutes, on both local and remote
+    servers (because yes, the remote server also has OfflineIMAP
+    headers somehow):
 
-        syncmaildir-git-3c87388/misc/strip-header Maildir-smd/ >&2 2>&1 | pv -s 100000 -l > log
+        server$ ~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log
+        client$ ~/dist/syncmaildir/misc/strip-header Maildir-smd/ 2>&1 >&2 | pv -s 187000 -l > log
 
     contributed upstream as [PR #3](https://github.com/gares/syncmaildir/pull/3).
 
- 7. then the files need to be renamed to please SMD, using, which
-    takes about 3 minutes and generates ~88k renames:
+ 8. then the files need to be renamed to please SMD, using, which
+    takes about 3 minutes and generates ~140k renames (basically all
+    files get renamed):
 
-        smd-uniform-names
+        client$ smd-uniform-names -v
 
- 8. actually rename the files, using the script created, takes about 2
+ 9. actually rename the files, using the script created, takes about 2
     minutes:
 
-        sh -x smd-rename.sh | pv -l -s $(wc -l smd-rename.sh | cut -f1 -d ' ') > log-rename
+        client$ sh -x smd-rename.sh 2>&1 | pv -l -s $(wc -l smd-rename.sh | cut -f1 -d ' ') > log-rename
+
+ 10. first dry-run pull, takes about 4 minutes. about 80k emails are
+     missing, probably because SMD considers all folders, even if not
+     subscribed ([bug #2](https://github.com/gares/syncmaildir/issues/2)). all filenames to transfer are also
+     dumped to stdout, ouch! ([bug #6](https://github.com/gares/syncmaildir/issues/6)). at least logs are in
+     `~/.smd/log/client.default.log`
 
- 9. first dry-run pull, takes about 4 minutes. about 80k emails are
-    missing, probably because SMD considers all folders, even if not
-    subscribed ([bug #2](https://github.com/gares/syncmaildir/issues/2)). all filenames to transfer are also
-    dumped to stdout, ouch! ([bug #6](https://github.com/gares/syncmaildir/issues/6))
+         client$ smd-pull --dry-run
 
-        smd-pull --dry-run
+     update: after a full rerun, it turns out only 9k emails are
+     missing. after running the strip-header on both sides, only
+     ~600. but with s/^cp/^mv/, back to 1400. grml. after *another*
+     full rerun, the numbers are 401 new mails on pull.
+     
+     to get the count:
+     
+         grep stats::new-mails  /home/anarcat/.smd/log/client.default.log
 
- 10. first dry-run push - problem: 15k mails missing from remote?
+     ... grouped per folder:
+     
+         grep stats::mail-transferre /home/anarcat/.smd/log/client.default.log | sed -e 's/ , /\n/g' | sed 's#/cur/.*##' |  sort | uniq -c | sort -n
+
+ 11. first dry-run push - problem: 15k mails missing from remote?
      maybe because we're in dry-run mode? need to backup remote and
      test again. nope. there are genuine diffs there, e.g. git-annex
      folder totally different. maybe not subscribed?
 
-        smd-push --dry-run
+        client$ smd-push --dry-run
+    
+     result: 591 *more* duplicate emails.
+ 
+ 12. pull again: 490 mails deleted? push again, no change. wtf...
 
- 11. create email remotely and locally, go to 5
+ 13. create email remotely and locally, go to 5
 
- 12. make another tarball first? takes about 4 minutes, so cheap.
+ 15. if all looks well, pull then push
 
- 13. remove local folders removed remotely? or will pull take care of that?
+ 16. if all looks well, smd-loop
 
- 14. if all looks well, pull then push
+ 17. create restricted shell
 
- 15. if all looks well, smd-loop
+Clearing the slate:
 
- 16. create restricted shell
+    \rm -r .smd/workarea/ .smd/*.db.txt Maildir-smd/
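
Given the surprises with missing and duplicate mails in the dry-run steps
above, a crude sanity check I would add (my own sketch, not part of the
official migration procedure) is to compare raw message counts on both
ends before trusting a real pull and push:

    # total message count in the SMD-managed maildir; run on both
    # client and server, the numbers should roughly match
    find ~/Maildir-smd -type f \( -path '*/cur/*' -o -path '*/new/*' \) | wc -l

    # per-folder breakdown, useful to spot folders that were not synced
    find ~/Maildir-smd -type f -path '*/cur/*' | sed 's#/cur/.*##' | sort | uniq -c | sort -n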

ranting images
diff --git a/blog/kubecon-rant/home-depo-speed.png b/blog/kubecon-rant/home-depo-speed.png
new file mode 100644
index 00000000..507b218b
Binary files /dev/null and b/blog/kubecon-rant/home-depo-speed.png differ
diff --git a/blog/kubecon-rant/toaster-project.jpg b/blog/kubecon-rant/toaster-project.jpg
new file mode 100644
index 00000000..d5224ab9
Binary files /dev/null and b/blog/kubecon-rant/toaster-project.jpg differ
diff --git a/blog/kubecon-rant/usaf-cost-savings.png b/blog/kubecon-rant/usaf-cost-savings.png
new file mode 100644
index 00000000..ff83e950
Binary files /dev/null and b/blog/kubecon-rant/usaf-cost-savings.png differ

cheaper macro lens
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index d3e962a2..f5806719 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -192,7 +192,7 @@ Reference
  * [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("an extraordinary lens"),
    700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 460$ on kijiji
  * [60mm f/2.4 R Macro ø39mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf60mmf24_r_macro/), [Rockwell](https://kenrockwell.com/fuji/x-mount-lenses/60mm-f24.htm), [Photograph blog](http://www.photographyblog.com/reviews/fujifilm_xf_60mm_f2_4_r_review/),
-   425-700$ on kijiji
+   420-700$ on kijiji
  * [X100f ø49](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100f/) 1200-1600$ on kijiji
  * lens cap holder: [1](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi), [2](https://www.bhphotovideo.com/c/product/850525-REG/Sensei_ck_lp_Cap_Keeper_Plus_Lens.html), haven't found others
  * [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD. haven't looked at alternative brushes.

review rant
diff --git a/blog/kubecon-rant.mdwn b/blog/kubecon-rant.mdwn
index ea051855..5f8c8bb9 100644
--- a/blog/kubecon-rant.mdwn
+++ b/blog/kubecon-rant.mdwn
@@ -1,98 +1,108 @@
-This is a rant I wrote while attending Kubecon 2018.
+This is a rant I wrote while attending Kubecon 2018. I am not sure how
+else to frame this but as a deep discomfort I have with the way one of
+the most cutting edge projects in my community is moving, as a symptom
+of so many things wrong in society at large.
 
 Diversity and education
 -----------------------
 
-There is great talk about diversity at kubecon, and that is truly
-honorbale. One of the places where I have seen the best efforts towads
-that goal. Yet it is still one of the less diverse places i've ever
-been in: in comparison, pycon just "feels" more diverse. Those are not
-hard facts, true{...}
+There is great talk about diversity at Kubecon, and that is truly
+honorable. It's one of the places where I have seen the best efforts
+towards that goal. Yet it is still one of the less diverse places I've
+ever been in: in comparison, [Pycon](https://en.wikipedia.org/wiki/Python_Conference) just "feels" more diverse. I
+think it still says something about diversity efforts in our
+communities.
 
-4000 {?} white men: DSCF3307.JPG
-
-or around file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3481.JPG
+![4000 white men](DSCF3307.JPG)
 
 The truth is that contrary to programmer communities, devops and
-sysadmin communities' knowledge comes not from institutional eduction,
-but from self-learning. Even though i have spent years studying in
-university, the true day to day functional knowledge I have used for
-over a decade in a hosting shop came not from the university, but from
-my experiments done late at home on my home computer, first on Mac
-inherited by family, then from the source code of FreeBSD passed down
-as a rune from an uncle. And sure, my programming skills were useful
-there too, but they were acquired *before* going to university, when
-even there I was expected to learn languages such as C in-between
-sessions.
-
-diversity program: DSCF3502.JPG
-
-The truth is that the real solution to diversity in computing
-communities not only comes from a change in culture in the
-communities, but real investments in society at large. The large
-mega-corporations subsidizing those events get a lot of good press out
-of those diversity sponsorship, but that is nothing compared to how
-much they are spared in tax savings by the states. Amzn's Jeff Bezos
-earned billions of dollars only in market cap this year. 
-
-https://www.jwz.org/blog/2018/05/amazon-puts-7000-jobs-on-hold-because-of-a-tax-that-would-help-seattles-homeless-population/
-
-Google,
-Facebook, Microsoft, and Apple all evade taxes like the best
-gansters. And this is important because society will change through
-eduction. This is how more traditional STEM sectors like engineering
-and medecine have changed: women, minorities and lower class citizens
-were finally allowed into schools through grants, yes, but also simply
-because schools were made more accessible in the 1970s. That trend is
-reversing now, but not everywhere and the impact is long-lasting. By
-evading taxes, those companies are keeping the state from extremely
-large sources of revenues that could and should be used to level the
-playing field through affordable education.
-
-Hell, even *any* education in the field would help. There *are* no
-relevant sysadmin eduction curriculum right now. Sure you can become a
-Cisco CCNA or Microsoft certified engineer through private
-courses. Anyone who's been seriously involved in running a Linux
-cluster (and that's most of what's relevant out there these days)
-knows that is just a scam: that will only get you so far, never to
-executive positions, only in base "remote hands monkey" position.
-
-Move fast and break things
---------------------------
-
-But this would require the field to slow down so that any gathered
-knowledge would have time to trickle down into education
-curriculum. Configuration management is actually quite old, but
-because the changes in tooling are so far, any curriculum built in the
-last decade (or more of course) is made mostly irrelevant every year
-or so. Puppet release a major new version every $year? Kubernetes is
-barely 2 years old.
-
-Here at Kubecon, the motto "move fast and break things" is everywhere,
-lambasting how Home Depot, to compete with Amazon to sell more
-hammers, is now deploying to prod multiple times a day, solving a
-problem that wasn't there in some new absurd faith that deploying to
-prod will naturally make people more happy and Home Depot sell more
-hammers. And that's after telling us that Cloud Foundry helped the
-USAF save 600M$ by moving their databases to the cloud. No one seems
-to be bothered by the idea that one of the largest and most powerful
-military bodies in existence would move private data into a private
-cloud out of the control of any government. It's the name of the game,
-at Kubecon.
-
-c.f. page 21-22 in 
-https://schd.ws/hosted_files/kccnceu18/1f/AKearns_KubeCon_Final%20%281%29.pdf
-
-In his [keynote](https://kccnceu18.sched.com/event/Duok/keynote-cncf-20-20-vision-alexis-richardson-founder-ceo-weaveworks), Richardson presents [the toaster project](http://www.thetoasterproject.org/) is
-given as an example of what *not* to do: we know now that we need
-reusable components, du-uh, obviously missing the fundamental point of
-the experiment, which was aimed at showing how civilization is
-fragile... We depend on layers upon layers of quickly replacable
-components. This includes people too, and not just white dudes in
-california, but also workers in terrible conditions that were
-outsourced out of california decades ago. It depends on precious
-metals and the mines of Africa and the specialized labour and
-intricate knowledge of the factories in Asia.
+sysadmin communities' knowledge comes not from institutional
+education, but from self-learning. Even though i have spent years
+studying in university, the day to day knowledge I needed in my work
+as a sysadmin for about twenty years now came not from the university,
+but from my experiments on my home computer network, done late at
+home. This was first on the family Macintosh, then on the FreeBSD
+source code passed down as a magic word from an uncle, and finally
+through Debian,
+consecrated as the leftist's true computing way. And sure, my
+programming skills were useful there too, but I acquired those
+*before* going to university: even there teachers expected students to
+learn programming languages (such as C!) in-between sessions.
+
+![Diversity program](DSCF3502.JPG)
+
+The real solutions to the lack of diversity in our communities come
+not only from a change in culture, but from real investments in society
+at large. The large mega-corporations subsidizing those events get a
+lot of good press out of those diversity sponsorships, but that is
+nothing compared to how aggressively they evade taxes in
+their home states. As an example, [Amazon recently put 7000 jobs on
+hold because of a tax the city of Seattle wanted to impose on
+corporations to help the homeless population](https://www.jwz.org/blog/2018/05/amazon-puts-7000-jobs-on-hold-because-of-a-tax-that-would-help-seattles-homeless-population/). Google, Facebook,
+Microsoft, and Apple all evade taxes like gangsters. This is important
+because society will change through education, and that costs
+money. Education is how more traditional STEM sectors like engineering
+and medicine have changed: women, minorities, and poorer populations
+were finally allowed into schools after epic social struggles in the
+1970s finally yielded somewhat more accessible education. That trend
+is obviously reversing now, but not everywhere and such impacts are
+long-lasting. By evading taxes, those companies are keeping the state
+from large sources of revenues that could level the playing field
+through affordable education.
+
+Hell, *any* education in the field would help. There is basically no
+sysadmin education curriculum right now. Sure, you can follow the
+[CCNA](https://en.wikipedia.org/wiki/CCNA) (Cisco) or [MCSE](https://en.wikipedia.org/wiki/Microsoft_Certified_Professional) (Microsoft) training programs, which
+are, by the way, private courses. But anyone who's been seriously
+involved in running any serious computing infrastructure knows those
+are a scam: that will only get you so far, rarely to executive
+positions, and only in basic "remote hands monkey" positions.
+
+Velocity
+--------
+
+Providing an education curriculum would require the operations field
+to slow down so that any gathered knowledge would have time to settle
+and trickle down into a curriculum. Configuration management is
+[pretty old](https://en.wikipedia.org/wiki/Configuration_management#History), but because the changes in tooling are so fast, any
+curriculum built in the last decade (or more) is now
+irrelevant. [Puppet](https://en.wikipedia.org/wiki/Puppet_%28software%29) publishes a new release [every 6 months](https://puppet.com/misc/puppet-enterprise-lifecycle),
+Kubernetes is barely 4 years old now, and is changing rapidly with a
+[~3 month release schedule](https://gravitational.com/blog/kubernetes-release-cycle/).
+
+![Home Depot ecstasy](kubecon-rant/home-depot-speed.png)
+
+Here at Kubecon, Mark Zuckerberg's mantra of "move fast and break
+things" is everywhere. We call it "[velocity](https://www.cncf.io/blog/2017/06/05/30-highest-velocity-open-source-projects/)": where you are going
+does not matter as much as how fast you're going there. At one of the
+many [keynotes](https://kccnceu18.sched.com/event/E5wI/keynote-shaping-the-cloud-native-future-abby-kearns-executive-director-cloud-foundry-foundation-slides-attached), Abby Kearns boasted about how Home Depot, to try and
+sell more hammers than Amazon, is now deploying code to production
+multiple times a day. We're solving a problem that wasn't there, in
+some new absurd faith that code deployments will naturally make people
+happier and Home Depot sell more hammers. And that's after telling us
+that Cloud Foundry helped the USAF save 600M$ by moving their
+databases to the cloud. No one seems bothered by the idea that the
+most powerful military in existence would move state secrets into a
+private cloud, out of the control of any government. It's the name of
+the game, at Kubecon.
+
+![USAF saves money](kubecon-rant/usaf-cost-savings.png)
+
+In his [keynote](https://kccnceu18.sched.com/event/Duok/keynote-cncf-20-20-vision-alexis-richardson-founder-ceo-weaveworks), Richardson presented [the toaster project](http://www.thetoasterproject.org/) as
+an example of what *not* to do. He should have used "reusable
+components", du-uh, obviously missing the fundamental point of the
+experiment, which was to show how civilization is fragile... We depend
+on layers upon layers of disposable components. In this totalitarian
+view, people are also disposable components, and not just the white
+dudes in California, but also the workers outsourced out of the Americas
+decades ago: it depends on precious metals and the miners of Africa,
+the specialized labour of the factories and intricate knowledge of the
+factory workers in Asia, the coal miners, the flooded forests of the
+First Nations powering this terrifying surveillance machine.
+
+Privilege
+---------
+
+![The toaster experiment](kubecon-rant/toaster-project.jpg)
 
 > "Left to his own devices he couldn’t build a toaster. He could just

(Diff truncated)
updates from LWN
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index b50425d3..7463f27c 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -1,54 +1,77 @@
-Autoscaling  Kubernetes Workloads
-=================================
-
-During [KubeCon + CloudNativeCon Europe 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/), [Frederic
-Branczyk](https://github.com/brancz) from CoreOS (now part of RedHat) held a packed
-[session](https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level) to introduce a standard and officially recommended way to
-scale workloads automatically in [Kubernetes](https://kubernetes.io/) clusters. Kubernetes
-has had an autoscaler since the earlier days but only recently did the
-community implement a more flexible and extensible mechanism to make
-decisions on when to add more resources to fulfill workload
-requirements. The new API integrates not only the [Prometheus](https://prometheus.io)
-project popular in Kubernetes deployments, but also any arbitrary
-monitoring vendor that implements the standardized APIs.
+[[!meta title="Autoscaling Kubernetes workloads"]]
+\[LWN subscriber-only content\]
+-------------------------------
+
+[[!meta date="2018-05-10T00:00:00+0000"]]
+[[!meta updated="2018-05-10T13:17:48-0400"]]
+
+[[!toc levels=2]]
+
+Technologies like containers, clusters, and Kubernetes offer the
+prospect of rapidly scaling the available computing resources to match
+variable demands placed on the system. Actually implementing that
+scaling can be a challenge, though. During [KubeCon + CloudNativeCon
+Europe
+2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/),
+Frederic Branczyk from CoreOS (now part of RedHat) held a packed
+[session](https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level)
+to introduce a standard and officially recommended way to scale
+workloads automatically in [Kubernetes](https://kubernetes.io/)
+clusters.
+
+Kubernetes has had an autoscaler since the earlier days but only
+recently did the community implement a more flexible and extensible
+mechanism to make decisions on when to add more resources to fulfill
+workload requirements. The new API integrates not only the
+[Prometheus](https://prometheus.io) project popular in Kubernetes
+deployments, but also any arbitrary monitoring vendor that implements
+the standardized APIs.
 
 The old and new autoscalers
 ---------------------------
 
-![Frederic Branczyk](https://paste.anarc.at/DSCF3333.JPG)
-
 Branczyk first covered the history of the autoscaler architecture and
-how it evolved through time. Kubernetes, since version 1.2, features a
-Horizontol Pod Autoscaler (HPA) which dynamically allocates resources
-depending on the detected workload. When the load becomes too high,
-the HPA increases the number of [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) replicas and when the load
-goes down again, it removes superfluous copies. In the old HPA, a
-component called [Heapster](https://github.com/kubernetes/heapster) would pull usage metrics from the
-internal [cAdvisor](https://github.com/google/cadvisor) monitoring daemon and the HPA controller would
-then scale workloads up or down based on those metrics. Unfortunately,
-the controller would only make decisions on CPU utilization, even if
-Heapster provides [other metrics](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md) like disk, memory, or network
-usage. According to Branczyk, while in theory any workload can be
-converted to a CPU-bound problem, this is an inconvenient limitation,
-especially to respect higher-level Service Level Agreements (SLA). For
-example, an arbitrary SLA like "process 95% of requests within 100
-milliseconds" would be difficult to represent as CPU usage. Another
-limitation is that the [Heapster API](https://github.com/kubernetes/heapster/blob/master/docs/model.md) was only loosely defined and
-never officially adopted as part of the larger Kubernetes
-API. Heapster also required the help of a storage backend like
-[InfluxDB](https://github.com/kubernetes/heapster/blob/63f40593ec88a5d56dce7b23b56b1a44c5431184/docs/influxdb.md) or [Google's Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md) to store samples, which
-made deploying an HPA challenging.
-
-![Architecture diagram](https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/instrumentation/monitoring_architecture.png)
+how it has evolved through time. Kubernetes, since version 1.2, features
+a horizontal pod autoscaler (HPA), which dynamically allocates resources
+depending on the detected workload. When the load becomes too high, the
+HPA increases the number of
+[pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/)
+[![\[Fréderic
+Branczyk\]](https://static.lwn.net/images/conf/2018/kubecon/FredericBranczyk-sm.jpg "Fréderic Branczyk")](https://lwn.net/Articles/754155/)
+replicas and, when the load goes down again, it removes superfluous
+copies. In the old HPA, a component called
+[Heapster](https://github.com/kubernetes/heapster) would pull usage
+metrics from the internal [cAdvisor](https://github.com/google/cadvisor)
+monitoring daemon and the HPA controller would then scale workloads up
+or down based on those metrics.
+
+Unfortunately, the controller would only make decisions based on CPU
+utilization, even if Heapster provides [other
+metrics](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md)
+like disk, memory, or network usage. According to Branczyk, while in
+theory any workload can be converted to a CPU-bound problem, this is an
+inconvenient limitation, especially when implementing higher-level
+service level agreements. For example, an arbitrary agreement like
+"process 95% of requests within 100 milliseconds" would be difficult to
+represent as a CPU-usage problem. Another limitation is that the
+[Heapster
+API](https://github.com/kubernetes/heapster/blob/master/docs/model.md)
+was only loosely defined and never officially adopted as part of the
+larger Kubernetes API. Heapster also required the help of a storage
+backend like
+[InfluxDB](https://github.com/kubernetes/heapster/blob/63f40593ec88a5d56dce7b23b56b1a44c5431184/docs/influxdb.md)
+or [Google's
+Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md)
+to store samples, which made deploying an HPA challenging.
 
 In late 2016, the "autoscaling special interest group" ([SIG
-autoscaling][]) decided that the pipeline needed a redesign which
-would allow scaling not only on CPU usage, but also arbitrary metrics
-from external monitoring systems. The result is that Kubernetes 1.6
-shipped with a new API specification defining how the autoscaler
-integrates with external monitoring systems. To avoid mistakes made in
-Heapster, the API is only a specification without an implementation,
-so development is not slowed down by a new component. This shifts
+autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling))
+decided that the pipeline needed a redesign that would allow scaling
+based on arbitrary metrics from external monitoring systems. The result
+is that Kubernetes 1.6 shipped with a new API specification defining how
+the autoscaler integrates with those systems. To avoid mistakes made in
+Heapster, the API is only a specification without an implementation, so
+development is not slowed down by a new component. This shifts
 responsibility of maintenance to the monitoring vendors: instead of
 "dumping" their glue code in Heapster, vendors now have to maintain
 their own adapter conforming to a well-defined API to get
@@ -56,209 +79,190 @@ their own adapter conforming to a well-defined API to get
 
 The new specification defines core metrics like CPU, memory, and disk
 usage. Kubernetes provides a canonical implementation of those metrics
-through the [metrics server](https://github.com/kubernetes-incubator/metrics-server), a stripped down version of
-Heapster. The metrics server provides the core metrics required by
-Kubernetes so that scheduling, autoscaling and things like `kubectl
-top` work out of the box. This means that any Kubernetes 1.8 cluster
-now supports autoscaling using those metrics out of the box: for
-example [minikube](https://github.com/kubernetes/minikube) or [Google's Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) both
-offer a native metrics server without an external database or
-monitoring system.
+through the [metrics
+server](https://github.com/kubernetes-incubator/metrics-server), a
+stripped down version of Heapster. The metrics server provides the core
+metrics required by Kubernetes so that scheduling, autoscaling and
+things like `kubectl top` work out of the box. This means that any
+Kubernetes 1.8 cluster now supports autoscaling using those metrics out
+of the box: for example
+[minikube](https://github.com/kubernetes/minikube) or [Google's
+Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) both
+offer a native metrics server without an external database or monitoring
+system.
 
 In terms of configuration syntax, the change is minimal. Here is an
-example of how to [configure the autoscaler](https://v1-6.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in earlier Kubernetes
-releases, taken from the [OpenShift Container Platform
+example of how to [configure the
+autoscaler](https://v1-6.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
+in earlier Kubernetes releases, taken from the [OpenShift Container
+Platform
 documentation](https://docs.openshift.com/container-platform/3.9/dev_guide/pod_autoscaling.html#creating-a-hpa):
 
-    apiVersion: extensions/v1beta1
-    kind: HorizontalPodAutoscaler
-    metadata:
-      name: frontend 
-    spec:
-      scaleRef:
-        kind: DeploymentConfig 
-        name: frontend 
-        apiVersion: v1 
-        subresource: scale
-      minReplicas: 1 
-      maxReplicas: 10 
-      cpuUtilization:
-        targetPercentage: 80
+        apiVersion: extensions/v1beta1
+        kind: HorizontalPodAutoscaler
+        metadata:
+          name: frontend 
+        spec:
+          scaleRef:
+            kind: DeploymentConfig 
+            name: frontend 
+            apiVersion: v1 
+            subresource: scale
+          minReplicas: 1 
+          maxReplicas: 10 
+          cpuUtilization:
+            targetPercentage: 80
 
 The new API configuration is more flexible:
 
-    apiVersion: autoscaling/v2beta1
-    kind: HorizontalPodAutoscaler
-    metadata:
-      name: hpa-resource-metrics-cpu 
-    spec:
-      scaleTargetRef:
-        apiVersion: apps/v1beta1 
-        kind: ReplicationController 
-        name: hello-hpa-cpu 
-      minReplicas: 1 

(Diff truncated)
comments on system76 and dell
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 5353c642..da879b23 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -221,9 +221,25 @@ Heard lots of good things about the XPS13. First, it's small-ish
 support. It's also well supported in linux for firmware updates. But
 might be too pricey.
 
+Dell apparently ships some machines with Linux pre-installed, although
+I wasn't able to find such a machine on their website at the time of
+writing (summer 2018). They do support the [standard firmware update
+service](https://fwupd.org/) as well, which is quite nice.
+
 System76
 --------
 
+System76 is a strange shop: they say they are dedicated to Ubuntu,
+Debian and Linux support in general, yet what they do is basically
+resell cheap Chinese generic brands (see below). They say they do a
+vetting process on the hardware, and they do some software
+development: in fact they now have their own Linux distribution called
+[Pop! OS](https://system76.com/pop). They are also seemingly [refusing to support the standard firmware
+distribution tool](https://blogs.gnome.org/hughsie/2018/05/09/system76-and-the-lvfs/) that has been adopted, at least in part, by
+large vendors like Dell, Lenovo, or HP. They also have a rather bad
+policy on returns: dead pixels and screen defects are not accepted,
+for example.
+
 ### Galago Pro
 
 <https://system76.com/laptops/galago>
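
Since both the Dell and System76 notes above hinge on firmware update
support, this is the quick check I would run on any candidate machine,
assuming the fwupd tools are installed (nothing vendor-specific here):

    # refresh metadata from the LVFS, then list devices and pending updates
    fwupdmgr refresh
    fwupdmgr get-devices
    fwupdmgr get-updates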

another nice lens
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index e1c2d6a6..d3e962a2 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -189,6 +189,8 @@ Reference
    on kijiji
  * [35mm f/2 R WR ø43](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf2_r_wr/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f2.htm), [fstoppers](https://fstoppers.com/gear/fstoppers-reviews-fujifilm-35mm-f2-wr-158227), nice size,
    sealed, 350-400$ on kijiji 
+ * [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("an extraordinary lens"),
+   700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 460$ on kijiji
  * [60mm f/2.4 R Macro ø39mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf60mmf24_r_macro/), [Rockwell](https://kenrockwell.com/fuji/x-mount-lenses/60mm-f24.htm), [Photograph blog](http://www.photographyblog.com/reviews/fujifilm_xf_60mm_f2_4_r_review/),
    425-700$ on kijiji
  * [X100f ø49](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100f/) 1200-1600$ on kijiji

yet another review, resend to LWN
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index aeeadf11..b50425d3 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -8,7 +8,9 @@ scale workloads automatically in [Kubernetes](https://kubernetes.io/) clusters.
 has had an autoscaler since the earlier days but only recently did the
 community implement a more flexible and extensible mechanism to make
 decisions on when to add more resources to fulfill workload
-requirements.
+requirements. The new API integrates not only the [Prometheus](https://prometheus.io)
+project popular in Kubernetes deployments, but also any arbitrary
+monitoring vendor that implements the standardized APIs.
 
 The old and new autoscalers
 ---------------------------
@@ -19,50 +21,53 @@ Branczyk first covered the history of the autoscaler architecture and
 how it evolved through time. Kubernetes, since version 1.2, features a
 Horizontol Pod Autoscaler (HPA) which dynamically allocates resources
 depending on the detected workload. When the load becomes too high,
-the HPA increases the number of pod replicas and when the load goes
-down again, it removes superfluous copies. In the old HPA, a component
-called [Heapster](https://github.com/kubernetes/heapster) would pull usage metrics from [cAdvisor](https://github.com/google/cadvisor) and
-the HPA controller would then make decisions based on that
-data. Unfortunately, the HPA would only make decisions on CPU
-utilization, even if Heapster provides [other metrics](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md) like disk,
-memory, or network usage. According to Branczyk, while in theory every
-workload can be converted to a CPU-bound problem, this is an
-inconvenient limitation, especially to respect Service Level
-Agreements. For example, a service might have to process requests
-within 100 milliseconds, or have a queue length lower than a
-predefined threshold. Another limitation was that the [Heapster
-API](https://github.com/kubernetes/heapster/blob/master/docs/model.md) is only loosely defined and was never officially adopted as
-part of the larger Kubernetes API, which led to badly maintained
-implementations from vendors. Heapster also required the help of a
-storage backend like [InfluxDB](https://github.com/kubernetes/heapster/blob/63f40593ec88a5d56dce7b23b56b1a44c5431184/docs/influxdb.md) or [Google's Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md) to
-keep its samples, which further complicated deployments.
-
-In late 2016, the [instrumentation special interest group](https://github.com/kubernetes/community/tree/master/sig-instrumentation) (SIG
-instrumentation) decided that the pipeline needed a redesign that would
-allow scaling not only on CPU usage, but also arbitrary metrics from
-external monitoring systems like [Prometheus](https://prometheus.io). The result is that
-Kubernetes 1.8 ships a new API specification that defines three sets
-of metrics: core, custom and external metrics. The API expects more or
-less the Prometheus wire protocol: each metric is a single value
-provided over an HTTP path (`/metrics`). The API is only a
-specification which Kubernetes does not fully implement, so that
-development is not slowed down by a new complicated component. This
-also shifts responsibility of maintenance to the monitoring
-vendors.
-
-The new protocol also defines core metrics like CPU, memory, and disk
-usage. Kubernetes provide a canonical implementation of those metrics
+the HPA increases the number of [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) replicas and when the load
+goes down again, it removes superfluous copies. In the old HPA, a
+component called [Heapster](https://github.com/kubernetes/heapster) would pull usage metrics from the
+internal [cAdvisor](https://github.com/google/cadvisor) monitoring daemon and the HPA controller would
+then scale workloads up or down based on those metrics. Unfortunately,
+the controller would only make decisions on CPU utilization, even if
+Heapster provides [other metrics](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md) like disk, memory, or network
+usage. According to Branczyk, while in theory any workload can be
+converted to a CPU-bound problem, this is an inconvenient limitation,
+especially to respect higher-level Service Level Agreements (SLA). For
+example, an arbitrary SLA like "process 95% of requests within 100
+milliseconds" would be difficult to represent as CPU usage. Another
+limitation is that the [Heapster API](https://github.com/kubernetes/heapster/blob/master/docs/model.md) was only loosely defined and
+never officially adopted as part of the larger Kubernetes
+API. Heapster also required the help of a storage backend like
+[InfluxDB](https://github.com/kubernetes/heapster/blob/63f40593ec88a5d56dce7b23b56b1a44c5431184/docs/influxdb.md) or [Google's Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md) to store samples, which
+made deploying an HPA challenging.
+
+![Architecture diagram](https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/instrumentation/monitoring_architecture.png)
+
+In late 2016, the "autoscaling special interest group" ([SIG
+autoscaling][]) decided that the pipeline needed a redesign which
+would allow scaling not only on CPU usage, but also arbitrary metrics
+from external monitoring systems. The result is that Kubernetes 1.6
+shipped with a new API specification defining how the autoscaler
+integrates with external monitoring systems. To avoid repeating the
+mistakes made with Heapster, the API is only a specification without an
+implementation, so development is not slowed down by a new
+component. This also shifts the responsibility for maintenance to the
+monitoring vendors: instead of "dumping" their glue code in Heapster,
+vendors now have to maintain their own adapter conforming to a
+well-defined API to get
+[certified](https://github.com/cncf/k8s-conformance).
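+
+In practice, such an adapter registers itself with the Kubernetes API
+aggregation layer. The following is only a sketch of what that
+registration might look like; the service name and namespace are made
+up, but the `APIService` fields are those of the aggregation API:
+
+    # register the adapter's Service as the backend for custom.metrics.k8s.io
+    apiVersion: apiregistration.k8s.io/v1beta1
+    kind: APIService
+    metadata:
+      name: v1beta1.custom.metrics.k8s.io
+    spec:
+      service:
+        name: custom-metrics-apiserver   # hypothetical adapter service
+        namespace: monitoring            # hypothetical namespace
+      group: custom.metrics.k8s.io
+      version: v1beta1
+      insecureSkipTLSVerify: true
+      groupPriorityMinimum: 100
+      versionPriority: 100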
+
+The new specification defines core metrics like CPU, memory, and disk
+usage. Kubernetes provides a canonical implementation of those metrics
 through the [metrics server](https://github.com/kubernetes-incubator/metrics-server), a stripped down version of
-Heapster. This service provides the core metrics required by
+Heapster. The metrics server provides the core metrics required by
 Kubernetes so that scheduling, autoscaling and things like `kubectl
-top` work out of the box without deploying a complete monitoring
-system. This means that any Kubernetes 1.8 cluster now supports
-autoscaling out of the box: for example [minikube](https://github.com/kubernetes/minikube) or [Google's
-Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) implement the metrics server out of the
-box.
-
-Here is an example of how to [configure the autoscaler](https://v1-6.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in
-Kubernetes 1.6, taken from the [OpenShift Container Platform
+top` work out of the box. This means that any Kubernetes 1.8 cluster
+now supports autoscaling using those metrics by default: for
+example [minikube](https://github.com/kubernetes/minikube) or [Google's Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) both
+offer a native metrics server without an external database or
+monitoring system.
+
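+As a quick sanity check that the metrics pipeline is in place, both the
+metrics server and the autoscaler can be exercised from the command
+line; this is a hypothetical example (the deployment name and namespace
+are made up):
+
+    # resource metrics served by the metrics server
+    kubectl top nodes
+    kubectl top pods -n my-app
+
+    # create a CPU-based HPA for an existing deployment
+    kubectl autoscale deployment frontend --min=1 --max=10 --cpu-percent=80
+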
+In terms of configuration syntax, the change is minimal. Here is an
+example of how to [configure the autoscaler](https://v1-6.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in earlier Kubernetes
+releases, taken from the [OpenShift Container Platform
 documentation](https://docs.openshift.com/container-platform/3.9/dev_guide/pod_autoscaling.html#creating-a-hpa):
 
     apiVersion: extensions/v1beta1
@@ -80,7 +85,7 @@ documentation](https://docs.openshift.com/container-platform/3.9/dev_guide/pod_a
       cpuUtilization:
         targetPercentage: 80
 
-In 1.10 the above configuration becomes the more flexible:
+The new API configuration is more flexible:
 
     apiVersion: autoscaling/v2beta1
     kind: HorizontalPodAutoscaler
@@ -100,8 +105,8 @@ In 1.10 the above configuration becomes the more flexible:
           targetAverageUtilization: 50
 
 Notice how the `cpuUtilization` field is replaced by a more flexible
-`metrics` field which considers CPU utilization the same way, but can
-already support memory usage or other core metrics.
+`metrics` field that targets CPU utilization, but can support other
+core metrics like memory usage.
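+
+For example, targeting memory instead is only a matter of swapping the
+resource name; this is a sketch using the same `autoscaling/v2beta1`
+fields as above (the 60% target is arbitrary):
+
+      metrics:
+      - type: Resource
+        resource:
+          name: memory
+          targetAverageUtilization: 60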
 
 The ultimate goal of the new API, however, is to support arbitrary
 metrics, through the [custom metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics). This behaves like the
@@ -111,12 +116,13 @@ in. Branczyk demonstrated the [k8s-prometheus-adapter](https://github.com/direct
 connects any Prometheus metric to the Kubernetes HPA, allowing the
 autoscaler to add new pods to reduce request latency, for
 example. Those metrics are bound to Kubernetes objects (e.g. pod,
-node, etc) but an "external metrics API" also allows for arbitrary
-metrics to influence autoscaling. This could allow Kubernetes to scale
-up a workload to deal with a larger load on an external message broker
-service, for example. Here is an example of how the custom metrics API
-could be used to pull metrics from Prometheus to make sure each pod
-handles a limit of 200 requests per second per pod:
+node, etc) but an "external metrics API" was also introduced in the
+last two months to allow arbitrary metrics to influence
+autoscaling. This could allow Kubernetes to scale up a workload to
+deal with a larger load on an external message broker service, for
+example. Here is an example of the custom metrics API pulling metrics
+from Prometheus to make sure that each pod handles at most 200
+requests per second:
 
       metrics:
       - type: Pods
@@ -124,24 +130,26 @@ handles a limit of 200 requests per second per pod:
           metricName: http_requests
           targetAverageValue: 200
 
-Here `http_requests` is a metrics exposed by the Prometheus which
-knows how many requests each pod is processing, in effect performing
-the following PromQL request to get the rate by pod:
+Here `http_requests` is a metric exposed by the Prometheus server
+that tracks how many requests each pod is processing. For those
+familiar with the Prometheus query language, this translates into the
+following query against the Prometheus backend:
 
     sum(rate(http_requests_total{namespace="hpa-namespace"}[5m])) by(pod)
 
-![Architecture diagram](https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/instrumentation/monitoring_architecture.png)
-
-While the HPA itself has been in "general availability" for a while,
-the new custom metrics API is currently in beta.
+This extracts the number of requests per second (`rate`) in the
+namespace relevant to the HPA (`namespace`) over a five-minute period,
+grouped by pod. To avoid putting too much load on each pod, the HPA
+will then ensure that this number stays close to the target value by
+spawning or killing pods as appropriate.
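+
+Coming back to the message-broker scenario mentioned earlier, an
+external metric is targeted in much the same way; this is only a sketch
+(the metric name and target value are made up) using the external
+metrics fields of `autoscaling/v2beta1`:
+
+      metrics:
+      - type: External
+        external:
+          metricName: queue_messages_ready
+          targetAverageValue: 30
+
+As with custom metrics, an adapter still has to expose the metric
+through the external metrics API before the HPA can act on it.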
 
 Upcoming features
 -----------------
 
 The SIG seems to have rounded everything up quite neatly. The next step
-is to deprecate Heapster: as of 1.10, the critical parts of Kubernetes
-use the new API so a [discussion](https://github.com/kubernetes/heapster/pull/2022) is under way to move away from
-the older design.
+is to deprecate Heapster: as of 1.10, all critical parts of Kubernetes
+use the new API, so a [discussion](https://github.com/kubernetes/heapster/pull/2022) is under way in another group
+([SIG instrumentation][]) to finish moving away from the older design.
 
 Another thing the community is looking into is vertical
 scaling. Horizontal scaling is fine for certain workloads like caching
@@ -150,105 +158,107 @@ are harder to scale by just adding more replicas: in this case what an
 autoscaler should do is increase the *size* of replicas instead of
 their numbers. Kubernetes supports this through the [vertical pod
 autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) (VPA). It is less practical than the HPA because there
-is a physical limit to the size of individual servers which the
-autoscaler cannot go beyond (even though that limit is getting bigger
-and bigger every year), while the HPA can scale up as long as you add
-new servers. According to Branczyk, the VPA is also more "complicated
-and fragile, so a lot more thought needs to go into that." VPA
-functionality is currently in alpha and only support Prometheus as a
-backend. The VPA is also not fully compatible with the HPA and is
-relevant only in cases where the HPA cannot do the job: workloads

(Diff truncated)
it seems to be instrumentation, not autoscaling
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 2b279e6f..aeeadf11 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -37,8 +37,8 @@ implementations from vendors. Heapster also required the help of a
 storage backend like [InfluxDB](https://github.com/kubernetes/heapster/blob/63f40593ec88a5d56dce7b23b56b1a44c5431184/docs/influxdb.md) or [Google's Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md) to
 keep its samples, which further complicated deployments.
 
-In late 2016, the [autoscaling special interest group](https://github.com/kubernetes/community/tree/master/sig-autoscaling) (SIG
-autoscaling) decided that the pipeline needed a redesign that would
+In late 2016, the [instrumentation special interest group](https://github.com/kubernetes/community/tree/master/sig-instrumentation) (SIG
+instrumentation) decided that the pipeline needed a redesign that would
 allow scaling not only on CPU usage, but also arbitrary metrics from
 external monitoring systems like [Prometheus](https://prometheus.io). The result is that
 Kubernetes 1.8 ships a new API specification that defines three sets
@@ -138,10 +138,10 @@ the new custom metrics API is currently in beta.
 Upcoming features
 -----------------
 
-The SIG autoscaling team seem to have rounded up everything quite
-neatly. The next step is to deprecate Heapster: as of 1.10, the
-critical parts of Kubernetes use the new API so a [discussion](https://github.com/kubernetes/heapster/pull/2022) is
-under way to move away from the older design.
+The SIG seem to have rounded up everything quite neatly. The next step
+is to deprecate Heapster: as of 1.10, the critical parts of Kubernetes
+use the new API so a [discussion](https://github.com/kubernetes/heapster/pull/2022) is under way to move away from
+the older design.
 
 Another thing community is looking into is vertical
 scaling. Horizontal scaling is fine for certain workload like caching
@@ -220,11 +220,11 @@ business logic inside such a complex system more easily. Any simpler
 design will surely be welcome in the maelstrom of APIs and subsystems
 that Kubernetes has become.
 
-A [video of the talk](https://www.youtube.com/watch?v=VQFoc0ukCvI ) is available on Youtube. SIG Autoscaling
-members Marcin Wielgus and Solly Ross presented an [introduction](https://kccnceu18.sched.com/event/DrnS/sig-autoscaling-intro-marcin-wielgus-google-solly-ross-red-hat-any-skill-level)
-([video](https://www.youtube.com/watch?v=oJyjW8Vz314)) and [deep dive](https://kccnceu18.sched.com/event/DroC/sig-autoscaling-deep-dive-marcin-wielgus-google-solly-ross-red-hat-intermediate-skill-leve) ([video](https://www.youtube.com/watch?v=s2RKAYm9oJg)) talks that might be
-interesting to our readers who want all the gory details about
-Kubernetes autoscaling.
+A [video of the talk](https://www.youtube.com/watch?v=VQFoc0ukCvI ) is available on Youtube. As members of the
+related [SIG autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling), Marcin Wielgus and Solly Ross presented
+an [introduction](https://kccnceu18.sched.com/event/DrnS/sig-autoscaling-intro-marcin-wielgus-google-solly-ross-red-hat-any-skill-level) ([video](https://www.youtube.com/watch?v=oJyjW8Vz314)) and [deep dive](https://kccnceu18.sched.com/event/DroC/sig-autoscaling-deep-dive-marcin-wielgus-google-solly-ross-red-hat-intermediate-skill-leve) ([video](https://www.youtube.com/watch?v=s2RKAYm9oJg))
+talks that might be interesting to our readers who want all the gory
+details about Kubernetes autoscaling.
 
 Notes
 =====
@@ -247,3 +247,8 @@ questions needing confirmation with Branczyk (under way):
    OpenMetrics?
  * contradiction in talk between "kubernetes does not implement the
    new metrics API" and "it implements the core metrics"
+
+another source which confirms SIG instrumentation:
+
+https://brancz.com/2018/01/05/prometheus-vs-heapster-vs-kubernetes-metrics-apis/
+

rework first draft, sent to LWN
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 033c3095..2b279e6f 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -1,141 +1,249 @@
 Autoscaling  Kubernetes Workloads
 =================================
 
-At KubeCon 2018, Frederic Branczyk from CoreOS held a packed
-[session](https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level) aiming to introduce a standard and officially reocmmended
-way to autoscale workloads in Kubernetes clusters. Kubernetes has had
-an autoscaler since the early days but only recently did the community
-implement a more flexible and extensible mechanism.
+During [KubeCon + CloudNativeCon Europe 2018](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/), [Frederic
+Branczyk](https://github.com/brancz) from CoreOS (now part of RedHat) held a packed
+[session](https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level) to introduce a standard and officially recommended way to
+scale workloads automatically in [Kubernetes](https://kubernetes.io/) clusters. Kubernetes
+has had an autoscaler since the earlier days but only recently did the
+community implement a more flexible and extensible mechanism to make
+decisions on when to add more resources to fulfill workload
+requirements.
 
 The old and new autoscalers
 ---------------------------
 
-Branczyk started by explaining the history of the architecture and how
-it evolved through time. Kubernetes, since version 1.2, features a
+![Frederic Branczyk](https://paste.anarc.at/DSCF3333.JPG)
+
+Branczyk first covered the history of the autoscaler architecture and
+how it evolved through time. Kubernetes, since version 1.2, features a
 Horizontal Pod Autoscaler (HPA) which dynamically allocates resources
-("pods", which are really containers) depending on the detected
-workload. When the load becomes too high, the HPA increases the number
-of replicas and when the load goes down again, it removes {the
-replicas in excess}. In the old HPA, a component called {Heapster}
-pulls usage metrics from {cAdvisor} and the HPA then makes decisions
-based on that data. Unfortunately, only CPU utilization can be used in
-the older HPA: even if cAdvisor provides other metrics like memory
-usage, the autoscaler only relies on CPU usage. While in theory every
-workload can be converted in a CPU-bound problem, this was seen as an
-inconvenient limitation, especially when the objective is to realize
-"Service Level Objectives (SLO) to respect Service Level Agreements
-(SLA) by looking at Service Level Indicators (SLI)" (yes, this is the
-kind of business acronyms that are frequent at Kubecon). For example,
-a service might have to process requests within 100 miliseconds, or
-have a queue length lower than a predefined threshold. The Heapster
-APIs were also only loosely defined which led to implementations that
-vendors wouldn't maintain properly.
-
-So about a year and a half ago, it was decided that the pipeline
-needed a redesign that would allow scaling not only on CPU usage, but
-also arbitrary metrics including third-party tools like
-Prometheus. This was shipped with Kubernetes 1.8 in an API
-specification that defines resources and custom metrics. The API relies {confirm?} on the
-OpenMetrics protocol, itself based on the Prometheus wire protocol:
-each metric is a single value, the latest representation of the
-information requested. 
-
-{No complete implementation is implemented in Kubernetes itself so
-that its development is not slowed down and to shift responsability of
-adapter maintenance to the vendors..}
-
-But the new protocol also defines core metrics like CPU, memory, and
-filesystem usage. A canonical implementation is shipped with
-Kubernetes {why?} so that a monitoring vendor like Prometheus doesn't
-become an dependency to get what has become a core
-functionality. While metrics used to be stored in an on disk database,
-they are now only kept in memory by deployments like [minicube][] or
-[GKE][] {acronym?}.
-
-Where the new API really shines, however, is through the custom
-metrics API. It behaves the same as the core metrics, except that no
-built-in metrics are shipped at all by Kubernetes. This is where
-vendors like Prometheus come in. Branczyk demonstrated the
-[k8s-prometheus-adapter](https://github.com/directXMan12/k8s-prometheus-adapter) which connects any arbitrary Prometheus
-metrics to the Kubernetes HPA. Custom metrics are bound to Kubernetes
-objects (e.g. pod, node, etc) but an "external metrics API" also
-allows for arbitrary object to influence autoscaling. This could allow
-Kubernetes to scale up a workload to deal with a larger load on an
-external message broker service, for example. 
-
-{example k8s rule / config?}
+depending on the detected workload. When the load becomes too high,
+the HPA increases the number of pod replicas and when the load goes
+down again, it removes superfluous copies. In the old HPA, a component
+called [Heapster](https://github.com/kubernetes/heapster) would pull usage metrics from [cAdvisor](https://github.com/google/cadvisor) and
+the HPA controller would then make decisions based on that
+data. Unfortunately, the HPA would only make decisions on CPU
+utilization, even if Heapster provides [other metrics](https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md) like disk,
+memory, or network usage. According to Branczyk, while in theory every
+workload can be converted to a CPU-bound problem, this is an
+inconvenient limitation, especially to respect Service Level
+Agreements. For example, a service might have to process requests
+within 100 milliseconds, or have a queue length lower than a
+predefined threshold. Another limitation was that the [Heapster
+API](https://github.com/kubernetes/heapster/blob/master/docs/model.md) is only loosely defined and was never officially adopted as
+part of the larger Kubernetes API, which led to badly maintained
+implementations from vendors. Heapster also required the help of a
+storage backend like [InfluxDB](https://github.com/kubernetes/heapster/blob/63f40593ec88a5d56dce7b23b56b1a44c5431184/docs/influxdb.md) or [Google's Stackdriver](https://github.com/kubernetes/heapster/blob/d25a176baee4554dc59052785c4ee940fc94d305/docs/google.md) to
+keep its samples, which further complicated deployments.
+
+In late 2016, the [autoscaling special interest group](https://github.com/kubernetes/community/tree/master/sig-autoscaling) (SIG
+autoscaling) decided that the pipeline needed a redesign that would
+allow scaling not only on CPU usage, but also arbitrary metrics from
+external monitoring systems like [Prometheus](https://prometheus.io). The result is that
+Kubernetes 1.8 ships a new API specification that defines three sets
+of metrics: core, custom and external metrics. The API expects more or
+less the Prometheus wire protocol: each metric is a single value
+provided over an HTTP path (`/metrics`). The API is only a
+specification which Kubernetes does not fully implement, so that
+development is not slowed down by a new complicated component. This
+also shifts responsibility of maintenance to the monitoring
+vendors.
+
+The new protocol also defines core metrics like CPU, memory, and disk
+usage. Kubernetes provide a canonical implementation of those metrics
+through the [metrics server](https://github.com/kubernetes-incubator/metrics-server), a stripped down version of
+Heapster. This service provides the core metrics required by
+Kubernetes so that scheduling, autoscaling and things like `kubectl
+top` work out of the box without deploying a complete monitoring
+system. This means that any Kubernetes 1.8 cluster now supports
+autoscaling out of the box: for example [minikube](https://github.com/kubernetes/minikube) or [Google's
+Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) implement the metrics server out of the
+box.
+
+Here is an example of how to [configure the autoscaler](https://v1-6.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in
+Kubernetes 1.6, taken from the [OpenShift Container Platform
+documentation](https://docs.openshift.com/container-platform/3.9/dev_guide/pod_autoscaling.html#creating-a-hpa):
+
+    apiVersion: extensions/v1beta1
+    kind: HorizontalPodAutoscaler
+    metadata:
+      name: frontend 
+    spec:
+      scaleRef:
+        kind: DeploymentConfig 
+        name: frontend 
+        apiVersion: v1 
+        subresource: scale
+      minReplicas: 1 
+      maxReplicas: 10 
+      cpuUtilization:
+        targetPercentage: 80
+
+In 1.10 the above configuration becomes the more flexible:
+
+    apiVersion: autoscaling/v2beta1
+    kind: HorizontalPodAutoscaler
+    metadata:
+      name: hpa-resource-metrics-cpu 
+    spec:
+      scaleTargetRef:
+        apiVersion: apps/v1beta1 
+        kind: ReplicationController 
+        name: hello-hpa-cpu 
+      minReplicas: 1 
+      maxReplicas: 10 
+      metrics:
+      - type: Resource
+        resource:
+          name: cpu
+          targetAverageUtilization: 50
+
+Notice how the `cpuUtilization` field is replaced by a more flexible
+`metrics` field which considers CPU utilization the same way, but can
+already support memory usage or other core metrics.
+
+The ultimate goal of the new API, however, is to support arbitrary
+metrics, through the [custom metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics). This behaves like the
+core metrics, except that Kubernetes does not ship or define a set
+custom metrics directly which is where vendors like Prometheus come
+in. Branczyk demonstrated the [k8s-prometheus-adapter](https://github.com/directXMan12/k8s-prometheus-adapter) which
+connects any Prometheus metric to the Kubernetes HPA, allowing the
+autoscaler to add new pods to reduce request latency, for
+example. Those metrics are bound to Kubernetes objects (e.g. pod,
+node, etc) but an "external metrics API" also allows for arbitrary
+metrics to influence autoscaling. This could allow Kubernetes to scale
+up a workload to deal with a larger load on an external message broker
+service, for example. Here is an example of how the custom metrics API
+could be used to pull metrics from Prometheus to make sure each pod
+handles a limit of 200 requests per second per pod:
+
+      metrics:
+      - type: Pods
+        pods:
+          metricName: http_requests
+          targetAverageValue: 200
+
+Here `http_requests` is a metrics exposed by the Prometheus which
+knows how many requests each pod is processing, in effect performing
+the following PromQL request to get the rate by pod:
+
+    sum(rate(http_requests_total{namespace="hpa-namespace"}[5m])) by(pod)
+
+![Architecture diagram](https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/instrumentation/monitoring_architecture.png)
+
+While the HPA itself has been in "general availability" for a while,
+the new custom metrics API is currently in beta.
 

(Diff truncated)
pick a few pics
diff --git a/blog/kubecon-rant.mdwn b/blog/kubecon-rant.mdwn
index 1aac52f7..ea051855 100644
--- a/blog/kubecon-rant.mdwn
+++ b/blog/kubecon-rant.mdwn
@@ -9,6 +9,10 @@ that goal. Yet it is still one of the less diverse places i've ever
 been in: in comparison, pycon just "feels" more diverse. Those are not
 hard facts, true{...}
 
+4000 {?} white men: DSCF3307.JPG
+
+or around file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3481.JPG
+
 The truth is that contrary to programmer communities, devops and
 sysadmin communities' knowledge comes not from institutional education,
 but from self-learning. Even though i have spent years studying in
@@ -21,6 +25,8 @@ there too, but they were acquired *before* going to university, when
 even there I was expected to learn languages such as C in-between
 sessions.
 
+diversity program: DSCF3502.JPG
+
 The truth is that the real solution to diversity in computing
 communities not only comes from a change in culture in the
 communities, but real investments in society at large. The large
@@ -202,3 +208,5 @@ chant: "I say 'Kube!', you say 'Con!' 'Kube!' 'Con!' 'Kube!' 'Con!'
 
 
 + lagging behind. 
+
+freedom to create: file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3495.JPG

pick a few pics
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 33c58e31..033c3095 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -137,3 +137,5 @@ giving a damn. doesn't really belong with the autoscaler - this is
 routing?}
 
 {video?}
+
+Branczyk pic: 103_FUJI/DSCF3333.JPG

pick a few pics
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index 5d222bff..83ab3380 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -10,6 +10,10 @@ https://kccnceu18.sched.com/event/Dqvz/horizontal-pod-autoscaler-reloaded-scale-
 
 Justin Cormack, Nassim Eddequiouaq, Docker
 
+nassim: file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3380.JPG
+
+justin: file:///home/anarcat/card-bak/DCIM/103_FUJI/DSCF3389.JPG
+
 making security controls understandable
 
 complicated subject

another ref
diff --git a/blog/kubecon-rant.mdwn b/blog/kubecon-rant.mdwn
index 3bd99605..1aac52f7 100644
--- a/blog/kubecon-rant.mdwn
+++ b/blog/kubecon-rant.mdwn
@@ -27,7 +27,11 @@ communities, but real investments in society at large. The large
 mega-corporations subsidizing those events get a lot of good press out
 of those diversity sponsorship, but that is nothing compared to how
 much they are spared in tax savings by the states. Amzn's Jeff Bezos
-earned billions of dollars only in market cap this year. Google,
+earned billions of dollars only in market cap this year. 
+
+https://www.jwz.org/blog/2018/05/amazon-puts-7000-jobs-on-hold-because-of-a-tax-that-would-help-seattles-homeless-population/
+
+Google,
 Facebook, Microsoft, and Apple all evade taxes like the best
 gangsters. And this is important because society will change through
 education. This is how more traditional STEM sectors like engineering

links to videos
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index 11ee484a..33c58e31 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -135,3 +135,5 @@ https://kccnceu18.sched.com/event/EgBG/applying-least-privileges-through-kuberne
 {figure out if istio already does all that kubervisor shit before
 giving a damn. doesn't really belong with the autoscaler - this is
 routing?}
+
+{video?}

links to videos
diff --git a/blog/entitlements.mdwn b/blog/entitlements.mdwn
index dce20a3d..5d222bff 100644
--- a/blog/entitlements.mdwn
+++ b/blog/entitlements.mdwn
@@ -3,6 +3,8 @@ Entitlements: Understandable Container Security Controls
 
 https://kccnceu18.sched.com/event/DqvJ/entitlements-understandable-container-security-controls-justin-cormack-nassim-eddequiouaq-docker-intermediate-skill-level
 
+https://www.youtube.com/watch?v=Jbqxsli2tRw
+
 overlaps:
 https://kccnceu18.sched.com/event/Dqvz/horizontal-pod-autoscaler-reloaded-scale-on-custom-metrics-maciej-pytel-google-solly-ross-red-hat-intermediate-skill-level
 

spit out kubervisor, first autoscaler draft
diff --git a/blog/autoscaling.mdwn b/blog/autoscaling.mdwn
index ed9d6f9b..11ee484a 100644
--- a/blog/autoscaling.mdwn
+++ b/blog/autoscaling.mdwn
@@ -1,223 +1,137 @@
-
-
-Autoscale your Kubernetes Workload with Prometheus
-==================================================
-
-https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level
-
-Frederic Branczyk
-
-overlaps with:
+Autoscaling  Kubernetes Workloads
+=================================
+
+At KubeCon 2018, Frederic Branczyk from CoreOS held a packed
+[session](https://kccnceu18.sched.com/event/Dqun/autoscale-your-kubernetes-workload-with-prometheus-frederic-branczyk-coreos-intermediate-skill-level) aiming to introduce a standard and officially reocmmended
+way to autoscale workloads in Kubernetes clusters. Kubernetes has had
+an autoscaler since the early days but only recently did the community
+implement a more flexible and extensible mechanism.
+
+The old and new autoscalers
+---------------------------
+
+Branczyk started by explaining the history of the architecture and how
+it evolved through time. Kubernetes, since version 1.2, features a
+Horizontol Pod Autoscaler (HPA) which dynamically allocates resources
+("pods", which are really containers) depending on the detected
+workload. When the load becomes too high, the HPA increases the number
+of replicas and when the load goes down again, it removes {the
+replicas in excess}. In the old HPA, a component called {Heapster}
+pulls usage metrics from {cAdvisor} and the HPA then makes decisions
+based on that data. Unfortunately, only CPU utilization can be used in
+the older HPA: even if cAdvisor provides other metrics like memory
+usage, the autoscaler only relies on CPU usage. While in theory every
+workload can be converted in a CPU-bound problem, this was seen as an
+inconvenient limitation, especially when the objective is to realize
+"Service Level Objectives (SLO) to respect Service Level Agreements
+(SLA) by looking at Service Level Indicators (SLI)" (yes, this is the
+kind of business acronyms that are frequent at Kubecon). For example,
+a service might have to process requests within 100 miliseconds, or
+have a queue length lower than a predefined threshold. The Heapster
+APIs were also only loosely defined which led to implementations that
+vendors wouldn't maintain properly.
+
+So about a year and a half ago, it was decided that the pipeline
+needed a redesign that would allow scaling not only on CPU usage, but
+also arbitrary metrics including third-party tools like
+Prometheus. This was shipped with Kubernetes 1.8 in an API
+specification that defines resources and custom metrics. The API relies {confirm?} on the
+OpenMetrics protocol, itself based on the Prometheus wire protocol:
+each metric is a single value, the latest representation of the
+information requested. 
+
+{No complete implementation is implemented in Kubernetes itself so
+that its development is not slowed down and to shift responsability of
+adapter maintenance to the vendors..}
+
+But the new protocol also defines core metrics like CPU, memory, and
+filesystem usage. A canonical implementation is shipped with
+Kubernetes {why?} so that a monitoring vendor like Prometheus doesn't
+become an dependency to get what has become a core
+functionality. While metrics used to be stored in an on disk database,
+they are now only kept in memory by deployments like [minicube][] or
+[GKE][] {acronym?}.
+
+Where the new API really shines, however, is through the custom
+metrics API. It behaves the same as the core metrics, except that no
+built-in metrics are shipped at all by Kubernetes. This is where
+vendors like Prometheus come in. Branczyk demonstrated the
+[k8s-prometheus-adapter](https://github.com/directXMan12/k8s-prometheus-adapter) which connects any arbitrary Prometheus
+metrics to the Kubernetes HPA. Custom metrics are bound to Kubernetes
+objects (e.g. pod, node, etc) but an "external metrics API" also
+allows for arbitrary object to influence autoscaling. This could allow
+Kubernetes to scale up a workload to deal with a larger load on an
+external message broker service, for example. 
+
+{example k8s rule / config?}
+
+Upcoming features
+-----------------
+
+The team {which?} seem to have rounded up everything quite neatly. The
+next step is to [deprecate heapster](https://github.com/kubernetes/kubernetes/issue/2022): as of 1.10, the critical
+parts of Kubernetes use the new API so this *should* just be a mater
+of flipping the switch.
+
+Another thing Branczyk said they were looking into is to take into
+account workloads that need to scale vertically. Horizontal scaling is
+fine for certain workload like caching servers or application
+frontends, but database servers, most notable, are harder to scale by
+just adding more replicas: in this case what an autoscaler should do
+is increase the *size* of replicas instead of their numbres. The
+vertical pod autoscaler (VPA) is less practical than the HPA because
+there is a physical limit to the size of individual servers which the
+autoscaler cannot go beyond. According to Branczyk, the VPA is more
+"complicated and fragile, so a lot more thought needs to go into
+that." VPA functionality is currently in alpha and only support
+Prometheus as a backend.
+
+Branczyk also told his audience that @sttts and @nikhita are working
+on autoscaling CRD (Custom Resource Definitions) as currently only
+resources defined by Kubernetes itself (e.g. a "pod" is a native
+Kubernetes resource) are autoscalable. Hopefully those will behave the
+same ways as normal resources for autoscaling in 1.11.
+
+Finally, Branczyk ventured on a set of predictions for other
+improvements that could come down the pipeline. One issue he
+identitied is that while the HPA can scale "pods", there is another
+autoscaler that manages {clusters?} The idea is to combine to two
+projects into a single one. Another hope is that OpenMetrics emerges
+as a standard for metrics across vendors: this seems to be well under
+way with Kubernetes already providing Prometheus-style metrics out of
+the box and commercial vendors like Datadog {verify?} supporting the
+API as well. Another area of possible standardization is the gRPC
+communication endpoints which can now expose metrics through
+middleware adapters {what?} like the [go-grpc-prometheus adapter](https://github.com/grpc-ecosystem/go-grpc-prometheus)
+which enables Prometheus to scrape metrics from any gRPC-enabled
+API. This allows deploying a uniform monitoring system across an
+entire cluster.
+
+Conclusion
+----------
+
+The talk was one of the most popular of the conference, attended by a
+full room which show a deep interest in this key feature of Kubernetes
+deployments. It was great to see Branczyk who is deeply involved with
+the Prometheus project as well, work on standards so that not only
+Prometheus can provide those metrics and instead provide a standard
+API which others can also use. The speed at which APIs change is also
+impressive: in only a few months, a fundamental component of
+Kubernetes was basically upended and replaced by yet another new API
+and configuration syntax that people will need to get familiar
+with. Given the flexibility and clarity of the new HPA, this will
+probably be a small cost to pay to get easier ways to represent
+business logic inside such a complex system. Any simpler design is
+welcome in this maelstrom of APIs and subsystems.
+
+Notes
+=====
+
+first talk overlapped with:
 
 https://kccnceu18.sched.com/event/Dqum/managing-kubernetes-what-you-need-to-know-about-day-2-craig-tracey-heptio-intermediate-skill-level
 https://kccnceu18.sched.com/event/EgBG/applying-least-privileges-through-kubernetes-admission-controllers-benjy-portnoy-aqua-security-intermediate-skill-level
 
-officially recommended way to autoscale workloads
-
-the history of the stack/architecture and how it evolved
-
-future
-
-autoscaling cover demand based on metrics. to do that, those metrics
-need to be collected. this is to cover SLA objectives, like API
-latency or completion time rations. "fulfill SLO of SLA through SLI"
-(service level objective, indicator)
-
-example: to process requests within 5min, the queue length need to be
-below 50
-
-horizontal pod autoscaler (HPA)
-
-to cover demand, increase replicas
-
-full room
-
-vertiacal autoscaler, increase size of replicas
-
-VPA impractical beyond a certain size because there is a limit to the
-size of individual servers.
-
-previously: autoscale heavily based on heapster, based on cAdvisor
-metrics collection which collect cpu/memory usage and possible custom
-metrics. cadvisor writes those in another TSDB
-
-to consume this we have a HPA that scales a stateful set or whatever
-
-sets min/max replicas, and can only set target cpuUtilization as a
-percentage. 
-
-k8s 1.2, already autoscales! in theory everything can be converted in
-a cpu-bound problem.
-
-but not very practical
-
-workloads could be more efficient when memory-bound, or it can be hard
-to represent SLAs as cpuusage.
-
-heapster APIs were only loosely defined and led to the creation of
-many unmaintained vendored implementations
-
-1.5yr ago, decided the pipeline needed a redesign. wanted to solve the
-problems created with heapster to get autoscaling and stable
-monitoring
-
-not be bound only on cpu, but arbitrary metrics
-

(Diff truncated)
fix image links
diff --git a/blog/2018-05-04-terminal-emulators-2.mdwn b/blog/2018-05-04-terminal-emulators-2.mdwn
index bbf1d448..e3bfea45 100644
--- a/blog/2018-05-04-terminal-emulators-2.mdwn
+++ b/blog/2018-05-04-terminal-emulators-2.mdwn
@@ -116,7 +116,7 @@ and the width of error bars in the graph): "*any irregularities in delay
 durations (so called jitter) pose additional problem because of their
 inherent unpredictability*".
 
-[![Terminal latency compared on Debian with i3 on Xorg](terms-performance/latency-debian-xorg-i3.png)](terms-performance/latency-debian-xorg-i3.png)
+[![Terminal latency compared on Debian with i3 on Xorg](latency-debian-xorg-i3.png)](latency-debian-xorg-i3.png)
 
 The graph above is from a clean Debian 9 (stretch) profile with the [i3
 window manager](https://i3wm.org/). That environment gives the best
@@ -129,7 +129,7 @@ GNOME also includes compositing window manager
 ([Mutter](https://en.wikipedia.org/wiki/Mutter_(software))), an extra
 buffering layer that adds at least eight milliseconds in Fatin's tests.
 
-[![Terminal latency compared on Fedora with GNOME on Xorg](terms-performance/latency-fedora-xorg-gnome.png)](terms-performance/latency-fedora-xorg-gnome.png)
+[![Terminal latency compared on Fedora with GNOME on Xorg](latency-fedora-xorg-gnome.png)](latency-fedora-xorg-gnome.png)
 
 In the graph above, we can see the same tests performed on Fedora 27
 with GNOME running on X.org. The change is drastic; latency at least
@@ -164,7 +164,7 @@ xterm. There is therefore still some value in this test as the rendering
 process varies a lot between terminals; it also serves as a good test
 harness for testing other parameters.
 
-[![Scrolling speed](terms-performance/resource-time.png)](terms-performance/resource-time.png)
+[![Scrolling speed](resource-time.png)](resource-time.png)
 
 Here we can see rxvt and st are ahead of all others, closely followed by
 the much newer Alacritty, expressly designed for speed. Xfce
@@ -211,7 +211,7 @@ the supervision of a Python process that collected the results of
 counters for `ru_maxrss`, the sum of `ru_oublock` and `ru_inblock`, and
 a simple timer for wall clock time.
 
-[![Memory use](terms-performance/resource-memory.png)](terms-performance/resource-memory.png)
+[![Memory use](resource-memory.png)](resource-memory.png)
 
 St comes first in this benchmark with the smallest memory footprint, 8MB
 on average, which was no surprise considering the project's focus on
@@ -244,7 +244,7 @@ urxvt, Terminator, and Xfce Terminal feature a daemon mode that manages
 multiple terminals through a single process which limits the impact of
 their larger memory footprint.
 
-[![Disk I/O](terms-performance/resource-disk.png)](terms-performance/resource-disk.png)
+[![Disk I/O](resource-disk.png)](resource-disk.png)
 
 Another result I have found surprising in my tests is actual disk I/O: I
 did not expect any, yet some terminals write voluminous amounts of data

add reference to discussion and benchmarking tools
diff --git a/blog/2018-05-04-terminal-emulators-2/comment_1_1f8d32578cca16961be64a30756bd277._comment b/blog/2018-05-04-terminal-emulators-2/comment_1_1f8d32578cca16961be64a30756bd277._comment
new file mode 100644
index 00000000..66a3d3a6
--- /dev/null
+++ b/blog/2018-05-04-terminal-emulators-2/comment_1_1f8d32578cca16961be64a30756bd277._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject=""" detailed results and discussions """
+ date="2018-05-04T09:52:48Z"
+ content="""
+
+As with the previous article in this series, a lengthy discussion
+about this article has taken place on the LWN.net article, which might
+be interesting to readers here.
+
+As usual, I have published extensive documentation on the process by
+which those benchmarks were created here:
+
+https://gitlab.com/anarcat/terms-benchmarks
+
+... with a (potentially out of date) mirror on GitHub here:
+
+https://github.com/anarcat/terms-benchmarks
+"""]]

typos
diff --git a/blog/2018-04-12-terminal-emulators-1.mdwn b/blog/2018-04-12-terminal-emulators-1.mdwn
index 4234610f..876988a5 100644
--- a/blog/2018-04-12-terminal-emulators-1.mdwn
+++ b/blog/2018-04-12-terminal-emulators-1.mdwn
@@ -5,8 +5,8 @@
 > This article is the first in a two-part series about terminal
 > emulators.
 >
-> [[part one: features|2018-04-12-terminal-emulators-1]]
-> [[part two: performance|2018-05-04-terminal-emulators-2]]
+> * [[part one: features|2018-04-12-terminal-emulators-1]]
+> * [[part two: performance|2018-05-04-terminal-emulators-2]]
 
 [[!toc levels=2]]
 
diff --git a/blog/2018-05-04-terminal-emulators-2.mdwn b/blog/2018-05-04-terminal-emulators-2.mdwn
index 7cf97ed7..bbf1d448 100644
--- a/blog/2018-05-04-terminal-emulators-2.mdwn
+++ b/blog/2018-05-04-terminal-emulators-2.mdwn
@@ -8,8 +8,8 @@
 > This article is the second in a two-part series about terminal
 > emulators.
 >
-> [[part one: features|2018-04-12-terminal-emulators-1]]
-> [[part two: performance|2018-05-04-terminal-emulators-2]]
+> * [[part one: features|2018-04-12-terminal-emulators-1]]
+> * [[part two: performance|2018-05-04-terminal-emulators-2]]
 
 A comparison of the feature sets for a handful of terminal emulators was
 the subject of a [recent article](https://lwn.net/Articles/749992/);

small note
diff --git a/blog/kubecon-rant.mdwn b/blog/kubecon-rant.mdwn
index a599e6e7..3bd99605 100644
--- a/blog/kubecon-rant.mdwn
+++ b/blog/kubecon-rant.mdwn
@@ -194,3 +194,7 @@ oligarchs. A recurring pattern at Kubernetes conferences is the
 ) where Kelsey Hightower reluctantly engages the crowd in a pep
 chant: "I say 'Kube!', you say 'Con!' 'Kube!' 'Con!' 'Kube!' 'Con!'
 'Kube!' 'Con!' ". Cube [Con](https://en.wikipedia.org/wiki/Confidence_trick) indeed...
+
+
+
++ lagging behind. 

creating tag page tag/performance
diff --git a/tag/performance.mdwn b/tag/performance.mdwn
new file mode 100644
index 00000000..571365c2
--- /dev/null
+++ b/tag/performance.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged performance"]]
+
+[[!inline pages="tagged(performance)" actions="no" archive="yes"
+feedshow=10]]

Merge branch 'backlog/terms'
publish part two terminal series
diff --git a/blog/terms-performance.mdwn b/blog/2018-05-04-terminal-emulators-2.mdwn
similarity index 98%
rename from blog/terms-performance.mdwn
rename to blog/2018-05-04-terminal-emulators-2.mdwn
index 94a19d45..7cf97ed7 100644
--- a/blog/terms-performance.mdwn
+++ b/blog/2018-05-04-terminal-emulators-2.mdwn
@@ -5,6 +5,12 @@
 
 [[!toc levels=2]]
 
+> This article is the second in a two-part series about terminal
+> emulators.
+>
+> [[part one: features|2018-04-12-terminal-emulators-1]]
+> [[part two: performance|2018-05-04-terminal-emulators-2]]
+
 A comparison of the feature sets for a handful of terminal emulators was
 the subject of a [recent article](https://lwn.net/Articles/749992/);
 here I follow that up by examining the performance of those terminals.
@@ -288,4 +294,4 @@ future.
 [first appeared]: https://lwn.net/Articles/751763/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn geek review terminals performance]]
diff --git a/blog/terms-performance/latency-debian-xorg-i3.png b/blog/2018-05-04-terminal-emulators-2/latency-debian-xorg-i3.png
similarity index 100%
rename from blog/terms-performance/latency-debian-xorg-i3.png
rename to blog/2018-05-04-terminal-emulators-2/latency-debian-xorg-i3.png
diff --git a/blog/terms-performance/latency-fedora-xorg-gnome.png b/blog/2018-05-04-terminal-emulators-2/latency-fedora-xorg-gnome.png
similarity index 100%
rename from blog/terms-performance/latency-fedora-xorg-gnome.png
rename to blog/2018-05-04-terminal-emulators-2/latency-fedora-xorg-gnome.png
diff --git a/blog/terms-performance/resource-disk.png b/blog/2018-05-04-terminal-emulators-2/resource-disk.png
similarity index 100%
rename from blog/terms-performance/resource-disk.png
rename to blog/2018-05-04-terminal-emulators-2/resource-disk.png
diff --git a/blog/terms-performance/resource-memory.png b/blog/2018-05-04-terminal-emulators-2/resource-memory.png
similarity index 100%
rename from blog/terms-performance/resource-memory.png
rename to blog/2018-05-04-terminal-emulators-2/resource-memory.png
diff --git a/blog/terms-performance/resource-time.png b/blog/2018-05-04-terminal-emulators-2/resource-time.png
similarity index 100%
rename from blog/terms-performance/resource-time.png
rename to blog/2018-05-04-terminal-emulators-2/resource-time.png

Archival link:

The above link creates a machine-readable RSS feed that can be used to easily archive new changes to the site. It is used by internal scripts to do sanity checks on new entries in the wiki.
