Recent changes to this wiki. Not to be confused with my history.

The complete source of this wiki is available on gitweb or by cloning this site.

add quote about computers, source: twitter
diff --git a/sigs.fortune b/sigs.fortune
index cbd863e2..da84edbf 100644
--- a/sigs.fortune
+++ b/sigs.fortune
@@ -1084,3 +1084,7 @@ call intelligence boils down to curiosity.  - Aaron Swartz
 The class which has the power to rob upon a large scale has also the
 power to control the government and legalize their robbery.
                         - Eugene V. Debs
+%
+The good news about computers is that they do what you tell them to
+do. The bad news is that they do what you tell them to do.
+                        - Ted Nelson

document my photo setup in more detail
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index ff3a78f8..255c1d76 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -3,6 +3,20 @@ comparaison de matériel et autre.
 
 [[!toc levels=2]]
 
+Logiciels
+=========
+
+Bien que cette page documente principalement le matériel, il me semble
+utile de toucher un peu au logiciel aussi.
+
+J'utilise principalement [Darktable](https://www.darktable.org) tant pour gérer ma collection
+(ce qui marche plus ou moins) que pour développer et traiter les
+photos (ce qui marche relativement bien). J'utilise [Rapid Photo
+Downloader](http://www.damonlynch.net/rapid/) pour sélectionner et transférer les photos des cartes
+mémoires et, beaucoup plus rarement, [GIMP](https://www.gimp.org/) pour faire certains
+traitements particuliers. Finalement, l'archivage et la sauvegarde à
+long terme des photos se fait avec [git-annex](https://git-annex.branchable.com/).
+
 History
 =======
 
@@ -102,6 +116,7 @@ Appareils numériques
 * [Nokia N900]
 * [Android G1]
 * [Canon Powershot A430]
+* [Fujifilm X-T2](https://en.wikipedia.org/wiki/Fujifilm_X-T2)
 
  [Android G1]: https://en.wikipedia.org/wiki/Android_G1
  [Nokia N900]: https://en.wikipedia.org/wiki/Nokia_N900
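
The hunk above mentions git-annex for long-term archival of the photos; as a
minimal sketch, under assumed paths and remote names (not my actual layout),
such a setup could look like this:

    # hypothetical git-annex photo archive: paths and names are examples
    cd ~/Photos
    git init
    git annex init "workstation"
    git annex add 2018/                      # checksums the files, commits symlinks
    git commit -m "import January 2018 photos"

    # an external drive holding a second clone of the repository
    git clone ~/Photos /media/backup/Photos
    (cd /media/backup/Photos && git annex init "backupdrive")
    git remote add backupdrive /media/backup/Photos
    git annex copy --to=backupdrive 2018/    # actually ship the file contents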

fix cross-ref
diff --git a/blog/2018-01-25-changes-prometheus-2.0.mdwn b/blog/2018-01-25-changes-prometheus-2.0.mdwn
index 6b8b58e3..9aab3e1d 100644
--- a/blog/2018-01-25-changes-prometheus-2.0.mdwn
+++ b/blog/2018-01-25-changes-prometheus-2.0.mdwn
@@ -26,8 +26,8 @@ community in a
 held during [KubeCon +
 CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america).
 This article covers what changed in this new release and what is brewing
-next in the Prometheus community; it is a companion to [this
-article](https://lwn.net/Articles/744410/), which provided a general
+next in the Prometheus community; it is a companion to [[this
+article|2018-01-17-monitoring-prometheus]], which provided a general
 introduction to monitoring with Prometheus.
 
 What changed

fix tocs
diff --git a/blog/2018-01-17-monitoring-prometheus.mdwn b/blog/2018-01-17-monitoring-prometheus.mdwn
index 0b3673f9..b417bbd4 100644
--- a/blog/2018-01-17-monitoring-prometheus.mdwn
+++ b/blog/2018-01-17-monitoring-prometheus.mdwn
@@ -11,7 +11,7 @@
 >  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]] (this article)
 >  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]]
 
-[[!toc]]
+[[!toc levels=2]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from
diff --git a/blog/2018-01-25-changes-prometheus-2.0.mdwn b/blog/2018-01-25-changes-prometheus-2.0.mdwn
index 2ce88788..6b8b58e3 100644
--- a/blog/2018-01-25-changes-prometheus-2.0.mdwn
+++ b/blog/2018-01-25-changes-prometheus-2.0.mdwn
@@ -11,7 +11,7 @@
 >  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]]
 >  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]] (this article)
 
-[[!toc]]
+[[!toc levels=2]]
 
 2017 was a big year for the [Prometheus](https://prometheus.io/)
 project, as it [published its 2.0 release in

creating tag page tag/prometheus
diff --git a/tag/prometheus.mdwn b/tag/prometheus.mdwn
new file mode 100644
index 00000000..305c4854
--- /dev/null
+++ b/tag/prometheus.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged prometheus"]]
+
+[[!inline pages="tagged(prometheus)" actions="no" archive="yes"
+feedshow=10]]

publish the prometheus articles as part of the kubecon series
diff --git a/blog/2017-12-13-kubecon-overview.mdwn b/blog/2017-12-13-kubecon-overview.mdwn
index 4c9623d4..db81970d 100644
--- a/blog/2017-12-13-kubecon-overview.mdwn
+++ b/blog/2017-12-13-kubecon-overview.mdwn
@@ -8,6 +8,8 @@
 >  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]] (this article)
 >  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
 >  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+>  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]]
+>  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]]
 
 The [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF)
 held its conference, [KubeCon +
diff --git a/blog/2017-12-20-demystifying-container-runtimes.mdwn b/blog/2017-12-20-demystifying-container-runtimes.mdwn
index 942a434d..c72c40de 100644
--- a/blog/2017-12-20-demystifying-container-runtimes.mdwn
+++ b/blog/2017-12-20-demystifying-container-runtimes.mdwn
@@ -8,6 +8,8 @@
 >  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
 >  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
 >  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]] (this article)
+>  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]]
+>  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]]
 
 As we briefly mentioned in our [[overview article|2017-12-13-kubecon-overview]] about KubeCon +
 CloudNativeCon, there are multiple container "runtimes", which are
diff --git a/blog/2017-12-20-docker-without-docker.mdwn b/blog/2017-12-20-docker-without-docker.mdwn
index 576bd8d9..e283497e 100644
--- a/blog/2017-12-20-docker-without-docker.mdwn
+++ b/blog/2017-12-20-docker-without-docker.mdwn
@@ -8,6 +8,8 @@
 >  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
 >  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]] (this article)
 >  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+>  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]]
+>  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]]
 
 The Docker (now [Moby](https://mobyproject.org/)) project has done a lot
 to popularize containers in recent years. Along the way, though, it has
diff --git a/blog/monitoring-prometheus.mdwn b/blog/2018-01-17-monitoring-prometheus.mdwn
similarity index 97%
rename from blog/monitoring-prometheus.mdwn
rename to blog/2018-01-17-monitoring-prometheus.mdwn
index 9b8f13c4..0b3673f9 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/2018-01-17-monitoring-prometheus.mdwn
@@ -2,6 +2,15 @@
 [[!meta date="2018-01-17T00:00:00+0000"]]
 [[!meta updated="2018-02-06T10:12:53-0500"]]
 
+> This is one part of my coverage of KubeCon Austin 2017. Other
+> articles include:
+>
+>  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
+>  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
+>  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+>  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]] (this article)
+>  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]]
+
 [[!toc]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
diff --git a/blog/changes-prometheus-2.0.mdwn b/blog/2018-01-25-changes-prometheus-2.0.mdwn
similarity index 95%
rename from blog/changes-prometheus-2.0.mdwn
rename to blog/2018-01-25-changes-prometheus-2.0.mdwn
index 65b09d72..2ce88788 100644
--- a/blog/changes-prometheus-2.0.mdwn
+++ b/blog/2018-01-25-changes-prometheus-2.0.mdwn
@@ -2,6 +2,15 @@
 [[!meta date="2018-01-25T00:00:00+0000"]]
 [[!meta updated="2018-02-06T10:12:53-0500"]]
 
+> This is one part of my coverage of KubeCon Austin 2017. Other
+> articles include:
+>
+>  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
+>  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
+>  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+>  * [[Monitoring with Prometheus 2.0|2018-01-17-monitoring-prometheus]]
+>  * [[Changes in Prometheus 2.0|2018-01-25-changes-prometheus-2.0]] (this article)
+
 [[!toc]]
 
 2017 was a big year for the [Prometheus](https://prometheus.io/)

prom article online
diff --git a/blog/changes-prometheus-2.0.mdwn b/blog/changes-prometheus-2.0.mdwn
index 19f54141..65b09d72 100644
--- a/blog/changes-prometheus-2.0.mdwn
+++ b/blog/changes-prometheus-2.0.mdwn
@@ -1,15 +1,9 @@
 [[!meta title="Changes in Prometheus 2.0"]]
-\[LWN subscriber-only content\]
--------------------------------
+[[!meta date="2018-01-25T00:00:00+0000"]]
+[[!meta updated="2018-02-06T10:12:53-0500"]]
 
-January 18, 2018
+[[!toc]]
 
-This article was contributed by Antoine Beaupré
-
-------------------------------------------------------------------------
-
-[KubeCon+CloudNativeCon
-NA](https://lwn.net/Archives/ConferenceByYear/#2017-KubeCon__CloudNativeCon_NA)
 2017 was a big year for the [Prometheus](https://prometheus.io/)
 project, as it [published its 2.0 release in
 November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/).
@@ -199,14 +193,11 @@ that affected certain monitoring software. This new Prometheus release
 could light a bright path for the future of monitoring in the free
 software world.
 
-\[We would like to thank LWN's travel sponsor, the Linux Foundation, for
-travel assistance to attend KubeCon + CloudNativeCon.\]
-
-
+------------------------------------------------------------------------
 
 > *This article [first appeared][] in the [Linux Weekly News][].*
 
 [first appeared]: https://lwn.net/Articles/744721/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn conference]]
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index 683fe3f7..9b8f13c4 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -2,6 +2,8 @@
 [[!meta date="2018-01-17T00:00:00+0000"]]
 [[!meta updated="2018-02-06T10:12:53-0500"]]
 
+[[!toc]]
+
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from
 monitored services and storing them in a time series database (TSDB). It
@@ -309,11 +311,9 @@ applications, it will serve them well.
 
 ------------------------------------------------------------------------
 
-
-
 > *This article [first appeared][] in the [Linux Weekly News][].*
 
 [first appeared]: https://lwn.net/Articles/744410/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn prometheus]]

prom article online
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index 5dfd926f..683fe3f7 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -1,9 +1,6 @@
 [[!meta title="Monitoring with Prometheus 2.0"]]
-\[LWN subscriber-only content\]
--------------------------------
-
 [[!meta date="2018-01-17T00:00:00+0000"]]
-[[!meta updated="2018-01-17T15:32:34-0500"]]
+[[!meta updated="2018-02-06T10:12:53-0500"]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from
@@ -310,6 +307,8 @@ existing monitoring systems to Prometheus, application developers should
 certainly consider deploying Prometheus to instrument their
 applications, it will serve them well.
 
+------------------------------------------------------------------------
+
 
 
 > *This article [first appeared][] in the [Linux Weekly News][].*

restore favicon as used in the header
This way we have square favicons for thumbnails and so on, and a round one for the theme logo
diff --git a/favicon.png b/favicon.png
new file mode 100644
index 00000000..b85f9b3a
Binary files /dev/null and b/favicon.png differ

fix favicon in the chaotic world of mobile devices
Turns out that Microsoft Windows, Chrome OS, Safari and others all
have their own precious little way of using the traditional favicon,
all of them pretty much incompatible with each other. Fixing this
requires regenerating not one but *seven* different images, and *two*
metadata files that are somehow magically read by different
browsers/environments.
This is a shame and an embarrassment, but I prefer to give users
pretty icons rather than fight such a useless battle.
This requires changes to the theme as well, which should hopefully
happen at the same time as this change.
Mostly cargo-culted from https://realfavicongenerator.net/
The original image can be found in 2013/10/14/prague/IMG_2576.CR2 in my
Photos archive.
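
For reference, regenerating a set like this by hand is straightforward with
ImageMagick; a rough sketch (the source file name is made up, the sizes follow
the usual per-platform conventions, and the images committed below were
actually produced with the generator linked above):

    src=logo-square.png    # hypothetical square export of the original photo
    convert "$src" -resize 16x16   favicon-16x16.png
    convert "$src" -resize 32x32   favicon-32x32.png
    convert "$src" -resize 150x150 mstile-150x150.png
    convert "$src" -resize 180x180 apple-touch-icon.png
    convert "$src" -resize 192x192 android-chrome-192x192.png
    convert "$src" -resize 512x512 android-chrome-512x512.png
    # the classic favicon.ico bundles the two small sizes
    convert favicon-16x16.png favicon-32x32.png favicon.ico
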
diff --git a/android-chrome-192x192.png b/android-chrome-192x192.png
new file mode 100644
index 00000000..abc6b68b
Binary files /dev/null and b/android-chrome-192x192.png differ
diff --git a/android-chrome-512x512.png b/android-chrome-512x512.png
new file mode 100644
index 00000000..1bf91f64
Binary files /dev/null and b/android-chrome-512x512.png differ
diff --git a/apple-touch-icon.png b/apple-touch-icon.png
new file mode 100644
index 00000000..924ec53b
Binary files /dev/null and b/apple-touch-icon.png differ
diff --git a/browserconfig.xml b/browserconfig.xml
new file mode 100644
index 00000000..b3930d0f
--- /dev/null
+++ b/browserconfig.xml
@@ -0,0 +1,9 @@
+<?xml version="1.0" encoding="utf-8"?>
+<browserconfig>
+    <msapplication>
+        <tile>
+            <square150x150logo src="/mstile-150x150.png"/>
+            <TileColor>#da532c</TileColor>
+        </tile>
+    </msapplication>
+</browserconfig>
diff --git a/favicon-16x16.png b/favicon-16x16.png
new file mode 100644
index 00000000..b68202c4
Binary files /dev/null and b/favicon-16x16.png differ
diff --git a/favicon-32x32.png b/favicon-32x32.png
new file mode 100644
index 00000000..c4e19d0b
Binary files /dev/null and b/favicon-32x32.png differ
diff --git a/favicon.ico b/favicon.ico
new file mode 100644
index 00000000..a5c5e271
Binary files /dev/null and b/favicon.ico differ
diff --git a/favicon.png b/favicon.png
deleted file mode 100644
index b85f9b3a..00000000
Binary files a/favicon.png and /dev/null differ
diff --git a/mstile-150x150.png b/mstile-150x150.png
new file mode 100644
index 00000000..7caddfd9
Binary files /dev/null and b/mstile-150x150.png differ
diff --git a/site.webmanifest b/site.webmanifest
new file mode 100644
index 00000000..4271884c
--- /dev/null
+++ b/site.webmanifest
@@ -0,0 +1,18 @@
+{
+    "name": "",
+    "short_name": "",
+    "icons": [
+        {
+            "src": "/android-chrome-192x192.png",
+            "sizes": "192x192",
+            "type": "image/png"
+        },
+        {
+            "src": "/android-chrome-512x512.png",
+            "sizes": "512x512",
+            "type": "image/png"
+        }
+    ],
+    "theme_color": "#ffffff",
+    "background_color": "#ffffff"
+}

weak attempt at fixing icons when linking to this site on mobile device screens
diff --git a/index.mdwn b/index.mdwn
index 8a0308ef..570ffcd8 100644
--- a/index.mdwn
+++ b/index.mdwn
@@ -1,4 +1,4 @@
-[[!meta title="À propos de moi"]]
+[[!meta title="Anarcat"]]
 
 <img src="folipon.jpg" align="right" />
 

Added a comment: Link restrictions
diff --git a/blog/2018-02-02-free-software-activities-january-2018/comment_1_da283900a23af4e24e52b88b6d6ac144._comment b/blog/2018-02-02-free-software-activities-january-2018/comment_1_da283900a23af4e24e52b88b6d6ac144._comment
new file mode 100644
index 00000000..f749e1c1
--- /dev/null
+++ b/blog/2018-02-02-free-software-activities-january-2018/comment_1_da283900a23af4e24e52b88b6d6ac144._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="91.176.163.76"
+ claimedauthor="Ben Hutchings"
+ subject="Link restrictions"
+ date="2018-02-05T01:00:11Z"
+ content="""
+Unfortunately the release notes only mentioned the symlink restriction and not the hard link restriction, and didn't mention sysctl.conf.
+
+What's more, this was done one release too late! Although these restrictions were implemented upstream in Linux 3.6, they are included in the linux package in wheezy.
+"""]]

fix broken links
diff --git a/blog/2018-02-02-free-software-activities-january-2018.mdwn b/blog/2018-02-02-free-software-activities-january-2018.mdwn
index 5456bbc5..dbe92401 100644
--- a/blog/2018-02-02-free-software-activities-january-2018.mdwn
+++ b/blog/2018-02-02-free-software-activities-january-2018.mdwn
@@ -26,7 +26,7 @@ issues in web development environemnts (like Javascript, Ruby or
 Python) but do not follow the normal [CVE](https://cve.mitre.org/) tracking system. This
 means that Debian had a few vulnerabilities in its jQuery packages
 that were not tracked by the security team, in particular three that
-were only on [Snyk.io](snyk.io/) ([[!debcve CVE-2012-6708]], [[!debcve
+were only on [Snyk.io](https://snyk.io/) ([[!debcve CVE-2012-6708]], [[!debcve
 CVE-2015-9251]] and [[!debcve CVE-2016-10707]]). The resulting
 discussion was interesting and is worth reading in full.
 
@@ -44,7 +44,7 @@ LTS meta-work
 -------------
 
 I've done some documentation and architecture work on the LTS project
-itself, documented in [this post](https://lists.debian.org/878tcnchnk.fsf@curie.anarc.at).
+itself, mostly around updating the wiki with current practices.
 
 OpenSSH DLA
 -----------

monthly lts report
diff --git a/blog/2018-02-02-free-software-activities-january-2018.mdwn b/blog/2018-02-02-free-software-activities-january-2018.mdwn
new file mode 100644
index 00000000..5456bbc5
--- /dev/null
+++ b/blog/2018-02-02-free-software-activities-january-2018.mdwn
@@ -0,0 +1,135 @@
+[[!meta title="January 2018 report: LTS"]]
+
+[[!toc levels=2]]
+
+I have already published a [yearly report](https://anarc.at/blog/2018-01-27-summary-2017-work/) which covers all of 2017
+but also some of my January 2018 work, so I'll try to keep this short.
+
+Debian Long Term Support (LTS)
+==============================
+
+This is my monthly [Debian LTS][] report. I was happy to switch to the
+new Git repository for the security tracker this month. It feels like
+some operations (namely pull / push) are a little slower, but others,
+like commits or log inspection, are much faster. So I think it is a
+net win.
+
+[Debian LTS]: https://www.freexian.com/services/debian-lts.html
+
+jQuery
+------
+
+I did some work on trying to cleanup a situation with the [jQuery
+package](https://security-tracker.debian.org/jquery), which I explained in more details in a [long post](https://lists.debian.org/878tct3kyi.fsf@curie.anarc.at). It
+turns out there are multiple databases out there that track security
+issues in web development environemnts (like Javascript, Ruby or
+Python) but do not follow the normal [CVE](https://cve.mitre.org/) tracking system. This
+means that Debian had a few vulnerabilities in its jQuery packages
+that were not tracked by the security team, in particular three that
+were only on [Snyk.io](snyk.io/) ([[!debcve CVE-2012-6708]], [[!debcve
+CVE-2015-9251]] and [[!debcve CVE-2016-10707]]). The resulting
+discussion was interesting and is worth reading in full.
+
+A more worrying aspect of the problem is that this problem is not
+limited to flashy new web frameworks. Ben Hutchings [estimated](https://lists.debian.org/1516926679.5097.121.camel@decadent.org.uk)
+that almost half of the Linux kernel vulnerabilities are not tracked
+by CVE. It seems the concensus is that we want to try to follow the
+CVE process, and Mitre has been helpful in distributing this by
+letting other entities, called CVE Numbering Authorities or [CNA](https://cve.mitre.org/cve/request_id.html#cna_participants),
+issue their own CVEs. After contacting Snyk, it turns out that they
+have started the process of becoming a CNA and are trying to get this
+part of their workflow, so that's a good sign.
+
+LTS meta-work
+-------------
+
+I've done some documentation and architecture work on the LTS project
+itself, documented in [this post](https://lists.debian.org/878tcnchnk.fsf@curie.anarc.at).
+
+OpenSSH DLA
+-----------
+
+I've done a quick security update of OpenSSH for LTS, which resulted
+in [DLA-1257-1](https://lists.debian.org/20180126211311.mrb5ogdcxc5hi5n4@curie.anarc.at). Unfortunately, after a discussion with the
+security researcher that published that CVE, it turned out that this
+was only a "self-DOS", i.e. that the `NEWKEYS` attack would only make
+the SSH client terminate its own connection, and therefore not impact
+the rest of the server. One has to wonder, in that case, why this was
+issue a CVE at all: presumably the vulnerability could be leveraged
+somehow, but I did not look deeply enough into it to figure that out.
+
+Hopefully the patch won't introduce a regression: I tested this
+summarily and it didn't seem to cause issue at first glance.
+
+Hardlinks attacks
+-----------------
+
+An interesting attack ([CVE-2017-18078](https://security-tracker.debian.org/tracker/CVE-2017-18078)) was discovered against
+systemd where the "tmpfiles" feature could be abused to bypass
+filesystem access restrictions through hardlinks. The trick is that
+the attack is possible only if kernel hardening (specifically
+`fs.protected_hardlinks`) is turned off. That feature is available in
+the Linux kernel since the 3.6 release, but was actually turned *off*
+by default in 3.7. In the [commit message]( https://github.com/torvalds/linux/commit/561ec64ae67ef25cac8d72bb9c4bfc955edfd415 ), Linus Torvalds
+explained the change was breaking some userland applications, which is
+a huge taboo in Linux, and recommended that distros configure this at
+boot instead. Debian took the reverse approach and Hutchings issued a
+[patch]( https://sources.debian.org/src/linux/3.16.7-ckt20-1+deb8u3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/) which reverts the default to the more secure default. But
+this means users of custom kernels are still vulnerable to this issue.
+
+But, more importantly, this affects more than systemd. The
+vulnerability also happens when using plain old `chown` with hardening
+turned off, when running a simple command like this:
+
+    chown -R non-root /path/owned/by/non-root
+
+I didn't realize this, but hardlinks share permissions: if you change
+permissions on file `a` that's hardlinked to file `b`, both files have
+the new permissions. This is especially nasty if users can hardlink to
+critical files like `/etc/password` or suid binaries, which is why the
+hardening was introduced in the first place.
+
+In Debian, this is especially an issue in maintainer scripts, which
+often call `chown -R` on arbitrary, non-root directories. Daniel Kahn
+Gillmor had reported this to the Debian security team all the way back
+in 2011, but it didn't get traction back then. He now opened [[!debbug
+889066]] to at least enable a warning in lintian and an issue was also
+opened on colord [[!debbug 889060]], as an example, but many more
+packages are vulnerable. Again, this is only if hardening is somewhat
+turned off.
+
+Normally, systemd is supposed to turn that hardening on, which should
+protect custom kernels, but this was [turned off in
+Debian](https://salsa.debian.org/systemd-team/systemd/commit/3e1bfe0d84545557d268c1293fff0d5f3db3b5c7). Anyways, Debian still supports non-systemd init systems
+(although those users mostly probably all migrated to [Devuan](https://devuan.org/)) so
+the fix wouldn't be complete. I have therefore filed [[!debbug
+889098]] against procps (which owns `/etc/sysctl.conf` and related
+files) to try and fix the issue more broadly there.
+
+And to be fair, this was very explicitly [mentioned in the jessie
+release notes](https://www.debian.org/releases/jessie/amd64/release-notes/ch-whats-new.en.html#security) so those people without the protection kind of get
+what they desserve here...
+
+p7zip
+-----
+
+Lastly, I did a fairly trivial update of the [p7zip](https://security-tracker.debian.org/p7zip) package, which
+resulted in [DLA-1268-1](https://lists.debian.org/20180202155210.7ivnhmbuhqe4jfmj@curie.anarc.at). The patch was [sent upstream](https://sourceforge.net/p/p7zip/bugs/204/) and went
+through a few iterations, including coordination with the security
+researcher.
+
+Unfortunately, the latter wasn't willing to share the proof of concept
+(PoC) so that we could test the patch. We are therefore trusting the
+researcher that the fix works, which is too bad because *they* do not
+trust *us* with the PoC...
+
+Other free software work
+========================
+
+I probably did more stuff in January that wasn't documented in the
+previous report. But I don't know if it's worth my time going through
+all this. Expect a report in February instead! :)
+
+Have happy new year and all that stuff.
+
+[[!tag debian-planet debian debian-lts python-planet software geek monthly-report free]]
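
As an aside to the hardlink discussion in the report above, checking and
enforcing that hardening on a custom kernel only takes a couple of commands;
the file name under `/etc/sysctl.d` below is arbitrary:

    sysctl fs.protected_hardlinks fs.protected_symlinks   # 1 means the protection is on
    sudo sysctl -w fs.protected_hardlinks=1               # enable at runtime
    echo fs.protected_hardlinks=1 | sudo tee /etc/sysctl.d/99-local-hardening.conf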

add back missing word, thx r
diff --git a/blog/2018-01-27-summary-2017-work.mdwn b/blog/2018-01-27-summary-2017-work.mdwn
index ae9d8903..6b522c5a 100644
--- a/blog/2018-01-27-summary-2017-work.mdwn
+++ b/blog/2018-01-27-summary-2017-work.mdwn
@@ -320,7 +320,7 @@ away from GTK2, which is deprecated. I will probably [abandon the GUI
 in Monkeysign](https://0xacab.org/monkeysphere/monkeysign/issues/21#note_132520) but gameclock will probably need a [rewrite of its
 GUI](https://gitlab.com/anarcat/gameclock/issues/1). This begs the question of how we can maintain software in the
 longterm if even the graphical interface (even Xorg is going away!)
-under our feet all the time. Without this change, both software could
+is swept away under our feet all the time. Without this change, both software could
 have kept on going for another decade without trouble. But now, I need
 to spend time just to keep those tools from failing to build at all.
 

marvin/marcos stats updates
diff --git a/hardware/history.mdwn b/hardware/history.mdwn
index 71651217..79d77c8a 100644
--- a/hardware/history.mdwn
+++ b/hardware/history.mdwn
@@ -17,7 +17,8 @@ See also my software [[software/history]].
     sid)
   * tangerine: Pentium III 700MHz 30GB disk, 256MB ram, laptop (Debian
     etch)
-  * marvin: Pentium II 233MHz 34GB disk, 65MB ram, server + workstation?
+  * marvin: Pentium II 233MHz 34GB disk, 65MB ram, server, merged with
+    lenny to yield marcos in 2009 (2011?)
     (Debian 3.1)
 * 2006: Thinkpad T22 (stolen?)
 * 2006-2007?: Toshiba Satellite A30
@@ -30,14 +31,15 @@ See also my software [[software/history]].
 * 2008/2009?-2011: Asus Aspire One D250 (Atom N270 1.6GHz, 1GB ram, 160GB
   disque), suspected compromised by RCMP and replaced
   http://wiki.debian.org/InstallingDebianOn/Acer/AspireOne-D250-1821
-* 2009-...: custom server ([[server/marcos]])
 * 2010 linux counter entry:
   * lenny: AMD Athlon 1.1GHz 200GB disk, 1GB ram, workstation (debian lenny)
   * mumia: Pentium M 1GHz 40GB disk, 1GB ram, laptop (Debian lenny)
 * 2010: HP Mini 10 [[blog/2010-03-18-hp-mini-10-netbook-doom]]
+* 2011-...: custom server ([[server/marcos]]), merge of marvin and
+  lenny, backups of marvin archived in two disks (~120GB)
 * 2011-...: Thinkpad X120e (angela, 600$, 4GB RAM, AMD E-350, battery
   changed in 2015, see [[blog/2015-09-28-fun-with-batteries]], debian
-  wheezy, then jessie)
+  wheezy, then jessie, then stretch...)
 * 2017-...: Intel NUC desktop (curie, 750$, 16GB, Intel i3-6100U
   2.3Ghz 4 threads, M.2 500GB disk,
   [installation report](https://wiki.debian.org/InstallingDebianOn/Intel/NUC6i3SYH#preview),

Added a comment: NCIX
diff --git a/blog/2018-01-28-large-disk-price-review/comment_3_6200d4c608a5b021c105fe94d46f653f._comment b/blog/2018-01-28-large-disk-price-review/comment_3_6200d4c608a5b021c105fe94d46f653f._comment
new file mode 100644
index 00000000..df01b5a2
--- /dev/null
+++ b/blog/2018-01-28-large-disk-price-review/comment_3_6200d4c608a5b021c105fe94d46f653f._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ ip="128.233.249.248"
+ subject="NCIX"
+ date="2018-01-29T23:01:52Z"
+ content="""
+Sad to hear the news about NCIX, thanks for sharing. My go-to for hardware in the past.
+"""]]

Added a comment
diff --git a/blog/2018-01-28-large-disk-price-review/comment_2_73dab9b823b289c23bd53d20313aa6c0._comment b/blog/2018-01-28-large-disk-price-review/comment_2_73dab9b823b289c23bd53d20313aa6c0._comment
new file mode 100644
index 00000000..1418842a
--- /dev/null
+++ b/blog/2018-01-28-large-disk-price-review/comment_2_73dab9b823b289c23bd53d20313aa6c0._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ ip="45.72.160.23"
+ claimedauthor="Steve"
+ subject="comment 2"
+ date="2018-01-29T20:33:52Z"
+ content="""
+Merci… pour ce qui est locale maintenant: canadacomputers.com sont bien, quatres magasins alentours de Montréal (et quatres alentours d'Ottawa).
+"""]]

update: switched drive and better tool
diff --git a/blog/2018-01-28-large-disk-price-review/comment_1_5144166845aa015c3edcd38e46c1aeb8._comment b/blog/2018-01-28-large-disk-price-review/comment_1_5144166845aa015c3edcd38e46c1aeb8._comment
new file mode 100644
index 00000000..c6c54b8c
--- /dev/null
+++ b/blog/2018-01-28-large-disk-price-review/comment_1_5144166845aa015c3edcd38e46c1aeb8._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""update"""
+ date="2018-01-29T04:51:50Z"
+ content="""
+
+A fellow Debian developer wondered at how Canada could be so backwards
+to not have a site like [Geizhals.eu](https://geizhals.eu/), where you have detailed
+stats like price per TB. Turns out we have one after all,
+[pcpartpicker.com](https://ca.pcpartpicker.com/). The problem is, as usual,
+[Canadians](https://en.wikipedia.org/wiki/Blame_Canada)... "We're [not even a real country anyway](https://en.wikipedia.org/wiki/Numbered_Treaties) ([another
+way to see that](https://native-land.ca/)).
+
+I also got a [Seagate IronWolf ST8000VN0022](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179003&ignorebbr=1&_ga=2.23517690.1316039684.1517178266-780354650.1517178265) because (a) it
+actually ships from Newegg in Canada instead of a reseller in the US
+(so one less trip overall which means less delays and gaz waste) and
+(b) I don't trust that cheap HGST device. Turns out to about 39$/TB
+which is pretty close to par... 
+"""]]

regroup stuff in camera again
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 324a5b66..ff3a78f8 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -1,4 +1,5 @@
-Ceci est la documentation sur l'achat d'un nouvel appareil, mais aussi une tentative d'inventaire.
+Documentation sur la photo: équipements utilisés, histoire,
+comparaison de matériel et autre.
 
 [[!toc levels=2]]
 
@@ -91,6 +92,73 @@ The above list was created with
 ... and eye-ball parsing, copy-paste and editing in Emacs. Should
 probably be improved.
 
+Inventaire
+==========
+
+Appareils numériques
+--------------------
+
+* [Canon Powershot G12] - firmware 1.00G
+* [Nokia N900]
+* [Android G1]
+* [Canon Powershot A430]
+
+ [Android G1]: https://en.wikipedia.org/wiki/Android_G1
+ [Nokia N900]: https://en.wikipedia.org/wiki/Nokia_N900
+ [Canon Powershot G12]: https://en.wikipedia.org/wiki/Canon_PowerShot_G12
+ [Canon Powershot A430]: https://en.wikipedia.org/wiki/Canon_PowerShot_A430 
+
+Appareils analogues
+-------------------
+
+* [Minolta SRT-200](http://www.rokkorfiles.com/SRT%20Series.htm#b200)
+* [Pentax ME](https://en.wikipedia.org/wiki/Pentax_ME)
+* [Canon FTb](https://en.wikipedia.org/wiki/Canon_FTb)
+* [Canon EOS Rebel 2000](https://en.wikipedia.org/wiki/Canon_EOS_Rebel_2000)
+* Ricoh one take AF 39mm 3.9 - ma première caméra!
+
+Lentilles
+---------
+
+* Image 70-210mm 3.8, [Minolta SR] (monté sur le Minolta SRT-200)
+* Minolta 135mm 3.5, [Minolta SR]
+* Minolta 50mm 2, [Minolta SR]
+* Vivitar 2x converter, [Minolta SR]
+* SMC Pentax-M 50mm 1.4, [Pentax K] (monté sur le Pentax ME, mais malheureusement égratignée)
+* Canon 50mm 1.8, [Canon FD][] (monté sur le Canon FTb)
+* Soligor 75-260mm 4.5, [Canon FD]
+* Canon 28-80mm 3.5-5.6, [Canon EF]
+* Canon 75-300mm 4-5.6, [Canon EF]
+
+ [Minolta SR]: https://en.wikipedia.org/wiki/Minolta_SR-mount
+ [Pentax K]: https://en.wikipedia.org/wiki/Pentax_K-mount
+ [Canon FD]: https://en.wikipedia.org/wiki/Canon_FD_lens_mount
+ [Canon EF]: https://en.wikipedia.org/wiki/Canon_EF_lens_mount
+
+Note: of all those lens, only the [Canon EF] and [Pentax K] are still in use.
+
+Flash
+-----
+
+* Vivitar 2000, horse-shoe avec câble de synchro (pentax?)
+* Suntax 16A
+
+Reference
+---------
+
+ * [Sensor sizes](https://en.wikipedia.org/wiki/Image_sensor_format#Common_image_sensor_formats)
+   * Full frame (Sony, etc): 864mm² (100%)
+   * APS-C (Fuji, etc): 370mm² (42%)
+   * APS-C (Canon): 329mm² (38%)
+   * 4/3 (Olympus): 225mm² (26%)
+   * 1/1.7" (Canon Powershot G12): 43mm² (5%)
+ * Lentilles:
+   * [Lens mount guide](http://www.kehblog.com/2011/12/lens-mount-guide-part-1.html)
+   * [Another](http://rick_oleson.tripod.com/index-99.html)
+   * [Wikipedia](https://en.wikipedia.org/wiki/Lens_mount)
+   * [Lens buying guide](https://www.dpreview.com/articles/9162056837/digital-camera-lens-buying-guide)
+ * [Darktable camera support](https://www.darktable.org/resources/camera-support/): pretty uniform across brands
+
 2013-2017 shopping
 ==================
 
@@ -587,59 +655,8 @@ cons:
  * no builtin flash (but hot-shoe flash included)
  * [grainy when compared with X-T2](https://www.dpreview.com/reviews/image-comparison/fullscreen?attr18=lowlight&attr13_0=fujifilm_xt2&attr13_1=olympus_em1ii&attr15_0=raw&attr15_1=raw&attr16_0=25600&attr16_1=25600&attr126_0=1&attr126_1=1&normalization=full&widget=9&x=0.036022800728683066&y=0.3545271629778671)
 
-Inventaire
-==========
-
-Appareils numériques
---------------------
-
-* [Canon Powershot G12] - firmware 1.00G
-* [Nokia N900]
-* [Android G1]
-* [Canon Powershot A430]
-
- [Android G1]: https://en.wikipedia.org/wiki/Android_G1
- [Nokia N900]: https://en.wikipedia.org/wiki/Nokia_N900
- [Canon Powershot G12]: https://en.wikipedia.org/wiki/Canon_PowerShot_G12
- [Canon Powershot A430]: https://en.wikipedia.org/wiki/Canon_PowerShot_A430 
-
-Appareils analogues
--------------------
-
-* [Minolta SRT-200](http://www.rokkorfiles.com/SRT%20Series.htm#b200)
-* [Pentax ME](https://en.wikipedia.org/wiki/Pentax_ME)
-* [Canon FTb](https://en.wikipedia.org/wiki/Canon_FTb)
-* [Canon EOS Rebel 2000](https://en.wikipedia.org/wiki/Canon_EOS_Rebel_2000)
-* Ricoh one take AF 39mm 3.9 - ma première caméra!
-
-Lentilles
----------
-
-* Image 70-210mm 3.8, [Minolta SR] (monté sur le Minolta SRT-200)
-* Minolta 135mm 3.5, [Minolta SR]
-* Minolta 50mm 2, [Minolta SR]
-* Vivitar 2x converter, [Minolta SR]
-* SMC Pentax-M 50mm 1.4, [Pentax K] (monté sur le Pentax ME, mais malheureusement égratignée)
-* Canon 50mm 1.8, [Canon FD][] (monté sur le Canon FTb)
-* Soligor 75-260mm 4.5, [Canon FD]
-* Canon 28-80mm 3.5-5.6, [Canon EF]
-* Canon 75-300mm 4-5.6, [Canon EF]
-
- [Minolta SR]: https://en.wikipedia.org/wiki/Minolta_SR-mount
- [Pentax K]: https://en.wikipedia.org/wiki/Pentax_K-mount
- [Canon FD]: https://en.wikipedia.org/wiki/Canon_FD_lens_mount
- [Canon EF]: https://en.wikipedia.org/wiki/Canon_EF_lens_mount
-
-Note: of all those lens, only the [Canon EF] and [Pentax K] are still in use.
-
-Flash
------
-
-* Vivitar 2000, horse-shoe avec câble de synchro (pentax?)
-* Suntax 16A
-
-Lens Comparison
-===============
+Lens price comparison
+---------------------
 
  * wide: <28mm (<18mm APS-C, or <14mm 4/3)
  * normal: ~50mm (~30mm APS-C or 25mm 4/3)
@@ -647,8 +664,7 @@ Lens Comparison
 
 Prix chez B&H, 2017-12-13.
 
-Prime, normal, 1.8
-------------------
+### Prime, normal, 1.8 ###
 
  * Canon: 125$ 50mm EF
  * Sony:  200$ 50mm FE
@@ -665,8 +681,7 @@ Conclusion:
  * Nikon closer
  * Fuji weird, but very competitive for <= f/1.4
 
-Prime, normal, 1.4
-------------------
+### Prime, normal, 1.4 ###
 
  * Sony: 300$ 50mm
  * Canon: 330$ 50mm
@@ -684,8 +699,7 @@ Conclusion:
  * Nikon weird
  * Olympus weirder
 
-Zoom
-----
+### Zoom ###
 
 SLR (Canon, Nikon) ommitted for simplicity.
 
@@ -702,8 +716,7 @@ Conclusion:
  * Sony leading all around, esp. in that continus f/4
  * Fuji last, but competitive
 
-Telephoto
----------
+### Telephoto ###
 
 SLR (Canon, Nikon) ommitted for simplicity.
 
@@ -731,8 +744,7 @@ Conclusion:
  * Sony generally close
  * Fuji competitive, cheapest for fast primes
 
-Overall
--------
+### Overall ###
 
  * SLR are cheapest
  * Olympus cheapest for zooms, Sony very close, Fuji not completely off
@@ -743,19 +755,3 @@ Overall
 
 Looks like Fuji is targeting a more high-end market, Sony is all over

(diff truncated)
camera: fix headings and add generator scripts
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 27d33b80..324a5b66 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -2,8 +2,29 @@ Ceci est la documentation sur l'achat d'un nouvel appareil, mais aussi une tenta
 
 [[!toc levels=2]]
 
+History
+=======
+
+Overview of important dates
+---------------------------
+
+ * 1988: film photography starts
+ * 2004: digital photography starts, <1GB/year
+ * 2006: Canon PowerShot A430, ~1GB/year
+ * 2009: Nokia N900 (mobile phone), ~1GB/year
+ * 2012: Canon PowerShot G12, 10-30GB/year
+ * 2018: Fujifilm X-T2, 20GB/mth?
+
+The above was created with:
+
+    git annex info --json 2017/* | jq '.["directory","local annex size"]'
+
+.. and some manual formatting in Emacs.
+
 Disk usage
-==========
+----------
+
+This list details per-year disk usage of my Photo archive:
 
  * 1969: 0.1 GB
  * 1970: 0.8 GB
@@ -25,7 +46,14 @@ Disk usage
  * 2018: 6 GB (FujiFilm X-T2, one month)
  * Total: 140GB
 
-Camera dates:
+Years before 2004 are probably mislabeled. Archives from 1988 to 2004
+are still in film and haven't been imported.
+
+Camera dates
+------------
+
+This is a more exhaustive list of which camera was used during which
+period.
 
  * 2004: random camera (Canon Powershot A70)
  * 2005: random cameras (HP Photosmart C200, Canon PowerShot S1 IS,
@@ -54,14 +82,14 @@ Camera dates:
    with X-T2
  * 2018: X-T2 (ongoing) 
 
-Summary dates:
+The above list was created with
+
+    LANG=C for file in 20*/*/*/*  20*/*/*/*/* ; do 
+        printf "$file: $(exiv2 $file 2> /dev/null | grep model | sed 's/.*://')\n"
+    done | tee models | sed 's#/.*:##'  | sort -n | uniq -c 
 
- * 1998: beginning of photography
- * 2004: beginning of digital photography
- * 2006: Canon PowerShot A430
- * 2009: Nokia N900 (mobile phone)
- * 2012: Canon PowerShot G12
- * 2018: Fujifilm X-T2
+... and eye-ball parsing, copy-paste and editing in Emacs. Should
+probably be improved.
 
 2013-2017 shopping
 ==================
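
The camera-model listing loop in the hunk above is fragile (unquoted paths, no
handling of spaces in file names); a possible cleanup, assuming the same
directory layout and that `exiv2` prints a "Camera model" line, could look like
this:

    # count photos per year and camera model; same idea as the loop above,
    # but with quoting and without the intermediate "models" file
    LANG=C find 20* -type f -print0 |
        while IFS= read -r -d '' file; do
            model=$(exiv2 "$file" 2>/dev/null | grep -i 'model' | sed 's/.*: *//')
            printf '%s: %s\n' "${file%%/*}" "$model"
        done | sort | uniq -c | sort -rn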

add disk space and camera listing
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index d2af4ac5..27d33b80 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -2,6 +2,70 @@ Ceci est la documentation sur l'achat d'un nouvel appareil, mais aussi une tenta
 
 [[!toc levels=2]]
 
+Disk usage
+==========
+
+ * 1969: 0.1 GB
+ * 1970: 0.8 GB
+ * 1998: 0.3 GB
+ * 2004: 0.4 GB
+ * 2005: 0.3 GB
+ * 2006: 0.9 GB (Canon PowerShot A430)
+ * 2007: 2 GB
+ * 2008: 0.9 GB
+ * 2009: 0.8 GB (Nokia N900)
+ * 2010: 1 GB
+ * 2011: 1 GB
+ * 2012: 10 GB (Canon PowerShot G12)
+ * 2013: 33 GB
+ * 2014: 27 GB
+ * 2015: 11 GB
+ * 2016: 0.4 GB
+ * 2017: 32 GB (20 GB in december: FujiFilm X-T2 test shoot!)
+ * 2018: 6 GB (FujiFilm X-T2, one month)
+ * Total: 140GB
+
+Camera dates:
+
+ * 2004: random camera (Canon Powershot A70)
+ * 2005: random cameras (HP Photosmart C200, Canon PowerShot S1 IS,
+   Canon PowerShot S30,  E4600 (?), 2005  FinePix2600Zoom,  FinePix
+   A330,  PhotoSmart C200,  QSS-29_31)
+ * 2006: Mostly Canon Powershot (A430, A610, S30, SD450), but also
+     FinePix A330 and PhotoSmart C200)
+ * 2007: most shots with PowerShot A430 again, but also some shots
+    with Canon EOS 40D, DMC-TZ1 (?), FinePix F30, KODAK EASYSHARE C533
+    ZOOM DIGITAL CAMERA, QSS-30 (?)
+ * 2008: PowerShot A430 still, but also KODAK EASYSHARE C533 ZOOM
+      DIGITAL CAMERA, VLUU L310W / Samsung L310W 9 2009 (?)
+ * 2009: N900 arrives, but still mostly Canon PowerShot A430 (also
+      Canon PowerShot A570 IS ?)
+ * 2010: now mostly N900, some Canon PowerShot A430 left, some with a
+     NIKON D3000
+ * 2011: N900, some tests with Canon EOS 60D and NIKON D80
+ * 2012: PowerShot G12 arrives, good number of shots with Canon EOS
+    40D, still a lots of shots with N900
+ * 2013: PowerShot G12 dominant, test shoot with a Canon EOS 5D Mark
+     III and some N900 and DSLR-A100 (?)
+ * 2014: PowerShot G12, some 40D and Nikon D300
+ * 2015: PowerShot G12, test shoot with Olympus E-M1
+ * 2016: PowerShot G12, some DMC-FZ150 (?), HTC One S
+ * 2017: PowerShot G12, more tests with E-M1, NIKON D90. test shoot
+   with X-T2
+ * 2018: X-T2 (ongoing) 
+
+Summary dates:
+
+ * 1998: beginning of photography
+ * 2004: beginning of digital photography
+ * 2006: Canon PowerShot A430
+ * 2009: Nokia N900 (mobile phone)
+ * 2012: Canon PowerShot G12
+ * 2018: Fujifilm X-T2
+
+2013-2017 shopping
+==================
+
 Update: j'ai acheté une Fujifilm X-T2, principalement à cause de sa
 familiarité avec les vieux systèmes argentique et la qualité
 exceptionnelle des rendus de l'appareil, ce qui réduit ma dépendance à
@@ -10,10 +74,9 @@ l'ordinateur.
 [Comparatif actuel DPR](https://www.dpreview.com/products/compare/side-by-side?products=fujifilm_x100f&products=fujifilm_xt1&products=fujifilm_xt2&products=oly_em5ii&products=olympus_em1ii&products=sony_a7_ii&products=sony_a9&products=nikon_d750&products=nikon_d7500&products=canon_g12&sortDir=ascending)
 
 Things I need
-=============
+-------------
 
-Absolute requirements
----------------------
+### Absolute requirements ###
 
  * interchangeable lenses
  * builtin flash
@@ -25,8 +88,7 @@ Absolute requirements
  * sealed body
  * SD card support
 
-Nice to have
-------------
+### Nice to have ###
 
  * USB charging
  * top screen?
@@ -39,23 +101,21 @@ Nice to have
  * timelapse mode (intervalometer)
  * compatible with my current remote
 
-Candy on top
-------------
+### Candy on top ###
 
  * reasonable video features (e.g. be able to adjust settings while filming)
  * exposure bracketing
  * full frame
  * free or cheap (~200-300$)
 
-Not necessary
--------------
+### Not necessary ###
 
  * touch screen
  * wifi
  * gps
 
 Nikon
-=====
+-----
 
 Pro:
 
@@ -75,8 +135,7 @@ Con:
 
 There's a way to hack that connector to accept more standard ones, see [this instructable document](http://www.instructables.com/id/Nikon-D90-MC-DC2-Remote-Shutter-Hack/).
 
-D300
-----
+### D300 ###
 
 650$ used @ simons
 
@@ -93,8 +152,7 @@ Cons:
  * no video
  * CF card!
 
-D300S
------
+### D300S ###
 
 1200$ at futureshop, case only.
 
@@ -109,8 +167,7 @@ Cons:
  * "low" megapixel for the price (13MP)
  * only 720p video and 11khz audio!
 
-D700
-----
+### D700 ###
 
 1000$ used @ simons
 
@@ -125,8 +182,7 @@ Cons:
  * no video
  * more expensive
 
-D7000
------
+### D7000 ###
 
 [<del>~750$ à lozeau sans lentille</del>](https://lozeau.com/produits/appareils-photo/reflex/nikon-d7000-boitier-seulement/)
 [<del>1050 avec 18-105VR</del>](https://lozeau.com/produits/appareils-photo/reflex/ensemble-nikon-d7000-avec-18-105mm-vr/) maintenant 810$ pour le boitier et 1160$ pour le kit?
@@ -149,8 +205,7 @@ Cons:
  * ISO/mode/etc buttons are not on top like the D300
  * no mode lock
 
-D7100
------
+### D7100 ###
 
 Pros:
 
@@ -166,8 +221,7 @@ Cons:
 d7500: 1500 lozeau
 d7200: 1000$ lozeau: https://lozeau.com/produits/fr/photo/appareils-reflex/nikon/nikon/boitier-nikon-d7200-p24089c74c75c76/
 
-D750
-----
+### D750 ###
 
 stats:
 
@@ -186,8 +240,7 @@ con:
 
  * expensive (boitier 1800CAD lozeau 2017-12-7)
 
-Lentilles
----------
+### Lentilles ###
 
  * Simon's
   * Nikon 18-105 @ 3.5-5.6: 350$
@@ -215,7 +268,7 @@ Lentilles
   * le reste des zooms est tout plus cher que Simon's ou B&H
 
 Canon
-=====

(diff truncated)
make link work, publish on planet
diff --git a/blog/2018-01-28-large-disk-price-review.mdwn b/blog/2018-01-28-large-disk-price-review.mdwn
index 72bf1ce3..96d26e8e 100644
--- a/blog/2018-01-28-large-disk-price-review.mdwn
+++ b/blog/2018-01-28-large-disk-price-review.mdwn
@@ -33,34 +33,35 @@ for items that are not on `newegg.ca`, as of today.
 8TB
 ---
 
-| Brand   | Model        | Price | $/TB | fail% | Link                                                            |
-| ------- | ------------ | ----- | ---- | ----- | --------------------------------------------------------------- |
-| HGST    | 0S04012      | 280$  | 35$  | N/A   | https://www.newegg.ca/Product/Product.aspx?item=N82E16822146142 |
-| Seagate | ST8000NM0055 | 320$  | 40$  | 1.04% | https://www.newegg.ca/Product/Product.aspx?Item=1Z4-002P-000N0  |
-| WD      | WD80EZFX     | 364$  | 46$  |  N/A  | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822235063 |
-| Seagate | ST8000DM002  | 380$  | 48$  | 0.72% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179001 |
-| HGST | HUH728080ALE600 | 791$  | 99$  | 0.00% | https://www.newegg.ca/Product/Product.aspx?Item=9SIA9763FK4876  |
+| Brand   | Model        | Price     | $/TB | fail% | Notes |
+| ------- | ------------ | --------- | ---- | ----- | ----- |
+| HGST    | 0S04012      | [280$](https://www.newegg.ca/Product/Product.aspx?item=N82E16822146142) | 35$  | N/A   |  |
+| Seagate | ST8000NM0055 | [320$](https://www.newegg.ca/Product/Product.aspx?Item=1Z4-002P-000N0) | 40$  | 1.04% |  |
+| WD      | WD80EZFX     | [364$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822235063) | 46$  |  N/A  |  |
+| Seagate | ST8000DM002  | [380$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179001) | 48$  | 0.72% |  |
+| HGST | HUH728080ALE600 | [791$](https://www.newegg.ca/Product/Product.aspx?Item=9SIA9763FK4876) | 99$  | 0.00% |  |
 
 6TB
 ---
 
-| Brand   | Model        | Price | $/TB | fail% | Link                                                            |
-| ------- | ------------ | ----- | ---- | ----- | --------------------------------------------------------------- |
-| HGST    | 0S04007      | 220$  | 37$  | N/A   | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822146118 |
-| Seagate | ST6000AS0002 | 230$  | 38$  | N/A   | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822178750CVF |
-| WD      | WD60EFRX     | 280$  | 47$  | 1.80% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822236737 |
-| Seagate | ST6000DX000  | ~423$ | 106$ | 0.42% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822178520 (not on .ca) |
+| Brand   | Model        | Price      | $/TB | fail% | Notes |
+| ------- | ------------ | ---------- | ---- | ----- | ----- |
+| HGST    | 0S04007      | [220$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822146118)  | 37$  | N/A   |       |
+| Seagate | ST6000DX000  | ~[222$](https://www.newegg.com/Product/Product.aspx?Item=9SIA5GC6TP2180&cm_re=ST6000DX000-_-9SIA5GC6TP2180-_-Product) | 56$  | 0.42% | not on .ca, refurbished |
+| Seagate | ST6000AS0002 | [230$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822178750CVF)  | 38$  | N/A   |       |
+| WD      | WD60EFRX     | [280$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822236737)  | 47$  | 1.80% |       |
+| Seagate | STBD6000100  | [343$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822178520)  | 58$  | N/A   |       |
 
 4TB
 ---
 
-| Brand   | Model        | Price | $/TB | fail% | Link                                                            |
-| ------- | ------------ | ----- | ---- | ----- | --------------------------------------------------------------- |
-| Seagate | ST4000DM004  | 125$  | 31$  | N/A   | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179299 |
-| Seagate | ST4000DM000  | 150$  | 38$  | 3.28% | https://www.newegg.ca/Product/Product.aspx?Item=9SIAA685UU9636  |
-| WD      | WD40EFRX     | 155$  | 39$  | 0.00% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822236599 |
-| HGST | HMS5C4040BLE640 | ~242$ | 61$  | 0.36% | https://www.newegg.com/Product/Product.aspx?Item=1Z4-001J-002X3 (not on .ca!) |
-| Toshiba | MB04ABA400V  | ~300$ | 74$  | 0.00% | https://www.newegg.com/Product/Product.aspx?Item=N82E16822149552 (not on .ca!) |
+| Brand   | Model        | Price      | $/TB | fail% | Notes      |
+| ------- | ------------ | ---------- | ---- | ----- | ---------- |
+| Seagate | ST4000DM004  | [125$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179299)  | 31$  | N/A   |  |
+| Seagate | ST4000DM000  | [150$](https://www.newegg.ca/Product/Product.aspx?Item=9SIAA685UU9636)  | 38$  | 3.28% |  |
+| WD      | WD40EFRX     | [155$](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822236599)  | 39$  | 0.00% |  |
+| HGST | HMS5C4040BLE640 | [~242$](https://www.newegg.com/Product/Product.aspx?Item=1Z4-001J-002X3) | 61$  | 0.36% | not on .ca |
+| Toshiba | MB04ABA400V  | [~300$](https://www.newegg.com/Product/Product.aspx?Item=N82E16822149552) | 74$  | 0.00% | not on .ca |
 
 Conclusion
 ----------
@@ -68,4 +69,7 @@ Conclusion
 Cheapest per TB costs seem to be in the 4TB range, but the 8TB HGST
 comes really close. Reliabilty for this drive could be an issue,
 however - I can't explain why it is so cheap compared to other
-devices...
+devices... But I guess we'll see where it goes as I'll just order the
+darn thing and try it out.
+
+[[!tag debian-planet hardware review]]
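
For the record, the $/TB column in the tables above is just the price divided
by the capacity and rounded, e.g. for the 280$ 8TB HGST:

    awk 'BEGIN { printf "%.0f$/TB\n", 280 / 8 }'   # prints 35$/TB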

a quick large disk price review
diff --git a/blog/2018-01-28-large-disk-price-review.mdwn b/blog/2018-01-28-large-disk-price-review.mdwn
new file mode 100644
index 00000000..72bf1ce3
--- /dev/null
+++ b/blog/2018-01-28-large-disk-price-review.mdwn
@@ -0,0 +1,71 @@
+[[!meta title="4TB+ large disk price review"]]
+
+For my personal backups, I am now looking at 4TB+ single-disk
+long-term storage. I currently have 3.5TB of offline storage, split in
+two disks: this is rather inconvenient as I need to plug both in a
+toaster-like SATA enclosure which gathers dusts and performs like
+crap. Now I'm looking at hosting offline backups at a friend's place
+so I need to store everything in a single drive, to save space.
+
+This means I need at least 4TB of storage, and those needs are going
+to continuously expand in the future. Since this is going to be
+offsite, swapping the drive isn't really convenient (especially
+because syncing all that data takes a long time), so I figured I would
+also look at more than 4 TB.
+
+So I built those neat little tables. I took the prices from
+[Newegg.ca](https://www.newegg.ca/) or [Newegg.com](https://newegg.com/) as a fallback when the item wasn't
+available in Canada. I used to order from [NCIX](https://ncix.com/) because it was
+"more" local, but they unfortunately went [bankrupt](https://www.anandtech.com/show/12115/ncix-files-for-bankruptcy-after-restructuring-attempts) and in the
+worse possible way: the website is still up and you can order stuff,
+but those orders never ship. Sad to see a 20-year old institution go
+out like that; I blame [Jeff Bezos](https://en.wikipedia.org/wiki/Jeff_Bezos).
+
+I also used failure rate figures from the [latest Backblaze
+review](https://www.backblaze.com/blog/hard-drive-failure-rates-q3-2017/), although those should always be taken with a grain of
+salt. For example, the apparently stellar 0.00% failure rates are all
+on sample sizes too small to be statistically significant (<100
+drives).
+
+All prices are in [CAD](https://en.wikipedia.org/wiki/Canadian_dollar), sometimes after conversion from [USD](https://en.wikipedia.org/wiki/United_States_dollar)
+for items that are not on `newegg.ca`, as of today.
+
+8TB
+---
+
+| Brand   | Model        | Price | $/TB | fail% | Link                                                            |
+| ------- | ------------ | ----- | ---- | ----- | --------------------------------------------------------------- |
+| HGST    | 0S04012      | 280$  | 35$  | N/A   | https://www.newegg.ca/Product/Product.aspx?item=N82E16822146142 |
+| Seagate | ST8000NM0055 | 320$  | 40$  | 1.04% | https://www.newegg.ca/Product/Product.aspx?Item=1Z4-002P-000N0  |
+| WD      | WD80EZFX     | 364$  | 46$  |  N/A  | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822235063 |
+| Seagate | ST8000DM002  | 380$  | 48$  | 0.72% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179001 |
+| HGST | HUH728080ALE600 | 791$  | 99$  | 0.00% | https://www.newegg.ca/Product/Product.aspx?Item=9SIA9763FK4876  |
+
+6TB
+---
+
+| Brand   | Model        | Price | $/TB | fail% | Link                                                            |
+| ------- | ------------ | ----- | ---- | ----- | --------------------------------------------------------------- |
+| HGST    | 0S04007      | 220$  | 37$  | N/A   | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822146118 |
+| Seagate | ST6000AS0002 | 230$  | 38$  | N/A   | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822178750CVF |
+| WD      | WD60EFRX     | 280$  | 47$  | 1.80% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822236737 |
+| Seagate | ST6000DX000  | ~423$ | 106$ | 0.42% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822178520 (not on .ca) |
+
+4TB
+---
+
+| Brand   | Model        | Price | $/TB | fail% | Link                                                            |
+| ------- | ------------ | ----- | ---- | ----- | --------------------------------------------------------------- |
+| Seagate | ST4000DM004  | 125$  | 31$  | N/A   | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179299 |
+| Seagate | ST4000DM000  | 150$  | 38$  | 3.28% | https://www.newegg.ca/Product/Product.aspx?Item=9SIAA685UU9636  |
+| WD      | WD40EFRX     | 155$  | 39$  | 0.00% | https://www.newegg.ca/Product/Product.aspx?Item=N82E16822236599 |
+| HGST | HMS5C4040BLE640 | ~242$ | 61$  | 0.36% | https://www.newegg.com/Product/Product.aspx?Item=1Z4-001J-002X3 (not on .ca!) |
+| Toshiba | MB04ABA400V  | ~300$ | 74$  | 0.00% | https://www.newegg.com/Product/Product.aspx?Item=N82E16822149552 (not on .ca!) |
+
+Conclusion
+----------
+
+Cheapest per TB costs seem to be in the 4TB range, but the 8TB HGST
+comes really close. Reliabilty for this drive could be an issue,
+however - I can't explain why it is so cheap compared to other
+devices...

fix broken links
diff --git a/blog/2018-01-27-summary-2017-work.mdwn b/blog/2018-01-27-summary-2017-work.mdwn
index 49981df9..ae9d8903 100644
--- a/blog/2018-01-27-summary-2017-work.mdwn
+++ b/blog/2018-01-27-summary-2017-work.mdwn
@@ -195,7 +195,7 @@ New programs
 
 I have written a bunch of completely new programs:
 
- * [Stressant](2017-01-31-new-desktop-testing-with-stressant) - a small wrapper script to stress-test new
+ * [[Stressant|2017-01-31-new-desktop-testing-with-stressant]] - a small wrapper script to stress-test new
    machines. no idea if anyone's actually using the darn thing, but I
    have found it useful from time to time.
 
@@ -237,7 +237,7 @@ New maintainerships
 
 I also got more or less deeply involved in various communities:
 
- * [Linkchecker was forked](2017-01-31-free-software-activities-january-2017/#linkchecker-forked) and I've been reviewing PRs and bug
+ * [[Linkchecker was forked|2017-01-31-free-software-activities-january-2017#linkchecker-forked]] and I've been reviewing PRs and bug
    reports ever since. It's been difficult to keep up, and I still
    haven't been able to actually push out a new release because of
    administrative hurdles ([PyPI, what's up??](https://github.com/linkcheck/linkchecker/issues/4)) but also because

add table of contents
diff --git a/blog/2018-01-27-summary-2017-work.mdwn b/blog/2018-01-27-summary-2017-work.mdwn
index eaa3e03e..49981df9 100644
--- a/blog/2018-01-27-summary-2017-work.mdwn
+++ b/blog/2018-01-27-summary-2017-work.mdwn
@@ -1,5 +1,7 @@
 [[!meta title="A summary of my 2017 work"]]
 
+[[!toc]]
+
 New years are strange things: for [most arbitrary reasons](https://en.wikipedia.org/wiki/New_Year%27s_Day), around
 January 1st we reset a bunch of stuff, change calendars and forget
 about work for a while. This is also when I forget to do my monthly

creating tag page tag/gameclock
diff --git a/tag/gameclock.mdwn b/tag/gameclock.mdwn
new file mode 100644
index 00000000..64896522
--- /dev/null
+++ b/tag/gameclock.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged gameclock"]]
+
+[[!inline pages="tagged(gameclock)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/sigal
diff --git a/tag/sigal.mdwn b/tag/sigal.mdwn
new file mode 100644
index 00000000..6ca52fe7
--- /dev/null
+++ b/tag/sigal.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged sigal"]]
+
+[[!inline pages="tagged(sigal)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/yearly-report
diff --git a/tag/yearly-report.mdwn b/tag/yearly-report.mdwn
new file mode 100644
index 00000000..85f4ed48
--- /dev/null
+++ b/tag/yearly-report.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged yearly-report"]]
+
+[[!inline pages="tagged(yearly-report)" actions="no" archive="yes"
+feedshow=10]]
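
The three tag pages above are identical boilerplate except for the tag name; ikiwiki's tag plugin presumably auto-created them. Purely for illustration, a hypothetical Python sketch that writes the same kind of page could look like this:

```python
import os

# Hypothetical helper (not how the commits above were made): write an
# ikiwiki tag page following the template used in the diffs above.
TEMPLATE = """\
[[!meta title="pages tagged {tag}"]]

[[!inline pages="tagged({tag})" actions="no" archive="yes"
feedshow=10]]
"""

def write_tag_page(tag, basedir="tag"):
    os.makedirs(basedir, exist_ok=True)
    path = os.path.join(basedir, tag + ".mdwn")
    with open(path, "w") as f:
        f.write(TEMPLATE.format(tag=tag))
    return path

for tag in ("gameclock", "sigal", "yearly-report"):
    print("wrote", write_tag_page(tag))
```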

yearly summary
diff --git a/blog/2017-04-29-free-software-activities-april-2017.mdwn b/blog/2017-04-29-free-software-activities-april-2017.mdwn
index afb7baed..a98a8cab 100644
--- a/blog/2017-04-29-free-software-activities-april-2017.mdwn
+++ b/blog/2017-04-29-free-software-activities-april-2017.mdwn
@@ -419,4 +419,4 @@ trying to avoid in the first place.
 [reproducible builds]: https://forum.f-droid.org/t/are-reproducible-builds-possible-now/335
 [usability review]: https://forum.f-droid.org/t/usability-review-of-the-privileged-extension/334/
 
-[[!tag debian-planet debian debian-lts python-planet software geek free kodi debmans xmonad emacs ikiwiki stressant git-annex android]]
+[[!tag debian-planet debian debian-lts python-planet software geek free kodi debmans xmonad emacs ikiwiki stressant git-annex android monthly-report]]
diff --git a/blog/2017-07-03-free-software-activities-june-2017.mdwn b/blog/2017-07-03-free-software-activities-june-2017.mdwn
index fad2dbda..40404baf 100644
--- a/blog/2017-07-03-free-software-activities-june-2017.mdwn
+++ b/blog/2017-07-03-free-software-activities-june-2017.mdwn
@@ -210,4 +210,4 @@ Small fry
 
 That's about it! And that was supposed to be a slow month...
 
-[[!tag debian-planet debian debian-lts python-planet software geek free debmans stressant wallabako docker subsonic meta]]
+[[!tag debian-planet debian debian-lts python-planet software geek free debmans stressant wallabako docker subsonic meta monthly-report]]
diff --git a/blog/2017-07-29-free-software-activities-july-2017.mdwn b/blog/2017-07-29-free-software-activities-july-2017.mdwn
index 5aa6b31f..33158dcf 100644
--- a/blog/2017-07-29-free-software-activities-july-2017.mdwn
+++ b/blog/2017-07-29-free-software-activities-july-2017.mdwn
@@ -288,4 +288,4 @@ were long overdue:
    upstream. this ended up to be way too much overhead and I reverted
    to my old normal history habits.
 
-[[!tag debian-planet debian debian-lts python-planet software geek free beets ecdysis subsonic]]
+[[!tag debian-planet debian debian-lts python-planet software geek free beets ecdysis subsonic monthly-report]]
diff --git a/blog/2017-09-01-free-software-activities-august-2017.mdwn b/blog/2017-09-01-free-software-activities-august-2017.mdwn
index 4f7701ea..dece36dc 100644
--- a/blog/2017-09-01-free-software-activities-august-2017.mdwn
+++ b/blog/2017-09-01-free-software-activities-august-2017.mdwn
@@ -369,4 +369,4 @@ changes which may be of interest:
    bug report! The upload of this in Debian is pending a review from
    the [release team](https://bugs.debian.org/871937).
 
-[[!tag debian-planet debian debian-lts python-planet software geek free gnupg pgp ansible numpy gitlab git-annex lfs debconf]]
+[[!tag debian-planet debian debian-lts python-planet software geek free gnupg pgp ansible numpy gitlab git-annex lfs debconf monthly-report]]
diff --git a/blog/2017-10-02-free-software-activities-september-2017.mdwn b/blog/2017-10-02-free-software-activities-september-2017.mdwn
index cae97242..ee64491c 100644
--- a/blog/2017-10-02-free-software-activities-september-2017.mdwn
+++ b/blog/2017-10-02-free-software-activities-september-2017.mdwn
@@ -293,4 +293,4 @@ for that directory:
 
     ProxyPass /.well-known/ !
 
-[[!tag debian-planet debian debian-lts python-planet software geek free ham restic ansible feed2exec]]
+[[!tag debian-planet debian debian-lts python-planet software geek free ham restic ansible feed2exec monthly-report]]
diff --git a/blog/2017-11-02-free-software-activities-october-2017.mdwn b/blog/2017-11-02-free-software-activities-october-2017.mdwn
index 15f02b46..14072c85 100644
--- a/blog/2017-11-02-free-software-activities-october-2017.mdwn
+++ b/blog/2017-11-02-free-software-activities-october-2017.mdwn
@@ -389,4 +389,4 @@ over the month.
    
 > There is no [web extension] only XUL! - [Inside joke](https://en.wikipedia.org/wiki/XUL#Etymology_and_Ghostbusters_references)
 
-[[!tag debian feed2exec git haskell mediawiki debian-lts software geek free debian-planet python-planet]]
+[[!tag debian feed2exec git haskell mediawiki debian-lts software geek free debian-planet python-planet monthly-report]]
diff --git a/blog/2017-11-30-free-software-activities-november-2017.mdwn b/blog/2017-11-30-free-software-activities-november-2017.mdwn
index cd6b016a..c1eb083c 100644
--- a/blog/2017-11-30-free-software-activities-november-2017.mdwn
+++ b/blog/2017-11-30-free-software-activities-november-2017.mdwn
@@ -366,4 +366,4 @@ Miscellaneous
 
 I spent only 30% of this month on paid work.
 
-[[!tag debian-planet debian debian-lts python-planet software geek free monkeysphere feed2exec git mediawiki python-planet drupal hardware stressant security]]
+[[!tag debian-planet debian debian-lts python-planet software geek free monkeysphere feed2exec git mediawiki python-planet drupal hardware stressant security monthly-report]]
diff --git a/blog/2018-01-27-summary-2017-work.mdwn b/blog/2018-01-27-summary-2017-work.mdwn
new file mode 100644
index 00000000..eaa3e03e
--- /dev/null
+++ b/blog/2018-01-27-summary-2017-work.mdwn
@@ -0,0 +1,354 @@
+[[!meta title="A summary of my 2017 work"]]
+
+New years are strange things: for [most arbitrary reasons](https://en.wikipedia.org/wiki/New_Year%27s_Day), around
+January 1st we reset a bunch of stuff, change calendars and forget
+about work for a while. This is also when I forget to do my monthly
+report and then procrastinate until I figure out I might as well do a
+year report while I'm at it, and then do nothing at all for a while.
+
+So this is my humble attempt at fixing this, about a month late. I'll
+try to cover December as well, but since not much happened then, I
+figured I could also review the last year and think back on the trends
+there. Oh, and you'll get chocolate cookies of course. Hang on to your
+eyeballs, this won't hurt a bit.
+
+Debian Long Term Support (LTS)
+==============================
+
+Those of you used to reading those reports might be tempted to skip
+this part, but wait! I actually don't have much to report here and
+instead you will find an incredibly insightful and relevant rant.
+
+So I didn't actually *do* any LTS work in December. I reduced my
+available hours to focus on writing (more on that later). Overall,
+I ended up working about 11 hours per month on LTS in 2017. That is
+less than the 16-20 hours I was available during that time. Part of
+that is me regularly procrastinating, but another part is that finding
+work to do is sometimes difficult. The "easy" tasks often get picked
+and dispatched quickly, so the stuff that remains, when you're not
+constantly looking, often consists of very difficult packages.
+
+I especially remember the pain of working on
+[[libreoffice|2017-11-30-free-software-activities-november-2017#libreoffice]],
+the [[KRACK
+update|2017-11-02-free-software-activities-october-2017#wpa-amp-krack-update]],
+more tiff, GraphicsMagick and ImageMagick vulnerabilities than I care
+to remember, and, ugh, Ruby... Masochists (also known as "security
+researchers") can find the details of those excruciating experiments
+in [[tag/debian-lts]] for the monthly reports.
+
+I don't want to sound like an [old idiot](https://www.theguardian.com/us-news/2017/apr/28/i-thought-being-president-would-be-easier-trumps-reuters-interview-highlights), but I must admit, after
+working on LTS for two years, that working on patching old software
+for security bugs is hard work, and not particularly pleasant on top
+of it. You're basically always dealing with other people's garbage:
+badly written code that hasn't been touched in years, sometimes
+decades, that no one wants to take care of.
+
+Yet someone needs to take care of it. A large part of the technical
+community considers Linux distributions in general, and LTS releases
+in particular, as "too old to care for". As if our elders, once they
+passed a certain age, should just be rolled out to the nearest
+dumpster or just left rotting on the curb. I suspect most people don't
+realize that Debian "stable" (stretch) was released less than a year
+ago, and "oldstable" (jessie) is a little over two years old. LTS
+(wheezy), our oldest supported release, is only four years old now,
+and will become unsupported this summer, on its fifth
+anniversary. Five years may seem like a long time in computing but,
+really, there's a whole universe out there and five years is
+absolutely nothing on the scale of the changes I'm interested in:
+politics, society and the environment reach far beyond that
+shortsightedness.
+
+To put things in perspective, some people I know still run their
+office on an Apple II, which celebrated its 40th anniversary this
+year. *That* is "old". And the fact that the damn thing still works
+should command respect and admiration, more than contempt. In
+comparison, the phone I have, an LG G3, is running an unpatched,
+vulnerable version of Android because it cannot be updated, because
+it's locked out of the telcos' networks, because it was found in a taxi
+and reported "lost or stolen" (same thing, right?). And DRM
+protections in the bootloader keep me from doing the right thing and
+unbricking this device.
+
+We should build devices that last decades. Instead we fill junkyards
+with tons and tons of precious computing devices that have more
+precious metals than most people carry as jewelry. We are wasting
+generations of programmers, hardware engineers, human robots and
+precious, rare metals on speculative, useless devices that are
+[destroying](https://www.theguardian.com/technology/2017/dec/11/facebook-former-executive-ripping-society-apart) our [society](https://www.theatlantic.com/amp/article/534198/). Working on supporting LTS is a small
+part in trying to fix the problem, but right now I can't help but
+think we have a problem upstream, in the way we build those tools in
+the first place. It's just depressing to be at the receiving end of
+the billions of lines of code that get created every year. Hopefully,
+the [death of Moore's law](https://spectrum.ieee.org/semiconductors/design/the-death-of-moores-law-will-spur-innovation) could change that, but I'm afraid it's
+going to take another generation before programmers figure out how far
+away from their roots they have strayed. Maybe too long to keep
+ourselves from a civilization collapse.
+
+LWN publications
+================
+
+With that gloomy conclusion, let's switch gears and talk about
+something happier. So as I mentioned, in December, I reduced my LTS
+hours and focused instead on finishing my coverage of [KubeCon
+Austin](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america) for [LWN.net](https://lwn.net/). Three articles have already been
+published on the blog here:
+
+ * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
+ * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
+ * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+
+... and two more articles, about [Prometheus](https://prometheus.io/), are currently
+published as exclusives by LWN:
+
+ * [Monitoring with Prometheus 2.0](https://lwn.net/Articles/744410/) - an introductory article
+ * [Changes in Prometheus 2.0](https://lwn.net/Articles/744721/) - what changed, what's coming...
+
+I was surprised to see that the container runtimes article got such
+traction. It wasn't the most important debate in the whole conference,
+but there were some amazingly juicy bits, some of which we didn't even
+cover because those were... uh... rather controversial and we want
+the community to stay sane. Or saner, if that word can be applied at
+all to the container community at this point.
+
+I ended up publishing [[16 articles|tag/lwn]] at LWN this year. I'm
+really happy about that: I just love writing and even if it's in
+English (my native language is French), it's still better than
+rambling on my own like I do here. My editors allow me to publish
+well-polished articles, and I am hugely grateful for the privilege. Each
+article takes about 13 hours to write, on average. I'm less happy
+about that: I wish delivery were more streamlined, and I'll spare you the
+miserable story of the last-minute major changes I sent in some recent
+articles, for which I again apologize profusely to my editors.
+
+I'm often at a loss when I need to explain to friends and family what

(diff file truncated)
add a few more packages
diff --git a/software/packages.yml b/software/packages.yml
index e4e3d04f..c68cddf5 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -42,6 +42,7 @@
       - dispcalgui
       - gimp
       - inkscape
+      - rapid-photo-downloader
       - sane
       - xsane
 
@@ -54,6 +55,7 @@
       - apt-file
       - apt-listbugs
       - aptitude
+      - bats
       - bzr
       - build-essential
       - cdbs
@@ -274,13 +276,28 @@
       - yubikey-personalization
       - zotero-standalone
 
-  - name: install authorship tools (TeX)
+  - name: install authorship tools (incl. TeX)
     # This is mostly TeX-related packages
     tags: author
     apt: name={{item}} state=installed
     with_items:
       - auctex
       - dict
+      - dict-bouvier
+      - dict-devil
+      - dict-elements
+      - dict-foldoc
+      - dict-freedict-eng-fra
+      - dict-freedict-eng-spa
+      - dict-freedict-fra-eng
+      - dict-freedict-spa-eng
+      - dict-gazetteer2k
+      - dict-gcide
+      - dict-jargon
+      - dict-moby-thesaurus
+      - dict-vera
+      - dict-wn
+      - dictd
       - epubcheck
       - elpa-writegood-mode
       - libtext-multimarkdown-perl
@@ -296,6 +313,7 @@
     with_items:
       - analog
       - ansible
+      - apache2-utils
       - apt-transport-https
       - asciinema
       - borgbackup

small cosmetic fixes from lwn
diff --git a/blog/changes-prometheus-2.0.mdwn b/blog/changes-prometheus-2.0.mdwn
index b40d5d9e..19f54141 100644
--- a/blog/changes-prometheus-2.0.mdwn
+++ b/blog/changes-prometheus-2.0.mdwn
@@ -13,9 +13,9 @@ NA](https://lwn.net/Archives/ConferenceByYear/#2017-KubeCon__CloudNativeCon_NA)
 2017 was a big year for the [Prometheus](https://prometheus.io/)
 project, as it [published its 2.0 release in
 November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/).
-The new release ships long-standing bugfixes, new features and notably a
-new storage engine bringing major performance improvements. This comes
-at the cost of incompatible changes to the storage and
+The new release ships numerous bug fixes, new features and, notably, a
+new storage engine that brings major performance improvements. This
+comes at the cost of incompatible changes to the storage and
 configuration-file formats. An overview of Prometheus and its new
 release was presented to the [Kubernetes](https://kubernetes.io/)
 community in a
@@ -35,32 +35,32 @@ containers for deployments, which means rapid changes in parameters (or
 "labels" in Prometheus-talk) like hostnames or IP addresses. This was
 creating significant performance problems in Prometheus 1.0, which
 wasn't designed for such changes. To correct this, Prometheus ships a
-new [storage engine](https://github.com/prometheus/tsdb) was
+new [storage engine](https://github.com/prometheus/tsdb) that was
 [specifically designed](https://fabxc.org/tsdb/) to handle continuously
 changing labels. This was tested by monitoring a Kubernetes cluster
 where 50% of the pods would be swapped every 10 minutes; the new design
 was proven to be much more effective. The new engine
 [boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization)
-a hundred-fold I/O performance improvements, a three-fold improvement in
-CPU, a five-fold in memory usage and increased space efficiency. This
+a hundred-fold I/O performance improvement, a three-fold improvement in
+CPU, five-fold in memory usage, and increased space efficiency. This
 impacts container deployments, but it also means improvements for any
 configuration as well. Anecdotally, there was no noticeable extra load
 on the servers where I deployed Prometheus, at least nothing that the
 previous monitoring tool (Munin) could detect.
 
 Prometheus 2.0 also brings new features like snapshot backups. The
-project has a long-standing design wart over data volatility: backups
-are deemed to be unnecessary in Prometheus because metrics data is
-considered disposable. According to Goutham Veeramanchaneni, one of the
-presenters at KubeCon, "this approach apparently doesn't work for the
-enterprise". Backups *were* possible in 1.x, but they involved using
+project has a longstanding design wart regarding data volatility:
+backups are deemed to be unnecessary in Prometheus because metrics data
+is considered disposable. According to Goutham Veeramanchaneni, one of
+the presenters at KubeCon, "this approach apparently doesn't work for
+the enterprise". Backups *were* possible in 1.x, but they involved using
 filesystem snapshots and stopping the server to get a consistent view of
 the on-disk storage. This implied downtime, which was unacceptable for
-certain production deployments. Thanks to the new storage engine again,
+certain production deployments. Thanks again to the new storage engine,
 Prometheus can now perform fast and consistent backups, triggered
 through the web API.
 
-Another improvement is a fix to the long-standing [staleness handling
+Another improvement is a fix to the longstanding [staleness handling
 bug](https://github.com/prometheus/prometheus/issues/398) where it would
 take up to five minutes for Prometheus to notice when a target
 disappeared. In that case, when polling for new values (or "scraping" as
@@ -77,10 +77,10 @@ the problem is more complicated than originally thought, which means
 there's still a hard limit to how slowly you can fetch metrics from
 targets. This, in turn, means that Prometheus is not well suited for
 devices that cannot support sub-minute refresh rates, which, to be fair,
-is rather uncommon. For slower devices or statistics, a solutions
-include the node exporter "textfile support", which we mentioned in the
+is rather uncommon. For slower devices or statistics, a solution might
+be the node exporter "textfile support", which we mentioned in the
 previous article, and the
-[pushgateway](https://github.com/prometheus/pushgateway) daemon, which
+[`pushgateway`](https://github.com/prometheus/pushgateway) daemon, which
 allows pushing results from the targets instead of having the collector
 pull samples from targets.
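
As an aside to the pushgateway mentioned in the hunk above: with the Python prometheus_client library, a batch job can push its metrics instead of being scraped. A minimal sketch, assuming a pushgateway listening on localhost:9091, might look like this:

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

# Minimal sketch: a batch job records when it last completed and pushes
# that metric to a pushgateway (assumed to listen on localhost:9091),
# instead of being scraped directly by Prometheus.
registry = CollectorRegistry()
last_success = Gauge(
    "backup_last_success_unixtime",
    "Last time the backup job completed successfully",
    registry=registry,
)
last_success.set_to_current_time()
push_to_gateway("localhost:9091", job="backup", registry=registry)
```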
 
@@ -99,7 +99,7 @@ workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-i
 is to replicate the older 1.8 server to a new 2.0 replica, as the
 [network
 protocols](https://prometheus.io/docs/prometheus/latest/federation/) are
-still compatible. The older server can be then decommissioned when the
+still compatible. The older server can then be decommissioned when the
 retention window (which defaults to fifteen days) closes. While there is
 some work in progress to provide a way to convert 1.8 data storage
 to 2.0, new deployments should probably use the 2.0 release directly to
@@ -109,7 +109,7 @@ Another key point in the [migration
 guide](https://prometheus.io/docs/prometheus/2.0/migration/) is a change
 in the rules-file format. While 1.x used a custom file format, 2.0 uses
 YAML, matching the other Prometheus configuration files. Thankfully the
-[promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool)
+[`promtool`](https://github.com/prometheus/prometheus/tree/master/cmd/promtool)
 command handles this migration automatically. The [new
 format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/)
 also introduces [rule
@@ -117,8 +117,7 @@ groups](https://github.com/prometheus/prometheus/issues/1095), which
 improve control over the rules execution order. In 1.x, alerting rules
 were run sequentially but, in 2.0, the *groups* are executed
 sequentially and each group can have its own interval. This fixes the
-long-standing [race conditions between dependent
-rules](https://github.com/prometheus/prometheus/issues/1095) that create
+longstanding race conditions between dependent rules that create
 inconsistent results when rules would reuse the same queries. The
 problem should be fixed between groups, but rule authors still need to
 be careful of that limitation *within* a rule group.
@@ -126,9 +125,9 @@ be careful of that limitation *within* a rule group.
 Remaining limitations and future
 --------------------------------
 
-As we have seen in the introductory article, Prometheus may not be
-suitable for all workflows because of the limited default dashboards and
-alerts, but also because the lack of data-retention policies. There are,
+As we saw in the introductory article, Prometheus may not be suitable
+for all workflows because of its limited default dashboards and alerts,
+but also because of the lack of data-retention policies. There are,
 however, discussions about [variable per-series
 retention](https://github.com/prometheus/prometheus/issues/1381) in
 Prometheus and [native down-sampling support in the storage
@@ -151,10 +150,9 @@ aggregate samples and collect the results in a second server with a
 slower sampling rate and different retention policy. And of course, the
 new release features [external storage
 engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage)
-which can better support archival features. Those solutions are
-obviously not suitable for smaller deployments, which therefore need to
-make hard choices about discarding older samples or getting more disk
-space.
+that can better support archival features. Those solutions are obviously
+not suitable for smaller deployments, which therefore need to make hard
+choices about discarding older samples or getting more disk space.
 
 As part of the staleness improvements, Brazil also started working on
 "isolation" (the "I" in the [ACID
@@ -168,8 +166,8 @@ Prometheus gets stuck on locking. Some of the performance impact could
 therefore be offset under heavy load.
 
 Another performance improvement mentioned during the talk is an eventual
-query-engine rewrite. The current query engine can sometimes lead to
-excessive load for certain expensive queries, according the Prometheus
+query-engine rewrite. The current query engine can sometimes cause
+excessive loads for certain expensive queries, according to the Prometheus
 [security
 guide](https://prometheus.io/docs/operating/security/#denial-of-service).
 The goal would be to optimize the current engine so that those expensive
@@ -183,18 +181,18 @@ Debian to [remove the package from the i386
 architecture](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886702).
 It is currently unclear if this is a bug in Prometheus: indeed, it is
 strange that Debian tests actually pass in other 32-bit architectures
-like armel. Brazil, in the bug report, argued that "Prometheus isn't
-going to be very useful on a 32bit machine". The position of the project
-is currently that "if it runs, it runs but no guarantees or effort
-beyond that from our side".
+like armel. Brazil, in the bug report, argued that "*Prometheus isn't
+going to be very useful on a 32bit machine*". The position of the
+project is currently that "*'if it runs, it runs' but no guarantees or
+effort beyond that from our side*".
 
 I had the privilege to meet the [Prometheus
-team](https://prometheus.io/community/_) in Austin and was happy to see
-different consultants and organizations working together on the project.
-It reminded me of my [golden days in the
-Drupal](https://www.drupal.org/user/1274/) community: different
-companies cooperating on the same project in a harmonious environment.
-If Prometheus can keep that spirit together, it will be a welcome change
+team](https://prometheus.io/community/) at the conference in Austin and
+was happy to see different consultants and organizations working
+together on the project. It reminded me of my [golden days in the Drupal
+community](https://www.drupal.org/user/1274/): different companies
+cooperating on the same project in a harmonious environment. If
+Prometheus can keep that spirit together, it will be a welcome change
 from the
 [drama](http://freesoftwaremagazine.com/articles/nagios_and_icinga/)
 that affected certain monitoring software. This new Prometheus release

fix links to presentation repos
diff --git a/communication.mdwn b/communication.mdwn
index ed8b8e77..5901d6e6 100644
--- a/communication.mdwn
+++ b/communication.mdwn
@@ -58,10 +58,10 @@ Politiques
 
  * au sujet des médias sociaux, à [Être Social / Being Social](http://etresocialbeingsocial.org/program) (novembre 2013, Montréal)
  * [#MapleSpring: Media Spin and Social Struggles in Cloudy Times](https://program.ohm2013.org/event/263.html), à [OHM2013](http://ohm2013.org/), [slides](https://gitlab.com/anarcat/ohm2013/blob/de837263e5d4a0f908fd2576c1fa74dd9c0e396b/ohm2013-maple-spring.pin) (août 2013, Pays-Bas)
- * Introduction à PRISM au TA3M, [slides](http://src.anarc.at/presentations/prism.git/blob_plain/HEAD:/prism.pin) (juillet 2013, Montréal)
+ * Introduction à PRISM au TA3M, [slides](https://gitlab.com/anarcat/presentation-prism) (juillet 2013, Montréal)
  * Contre SOPA, PIPA et le projet de loi C-30, avec les Alter-Citoyens lors d'un 5@7 de Koumbit, vidéos: [partie 1](https://www.youtube.com/watch?v=lbXV5Xhy1AY), [partie 2](https://www.youtube.com/watch?v=I2wQabFt9CQ) (février 2012, Montréal)
- * "Le réseau et vous" au Forum Ouvert de Communautique, [vidéo](https://www.youtube.com/watch?v=sQEoXr_sn7s), [présentation](http://src.anarc.at/presentations.git/blob_plain/HEAD:/conferences/infrastructure-internet/reseau-et-vous.html) (décembre 2010, Montréal)
- * "Infrastructure et internet", au cours multimédia et société (?) de Stéphane Couture de l'UQAM (deux fois?), version longue de "Le réseau et vous", [présentation](http://src.anarc.at/presentations.git/blob_plain/HEAD:/conferences/infrastructure-internet/infrastructure-internet.html) (2008-2009, Montréal, basé sur une présentation de Lunar à Dijon)
+ * "Le réseau et vous" au Forum Ouvert de Communautique, [vidéo](https://www.youtube.com/watch?v=sQEoXr_sn7s), [présentation](https://gitlab.com/anarcat/koumbit/blob/master/conferences/infrastructure-internet/reseau-et-vous.html) (décembre 2010, Montréal)
+ * "Infrastructure et internet", au cours "Informatique et société" de Stéphane Couture de l'UQAM (deux fois?), version longue de "Le réseau et vous", [présentation](https://gitlab.com/anarcat/koumbit/blob/master/conferences/infrastructure-internet/infrastructure-internet.html) (2008-2009, Montréal, basé sur une présentation de Lunar à Dijon)
 
 J'ai donné plusieurs fois des présentations devant des classes au CEGEP Maisonneuve (informatique) et à l'UQAM (communications) au sujet des logiciels libres et de la neutralité des réseaux.
 
@@ -79,7 +79,9 @@ Techniques
  * [DrupalCamp 2009 Montréal - Aegir - One Drupal to Rule them All!](http://community.aegirproject.org/node/57) (octobre 2009)
  * [DrupalCon 2009 Paris - Automate your maintenance troubles away with the Aegir hosting system!](http://community.aegirproject.org/node/51) (septembre 2009)
 
-Il y a probablement plusieurs autres présentations manquantes ici, voir aussi le [dépôt complet](http://src.anarc.at/presentations.git/) (mirroir sur le [Redmine de Koumbit](https://redmine.koumbit.net/projects/presentationsRSSAtom)).
+Il y a probablement plusieurs autres présentations manquantes ici,
+voir aussi [l'archive des présentations de
+Koumbit](https://gitlab.com/anarcat/koumbit) et ailleurs.
 
 Photo
 =====

calendar update
diff --git a/archives/2018.mdwn b/archives/2018.mdwn
new file mode 100644
index 00000000..38d897b8
--- /dev/null
+++ b/archives/2018.mdwn
@@ -0,0 +1 @@
+[[!calendar type=year year=2018 pages="*"]]
diff --git a/archives/2018/01.mdwn b/archives/2018/01.mdwn
new file mode 100644
index 00000000..e6113229
--- /dev/null
+++ b/archives/2018/01.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=01 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(01) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/02.mdwn b/archives/2018/02.mdwn
new file mode 100644
index 00000000..36ec3e1e
--- /dev/null
+++ b/archives/2018/02.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=02 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(02) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/03.mdwn b/archives/2018/03.mdwn
new file mode 100644
index 00000000..150ddf34
--- /dev/null
+++ b/archives/2018/03.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=03 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(03) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/04.mdwn b/archives/2018/04.mdwn
new file mode 100644
index 00000000..8c047584
--- /dev/null
+++ b/archives/2018/04.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=04 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(04) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/05.mdwn b/archives/2018/05.mdwn
new file mode 100644
index 00000000..fc3b77de
--- /dev/null
+++ b/archives/2018/05.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=05 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(05) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/06.mdwn b/archives/2018/06.mdwn
new file mode 100644
index 00000000..19c3e9e2
--- /dev/null
+++ b/archives/2018/06.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=06 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(06) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/07.mdwn b/archives/2018/07.mdwn
new file mode 100644
index 00000000..3213f220
--- /dev/null
+++ b/archives/2018/07.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=07 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(07) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/08.mdwn b/archives/2018/08.mdwn
new file mode 100644
index 00000000..201b2bcf
--- /dev/null
+++ b/archives/2018/08.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=08 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(08) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/09.mdwn b/archives/2018/09.mdwn
new file mode 100644
index 00000000..08ddb5d3
--- /dev/null
+++ b/archives/2018/09.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=09 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(09) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/10.mdwn b/archives/2018/10.mdwn
new file mode 100644
index 00000000..9efd2c1f
--- /dev/null
+++ b/archives/2018/10.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=10 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(10) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/11.mdwn b/archives/2018/11.mdwn
new file mode 100644
index 00000000..1933e3c2
--- /dev/null
+++ b/archives/2018/11.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=11 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(11) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/12.mdwn b/archives/2018/12.mdwn
new file mode 100644
index 00000000..ff50f841
--- /dev/null
+++ b/archives/2018/12.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=12 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(12) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
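
The twelve monthly archive pages above all follow the same template; ikiwiki's `ikiwiki-calendar` helper normally generates them. Purely as an illustration, a hypothetical Python equivalent, using the page layout from the diffs above, could be:

```python
import os

# Hypothetical stand-in for ikiwiki-calendar: write the yearly and
# monthly archive pages shown in the diffs above.
MONTH_TEMPLATE = """\
[[!sidebar content=\"\"\"
[[!calendar type=month month={month:02d} year={year} pages="*"]]
\"\"\"]]

[[!inline pages="creation_month({month:02d}) and creation_year({year}) and *" show=0 feeds=no reverse=yes]]
"""

def write_archives(year, basedir="archives"):
    os.makedirs(os.path.join(basedir, str(year)), exist_ok=True)
    with open(os.path.join(basedir, "%d.mdwn" % year), "w") as f:
        f.write('[[!calendar type=year year=%d pages="*"]]\n' % year)
    for month in range(1, 13):
        path = os.path.join(basedir, str(year), "%02d.mdwn" % month)
        with open(path, "w") as f:
            f.write(MONTH_TEMPLATE.format(month=month, year=year))

write_archives(2018)
```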

comment on the theme update and possible alternatives
diff --git a/blog/2015-09-09-bootstrap/comment_10_b245c0c5d5d5a7d2fe77d488ecb256e7._comment b/blog/2015-09-09-bootstrap/comment_10_b245c0c5d5d5a7d2fe77d488ecb256e7._comment
new file mode 100644
index 00000000..a6fe7b11
--- /dev/null
+++ b/blog/2015-09-09-bootstrap/comment_10_b245c0c5d5d5a7d2fe77d488ecb256e7._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""another update"""
+ date="2018-01-19T01:59:58Z"
+ content="""
+I'm still using this theme, but it moved to [Gitlab](https://gitlab.com/anarcat/ikiwiki-bootstrap-anarcat). I also updated it to jquery 3.x and the latest bootstrap recently.
+
+I'm considering my options for Bootstrap alternatives. jQuery, in particular, seems like a huge liability nowadays. I found 3 undocumented vulnerabilities (no CVE, fixed) I was affected by so I will look at other options next time I go crazy and want to rebuild the theme here. Those I know of:
+
+ * [Milligram](https://milligram.io/)
+ * [Min](https://mincss.com/)
+ * [Mini](https://minicss.org/)
+ * [Pure](https://purecss.io/)
+ * [Skeleton](http://getskeleton.com/)
+
+There are probably a [billion](https://news.ycombinator.com/item?id=14120796) [more](https://news.ycombinator.com/item?id=14264494) [options](https://codecondo.com/minimal-css-frameworks-grid-systems/), [of](http://bashooka.com/resources/20-best-minimal-css-frameworks-2017/) [course](https://hn.algolia.com/?query=minimal%20css%20framework). Sigh.
+"""]]

document that git has moved to gitlab
diff --git a/services.mdwn b/services.mdwn
index 9f5a1395..97ec4c96 100644
--- a/services.mdwn
+++ b/services.mdwn
@@ -25,7 +25,7 @@ Service        | État                                      | Détails  | Depuis
 [[Multimedia]] | [[!color background=#00ff00 text="OK"]]   |          | 1999?    | privé       | [[!wikipedia XBMC]]    | archive audio et video, "cinéma maison"
 [[Web]]        | [[!color background=#00ff00 text="OK"]]   | [[sites hébergés|hosted]], [[nginx]] considéré, [[SSL]] à faire | 1999?    | public      | [Apache][] | hébergement de sites web, sur demande
 [[Wiki]]       | [[!color background=#ffff00 text="dev"]]  | <http://wiki.anarc.at>, à automatiser | 2011     | [public][9]      | [ikiwiki-hosting][]      | Hébergement de wikis [[ikiwiki]], sur demande
-[[Git]]        | [[!color background=#ffff00 text="dev"]]  | <http://src.anarc.at>, deprecated         | ~2012?   | [public][6] | [Git][]                | hébergement de dépôts git, sur demande, en migration vers [Gitlab](https://gitlab.com/anarcat)
+[[Git]]        | [[!color background=#ff0000 text="down"]]  | <http://src.anarc.at>, disabled         | ~2012-2017   | [public][6] | [Git][]                | hébergement de dépôts git, sur demande, migré vers [Gitlab](https://gitlab.com/anarcat)
 [[Gallery]]    | [[!color background=#ffff00 text="dev"]]  | <https://photos.anarc.at>, à automatiser | 1999?    | [public][4] | [Sigal][]         | galleries de photos, sur demande
 [[Stats]]      | [[!color background=#00ff00 text="OK"]]   | <http://munin.anarc.at> | 2012     | [public][8] | [Munin][]              | statistiques du réseau
 
diff --git a/services/git.mdwn b/services/git.mdwn
index ece8367c..ace27401 100644
--- a/services/git.mdwn
+++ b/services/git.mdwn
@@ -1,5 +1,10 @@
-J'héberge ici [quelques dépôts git](http://src.anarc.at) pour les [[projets|software]] sur lesquels je travaille durant mes temps libre.
+J'hébergeais ici [quelques dépôts git](http://src.anarc.at) pour les [[projets|software]] sur lesquels je travaille durant mes temps libre.
 
-Ceci fonctionne grâce au "daemon" git déployé par ikiwiki-hosting dans la ferme de [[wiki]]s et le mode "virtual hosting" de git-daemon.
+Ceci fonctionnait grâce au "daemon" git déployé par ikiwiki-hosting dans la ferme de [[wiki]]s et le mode "virtual hosting" de git-daemon.
 
-Sur demande, des dépôts stockés dans votre compte [[shell]] peuvent être publiés sur un domaine de votre choix.
+C'est maintenant hébergé sur [Gitlab](https://gitlab.com/anarcat) (et
+partiellement sur [GitHub](https://github.com/anarcat)) pour faciliter
+la collaboration.
+
+Une infrastructure git demeure, néanmoins, ici pour faire fonctionner
+le [[wiki]].

fix links to src.anarc.at to point to gitlab instead
diff --git a/blog/2013-12-17-password-reset-speedstream-5200-modems.mdwn b/blog/2013-12-17-password-reset-speedstream-5200-modems.mdwn
index f9e24588..20a10c45 100644
--- a/blog/2013-12-17-password-reset-speedstream-5200-modems.mdwn
+++ b/blog/2013-12-17-password-reset-speedstream-5200-modems.mdwn
@@ -19,6 +19,6 @@ Enjoy.
  [SNR (Signal to Noise Ratio)]: https://en.wikipedia.org/wiki/Signal-to-noise_ratio
  [Windows utility]: http://www.gabrielstrong.com/
  [description]: http://www.gabrielstrong.com/ethernet_packet.php
- [git repository]: http://src.anarc.at/speedstream-reset.git/
+ [git repository]: https://gitlab.com/anarcat/speedstream-reset
 
-[[!tag "debian" "debian-planet" "freebsd" "hack" "hardware" "software"]]
\ No newline at end of file
+[[!tag "debian" "debian-planet" "freebsd" "hack" "hardware" "software"]]
diff --git a/blog/2015-09-09-bootstrap.mdwn b/blog/2015-09-09-bootstrap.mdwn
index 291d1eb0..1d4c2d34 100644
--- a/blog/2015-09-09-bootstrap.mdwn
+++ b/blog/2015-09-09-bootstrap.mdwn
@@ -107,9 +107,9 @@ various other small changes, all of which are available in my
  [thorough evaluation]:
  https://ikiwiki.info/tips/bootstrap_themes_evaluation/?updated
  [Jak Linux]: http://jak-linux.org/about/
- [work with the sidebar plugin]: http://src.anarc.at/ikiwiki_bootstrap_anarcat.git/commitdiff/958f2874c17fb1d886abf2fbf6581d1cbe389531
- [added the regular action links in the top navbar]: http://src.anarc.at/ikiwiki_bootstrap_anarcat.git/commitdiff/d9218f0053de4959cf15cd427e505d06a8564e8a
- [personnal git repo]: http://src.anarc.at/ikiwiki_bootstrap_anarcat.git/
+ [work with the sidebar plugin]: https://gitlab.com/anarcat/ikiwiki-bootstrap-anarcat/commit//958f2874c17fb1d886abf2fbf6581d1cbe389531
+ [added the regular action links in the top navbar]: https://gitlab.com/anarcat/ikiwiki-bootstrap-anarcat/commit//d9218f0053de4959cf15cd427e505d06a8564e8a
+ [personnal git repo]: https://gitlab.com/anarcat/ikiwiki-bootstrap-anarcat/
 
 What doesn't work
 =================
diff --git a/blog/2016-01-05-free-software-activities-december-2015.mdwn b/blog/2016-01-05-free-software-activities-december-2015.mdwn
index 636ceb85..f15ce816 100644
--- a/blog/2016-01-05-free-software-activities-december-2015.mdwn
+++ b/blog/2016-01-05-free-software-activities-december-2015.mdwn
@@ -38,7 +38,7 @@ actual uploads for the upcoming month.
  [Redmine security issues]: https://tracker.debian.org/redmine
  [notice]: https://lists.debian.org/debian-lts/2015/12/msg00100.html
  [clarified]: https://lists.debian.org/debian-lts/2015/12/msg00092.html
- [tool to track patches through git and SVN merges in Redmine's history]: http://src.anarc.at/scripts.git/blob_plain/HEAD:/redmine-svn-inspector
+ [tool to track patches through git and SVN merges in Redmine's history]: https://gitlab.com/anarcat/scripts/blob/1756e5df166e00462de290a42e4a18ee0994d9a3/redmine-svn-inspector
  [proposed a small patch]: https://lists.debian.org/debian-lts/2015/12/msg00124.html
  [ongoing]: https://lists.debian.org/debian-lts/2016/01/msg00002.html
 
diff --git a/blog/2016-02-12-free-software-activities-february-2016.mdwn b/blog/2016-02-12-free-software-activities-february-2016.mdwn
index 0d7b3100..3655fc97 100644
--- a/blog/2016-02-12-free-software-activities-february-2016.mdwn
+++ b/blog/2016-02-12-free-software-activities-february-2016.mdwn
@@ -336,6 +336,6 @@ so I added it to my script, to be able to review photos prior to
 importing them into Darktable and git-annex.
 
  [fim]: http://www.nongnu.org/fbi-improved/
- [photos-import]: http://src.anarc.at/scripts.git/blob/HEAD:/photos-import
+ [photos-import]: https://gitlab.com/anarcat/scripts/blob/1756e5df166e00462de290a42e4a18ee0994d9a3/photos-import
 
 [[!tag monthly-report debian-planet debian python-planet software geek free debian-lts darktable markdown emacs irc]]
diff --git a/blog/2016-03-31-free-software-activities-march-2016.mdwn b/blog/2016-03-31-free-software-activities-march-2016.mdwn
index 4e7a4fe8..e2bb45ba 100644
--- a/blog/2016-03-31-free-software-activities-march-2016.mdwn
+++ b/blog/2016-03-31-free-software-activities-march-2016.mdwn
@@ -305,7 +305,7 @@ particularly the `Favorites` list which is populated by the "star"
 button in the UI.
 
 [GMPC]: https://gmpclient.org/
-[get-playlist]: http://src.anarc.at/scripts.git/blob_plain/HEAD:/get-playlist
+[get-playlist]: https://gitlab.com/anarcat/scripts/blob/1756e5df166e00462de290a42e4a18ee0994d9a3/get-playlist
 [M3U playlist]: https://en.wikipedia.org/wiki/M3U
 [extension to git-annex]: http://git-annex.branchable.com/tips/playlist_fetch/
 
@@ -343,7 +343,7 @@ operate transparently on multiple playlists. It has a bunch of
 heuristics to find files and uses a MPD server as a directory to
 search into. It can edit files in place or just act as a filter.
 
-[fix-playlists]: http://src.anarc.at/scripts.git/blob_plain/HEAD:/fix-playlists
+[fix-playlists]: https://gitlab.com/anarcat/scripts/blob/1756e5df166e00462de290a42e4a18ee0994d9a3/fix-playlists
 
 Useful snippets
 ---------------
@@ -360,11 +360,11 @@ But the most interesting snippet, for me, is this simple
 script that includes argument processing, logging and filtering of
 files, something which I was always copy-pasting around.
 
-[script snippet]: http://src.anarc.at/snippets.git/blob_plain/HEAD:/python-mode/script
+[script snippet]: https://gitlab.com/anarcat/yasnippets/blob/6130b3ea8210a0019561223838bb79013bf7b1b4/python-mode/script
 [AGPLv3]: http://www.gnu.org/licenses/agpl-3.0.html
-[license snippet]: http://src.anarc.at/snippets.git/blob_plain/HEAD:/python-mode/license
-[lts snippet]: http://src.anarc.at/snippets.git/blob_plain/HEAD:/markdown-mode/lts
-[snippets]: http://src.anarc.at/snippets.git/
+[license snippet]: https://gitlab.com/anarcat/yasnippets/blob/6130b3ea8210a0019561223838bb79013bf7b1b4/python-mode/license
+[lts snippet]: https://gitlab.com/anarcat/yasnippets/blob/6130b3ea8210a0019561223838bb79013bf7b1b4/markdown-mode/lts
+[snippets]: https://gitlab.com/anarcat/yasnippets/
 [Yasnippet]: https://capitaomorte.github.io/yasnippet/
 
 Other projects
@@ -387,7 +387,7 @@ And finally, a list of interesting issues *en vrac*:
 
 [OWS]: https://github.com/WhisperSystems
 [BitHub]: https://github.com/WhisperSystems/BitHub
-[photos-import]: http://src.anarc.at/scripts.git/blob/HEAD:/photos-import
+[photos-import]: https://gitlab.com/anarcat/scripts/blob/1756e5df166e00462de290a42e4a18ee0994d9a3/photos-import
 [sharing with the author]: https://github.com/ianw/photo-scripts/issues/1
 [shared his photos workflow]: https://www.technovelty.org/junkcode/durable-photo-workflow.html
 [filed a PR]: https://github.com/innir/gtranscribe/issues/4
diff --git a/blog/2016-05-19-free-software-activities-may-2016.mdwn b/blog/2016-05-19-free-software-activities-may-2016.mdwn
index e2246bb5..56577332 100644
--- a/blog/2016-05-19-free-software-activities-may-2016.mdwn
+++ b/blog/2016-05-19-free-software-activities-may-2016.mdwn
@@ -304,7 +304,7 @@ a chicken with his head cut off:
 
 [may eventually be merged in ikiwiki directly]: http://ikiwiki.info/todo/merge_bootstrap_branch/
 [ikistrap theme]: https://github.com/gsliepen/ikistrap
-[this git repository]: http://src.anarc.at/ikiwiki_bootstrap_anarcat.git/
+[this git repository]: https://gitlab.com/anarcat/ikiwiki-bootstrap-anarcat
 
 Finally, I should mention that I will be less active in the coming
 months, as I will be heading outside as the summer finally came! I
diff --git a/communication.mdwn b/communication.mdwn
index 61ef42cd..ed8b8e77 100644
--- a/communication.mdwn
+++ b/communication.mdwn
@@ -57,7 +57,7 @@ Politiques
 <!-- todo: move to a .bib file and add good entries to CV -->
 
  * au sujet des médias sociaux, à [Être Social / Being Social](http://etresocialbeingsocial.org/program) (novembre 2013, Montréal)
- * [#MapleSpring: Media Spin and Social Struggles in Cloudy Times](https://program.ohm2013.org/event/263.html), à [OHM2013](http://ohm2013.org/), [slides](http://src.anarc.at/presentations/ohm2013.git/blob_plain/HEAD:/ohm2013-maple-spring.pin) (août 2013, Pays-Bas)
+ * [#MapleSpring: Media Spin and Social Struggles in Cloudy Times](https://program.ohm2013.org/event/263.html), à [OHM2013](http://ohm2013.org/), [slides](https://gitlab.com/anarcat/ohm2013/blob/de837263e5d4a0f908fd2576c1fa74dd9c0e396b/ohm2013-maple-spring.pin) (août 2013, Pays-Bas)
  * Introduction à PRISM au TA3M, [slides](http://src.anarc.at/presentations/prism.git/blob_plain/HEAD:/prism.pin) (juillet 2013, Montréal)
  * Contre SOPA, PIPA et le projet de loi C-30, avec les Alter-Citoyens lors d'un 5@7 de Koumbit, vidéos: [partie 1](https://www.youtube.com/watch?v=lbXV5Xhy1AY), [partie 2](https://www.youtube.com/watch?v=I2wQabFt9CQ) (février 2012, Montréal)
  * "Le réseau et vous" au Forum Ouvert de Communautique, [vidéo](https://www.youtube.com/watch?v=sQEoXr_sn7s), [présentation](http://src.anarc.at/presentations.git/blob_plain/HEAD:/conferences/infrastructure-internet/reseau-et-vous.html) (décembre 2010, Montréal)
diff --git a/hardware/phone/htc-one-s.mdwn b/hardware/phone/htc-one-s.mdwn
index d0db5de7..e7a56271 100644
--- a/hardware/phone/htc-one-s.mdwn
+++ b/hardware/phone/htc-one-s.mdwn
@@ -496,7 +496,7 @@ limitations, most notably that it cannot remove files yet. Just
 installing the script in `$PATH` and making it executable should make
 it work.
 
-[git-annex-remote-dumb]: http://src.anarc.at/scripts.git/blob_plain/HEAD:/git-annex-remote-dumb
+[git-annex-remote-dumb]: https://gitlab.com/anarcat/scripts/blob/1756e5df166e00462de290a42e4a18ee0994d9a3/git-annex-remote-dumb
 
 To prepare the remote:
 
diff --git a/meta/license.mdwn b/meta/license.mdwn
index ab7e1e96..18e20b39 100644
--- a/meta/license.mdwn
+++ b/meta/license.mdwn
@@ -20,7 +20,7 @@ Design
 ------
 
 The design of this site falls under a different license, see the
-[source code](http://src.anarc.at/ikiwiki_bootstrap_anarcat.git/) for
+[source code](https://gitlab.com/anarcat/ikiwiki-bootstrap-anarcat) for
 the ultimate reference. Right now, it is a
 [modified bootstrap theme](https://anarc.at/blog/2015-09-09-bootstrap/)
 licensed under a MIT license and, like all bootstrap derived themes,
diff --git a/software/desktop.mdwn b/software/desktop.mdwn
index 19610b92..ef9bd3ce 100644
--- a/software/desktop.mdwn
+++ b/software/desktop.mdwn
@@ -95,7 +95,7 @@ well.
 I also considered [Obnam][] but it seems it doesn't scale so well.
 
  [bup]: https://bup.github.io/
- [bup-cron]: http://src.anarc.at/bup-cron.git/
+ [bup-cron]: https://gitlab.com/anarcat/bup-cron
  [Obnam]: http://liw.fi/obnam/
 
 Color theme: solarized
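
After a batch of link rewrites like the one above, it is easy to miss a spot. A small, hypothetical Python sketch (not part of this commit) to list any remaining src.anarc.at references in the wiki source could be:

```python
import pathlib
import re

# Hypothetical helper: list leftover references to the decommissioned
# src.anarc.at host in the wiki source so they can be pointed at GitLab.
PATTERN = re.compile(r"https?://src\.anarc\.at/\S*")

for path in pathlib.Path(".").rglob("*.mdwn"):
    text = path.read_text(errors="replace")
    for lineno, line in enumerate(text.splitlines(), 1):
        for match in PATTERN.finditer(line):
            print("%s:%d: %s" % (path, lineno, match.group(0)))
```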

remove local modifications to theme, merged
diff --git a/local.css b/local.css
deleted file mode 100644
index 7c3b1e15..00000000
--- a/local.css
+++ /dev/null
@@ -1,25 +0,0 @@
-/* this in particular belongs to the ikiwiki style.css, which is in the admonition patch */
-/* admonition start */
-#content div.caution,
-#content div.important,
-#content div.note,
-#content div.tip,
-#content div.warning {
-    border: 1pt solid #aaa;
-    margin: 1em 3em 1em 3em;
-    background-repeat: no-repeat;
-    background-position: 8px 8px;
-    min-height: 48px; /*48=32+8+8 but doesn't work with IE*/
-    padding: 1em 1em 1em 48px;
-}
-#content div.tip { background-image: url("smileys/admon-tip.png"); }
-#content div.note { background-image: url("smileys/admon-note.png"); }
-#content div.important { background-image: url("smileys/admon-important.png"); }
-#content div.caution { background-image: url("smileys/admon-caution.png"); }
-#content div.warning { background-image: url("smileys/admon-warning.png"); }
-/* admonition end */
-
-/* make table scale out to avoid ugly word-wrapping */
-/* bootstrap should deal with this, but ikiwiki doesn't assign the right style and anyways our width is smaller than necessary */
-table, table.table { width: inherit; }
-table { font-size: inherit; } /* why the heck does chrome override font-size for tables?! */

moar lwn updates
diff --git a/blog/changes-prometheus-2.0.mdwn b/blog/changes-prometheus-2.0.mdwn
index b5c23fc8..b40d5d9e 100644
--- a/blog/changes-prometheus-2.0.mdwn
+++ b/blog/changes-prometheus-2.0.mdwn
@@ -50,37 +50,39 @@ previous monitoring tool (Munin) could detect.
 
 Prometheus 2.0 also brings new features like snapshot backups. The
 project has a long-standing design wart over data volatility: backups
-are unnecessary in Prometheus because metrics data is considered disposable.
-According to Goutham Veeramanchaneni, one of the presenters at KubeCon,
-"this approach apparently doesn't work for the enterprise". Backups
-*were* possible in 1.x, but they involved using filesystem snapshots and
-stopping the server to get a consistent view of the on-disk storage.
-This implied downtime, which was unacceptable for certain production
-deployments. Thanks to the new storage engine again, Prometheus can now
-perform fast and consistent backups, triggered through the web API.
+are deemed to be unnecessary in Prometheus because metrics data is
+considered disposable. According to Goutham Veeramanchaneni, one of the
+presenters at KubeCon, "this approach apparently doesn't work for the
+enterprise". Backups *were* possible in 1.x, but they involved using
+filesystem snapshots and stopping the server to get a consistent view of
+the on-disk storage. This implied downtime, which was unacceptable for
+certain production deployments. Thanks to the new storage engine again,
+Prometheus can now perform fast and consistent backups, triggered
+through the web API.
 
 Another improvement is a fix to the long-standing [staleness handling
-bug](https://github.com/prometheus/prometheus/issues/398) where it
-would take up to five minutes for Prometheus to notice when a target
-disappears. In that case, when polling for new values, or "scraping"
-as it's called in Prometheus jargon, a failure used to make Prometheus
-reuse the older, stale value, which meant downtimes would stay
-undetected for too long and fail to trigger alerts properly. This
-would also cause problems with double-counting of some metrics when
-labels vary in the same measurement. Another limitation related to
-staleness is that Prometheus wouldn't work well with scrape intervals
-above two minutes (instead of the default 15 seconds). Unfortunately,
-that is still not fixed in Prometheus 2.0 as the problem is more
-complicated than originally thought, which means there's still a hard
-limit to how slowly you can fetch metrics from targets. This, in turn,
-means that Prometheus is not well suited for devices that cannot
-support sub-minute refresh rates, which, to be fair, is rather
-uncommon. For slower devices or statistics, a solutions include the
-node exporter "textfile support", which we mentioned in the previous
-article, and the
+bug](https://github.com/prometheus/prometheus/issues/398) where it would
+take up to five minutes for Prometheus to notice when a target
+disappeared. In that case, when polling for new values (or "scraping" as
+it's called in Prometheus jargon) a failure would make Prometheus reuse
+the older, stale value, which meant that downtime would go undetected
+for too long and fail to trigger alerts properly. This would also cause
+problems with double-counting of some metrics when labels vary in the
+same measurement.
+
+Another limitation related to staleness is that Prometheus wouldn't work
+well with scrape intervals above two minutes (instead of the default
+15 seconds). Unfortunately, that is still not fixed in Prometheus 2.0 as
+the problem is more complicated than originally thought, which means
+there's still a hard limit to how slowly you can fetch metrics from
+targets. This, in turn, means that Prometheus is not well suited for
+devices that cannot support sub-minute refresh rates, which, to be fair,
+is rather uncommon. For slower devices or statistics, a solutions
+include the node exporter "textfile support", which we mentioned in the
+previous article, and the
 [pushgateway](https://github.com/prometheus/pushgateway) daemon, which
-allows pushing results from the targets instead of having the
-collector pull samples from targets.
+allows pushing results from the targets instead of having the collector
+pull samples from targets.
 
 The migration path
 ------------------

define scrape and try to clarify staleness again
diff --git a/blog/changes-prometheus-2.0.mdwn b/blog/changes-prometheus-2.0.mdwn
index 0d868d65..b5c23fc8 100644
--- a/blog/changes-prometheus-2.0.mdwn
+++ b/blog/changes-prometheus-2.0.mdwn
@@ -60,25 +60,27 @@ deployments. Thanks to the new storage engine again, Prometheus can now
 perform fast and consistent backups, triggered through the web API.
 
 Another improvement is a fix to the long-standing [staleness handling
-bug](https://github.com/prometheus/prometheus/issues/398) where it would
-take up to five minutes for Prometheus to notice changes in metrics when
-a target disappears. Because it wouldn't detect stale samples
-correctly, a failed scrape would stay undetected for too long and fail
-to trigger alerts properly.
-This would also cause problems with double-counting of some metrics when
+bug](https://github.com/prometheus/prometheus/issues/398) where it
+would take up to five minutes for Prometheus to notice when a target
+disappears. In that case, when polling for new values, or "scraping"
+as it's called in Prometheus jargon, a failure used to make Prometheus
+reuse the older, stale value, which meant downtimes would stay
+undetected for too long and fail to trigger alerts properly. This
+would also cause problems with double-counting of some metrics when
 labels vary in the same measurement. Another limitation related to
 staleness is that Prometheus wouldn't work well with scrape intervals
 above two minutes (instead of the default 15 seconds). Unfortunately,
 that is still not fixed in Prometheus 2.0 as the problem is more
 complicated than originally thought, which means there's still a hard
 limit to how slowly you can fetch metrics from targets. This, in turn,
-means that Prometheus is not well suited for devices that cannot support
-sub-minute refresh rates, which, to be fair, is rather uncommon. For
-slower devices or statistics, a solutions include the node exporter
-"textfile support", which we mentioned in the previous article, and the
+means that Prometheus is not well suited for devices that cannot
+support sub-minute refresh rates, which, to be fair, is rather
+uncommon. For slower devices or statistics, a solutions include the
+node exporter "textfile support", which we mentioned in the previous
+article, and the
 [pushgateway](https://github.com/prometheus/pushgateway) daemon, which
-allows pushing results from the targets instead of having the collector
-pull samples from targets.
+allows pushing results from the targets instead of having the
+collector pull samples from targets.
 
 The migration path
 ------------------

article in progress
diff --git a/blog/changes-prometheus-2.0.mdwn b/blog/changes-prometheus-2.0.mdwn
new file mode 100644
index 00000000..0d868d65
--- /dev/null
+++ b/blog/changes-prometheus-2.0.mdwn
@@ -0,0 +1,210 @@
+[[!meta title="Changes in Prometheus 2.0"]]
+\[LWN subscriber-only content\]
+-------------------------------
+
+January 18, 2018
+
+This article was contributed by Antoine Beaupré
+
+------------------------------------------------------------------------
+
+[KubeCon+CloudNativeCon
+NA](https://lwn.net/Archives/ConferenceByYear/#2017-KubeCon__CloudNativeCon_NA)
+2017 was a big year for the [Prometheus](https://prometheus.io/)
+project, as it [published its 2.0 release in
+November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/).
+The new release ships long-standing bugfixes, new features and notably a
+new storage engine bringing major performance improvements. This comes
+at the cost of incompatible changes to the storage and
+configuration-file formats. An overview of Prometheus and its new
+release was presented to the [Kubernetes](https://kubernetes.io/)
+community in a
+[talk](https://kccncna17.sched.com/event/Cs4d/prometheus-salon-hosted-by-frederic-branczyk-coreos-bob-cotton-freshtracksio-goutham-veeramanchaneni-tom-wilkie-kausal)
+held during [KubeCon +
+CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america).
+This article covers what changed in this new release and what is brewing
+next in the Prometheus community; it is a companion to [this
+article](https://lwn.net/Articles/744410/), which provided a general
+introduction to monitoring with Prometheus.
+
+What changed
+------------
+
+Orchestration systems like Kubernetes regularly replace entire fleets of
+containers for deployments, which means rapid changes in parameters (or
+"labels" in Prometheus-talk) like hostnames or IP addresses. This was
+creating significant performance problems in Prometheus 1.0, which
+wasn't designed for such changes. To correct this, Prometheus ships a
+new [storage engine](https://github.com/prometheus/tsdb) that was
+[specifically designed](https://fabxc.org/tsdb/) to handle continuously
+changing labels. This was tested by monitoring a Kubernetes cluster
+where 50% of the pods would be swapped every 10 minutes; the new design
+was proven to be much more effective. The new engine
+[boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization)
+a hundred-fold I/O performance improvements, a three-fold improvement in
+CPU, a five-fold in memory usage and increased space efficiency. This
+impacts container deployments, but it also means improvements for any
+configuration as well. Anecdotally, there was no noticeable extra load
+on the servers where I deployed Prometheus, at least nothing that the
+previous monitoring tool (Munin) could detect.
+
+Prometheus 2.0 also brings new features like snapshot backups. The
+project has a long-standing design wart over data volatility: backups
+are unnecessary in Prometheus because metrics data is considered disposable.
+According to Goutham Veeramanchaneni, one of the presenters at KubeCon,
+"this approach apparently doesn't work for the enterprise". Backups
+*were* possible in 1.x, but they involved using filesystem snapshots and
+stopping the server to get a consistent view of the on-disk storage.
+This implied downtime, which was unacceptable for certain production
+deployments. Thanks to the new storage engine again, Prometheus can now
+perform fast and consistent backups, triggered through the web API.
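+
+As a quick sketch of how that looks in practice (assuming the
+administrative API has been enabled and the exact endpoint hasn't
+moved since the 2.0 release), a snapshot is just an HTTP call away:
+
+    # start the server with the admin API enabled
+    $ prometheus --config.file=prometheus.yml --web.enable-admin-api
+    # request a snapshot; the response names a directory under data/snapshots/
+    $ curl -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot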
+
+Another improvement is a fix to the long-standing [staleness handling
+bug](https://github.com/prometheus/prometheus/issues/398) where it would
+take up to five minutes for Prometheus to notice changes in metrics when
+a target disappears. Because it wouldn't detect stale samples
+correctly, a failed scrape would stay undetected for too long and fail
+to trigger alerts properly.
+This would also cause problems with double-counting of some metrics when
+labels vary in the same measurement. Another limitation related to
+staleness is that Prometheus wouldn't work well with scrape intervals
+above two minutes (instead of the default 15 seconds). Unfortunately,
+that is still not fixed in Prometheus 2.0 as the problem is more
+complicated than originally thought, which means there's still a hard
+limit to how slowly you can fetch metrics from targets. This, in turn,
+means that Prometheus is not well suited for devices that cannot support
+sub-minute refresh rates, which, to be fair, is rather uncommon. For
+slower devices or statistics, solutions include the node exporter
+"textfile support", which we mentioned in the previous article, and the
+[pushgateway](https://github.com/prometheus/pushgateway) daemon, which
+allows pushing results from the targets instead of having the collector
+pull samples from targets.
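+
+As a rough illustration, pushing a batch job's result to the
+pushgateway boils down to a single HTTP request; the hostname and
+metric name below are made up for the example:
+
+    $ echo "backup_last_success_timestamp_seconds $(date +%s)" \
+        | curl --data-binary @- http://pushgateway.example.com:9091/metrics/job/backup
+
+The node exporter's textfile collector works in a similar spirit,
+except that the metrics are written to a `*.prom` file in a directory
+the exporter watches, instead of being sent over the network.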
+
+The migration path
+------------------
+
+One downside of this new release is that the upgrade path from the
+previous version is bumpy: since the storage format changed,
+Prometheus 2.0 cannot use the previous 1.x data files directly. In his
+presentation, Veeramanchaneni justified this change by saying this was
+consistent with the project's [API stability
+promises](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print):
+the major release was the time to "break everything we wanted to break".
+For those who can't afford to discard historical data, a [possible
+workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-in-prometheus-2-0/)
+is to replicate the older 1.8 server to a new 2.0 replica, as the
+[network
+protocols](https://prometheus.io/docs/prometheus/latest/federation/) are
+still compatible. The older server can then be decommissioned when the
+retention window (which defaults to fifteen days) closes. While there is
+some work in progress to provide a way to convert 1.8 data storage
+to 2.0, new deployments should probably use the 2.0 release directly to
+avoid this peculiar migration pain.
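+
+Concretely, the replication amounts to adding a scrape job on the 2.0
+server that pulls everything from the old server's `/federate`
+endpoint; the following is only a sketch, with a catch-all `match[]`
+selector and a placeholder hostname to adapt:
+
+    scrape_configs:
+      - job_name: 'federate-old'
+        honor_labels: true
+        metrics_path: '/federate'
+        params:
+          'match[]':
+            - '{__name__=~".+"}'
+        static_configs:
+          - targets: ['old-prometheus.example.com:9090']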
+
+Another key point in the [migration
+guide](https://prometheus.io/docs/prometheus/2.0/migration/) is a change
+in the rules-file format. While 1.x used a custom file format, 2.0 uses
+YAML, matching the other Prometheus configuration files. Thankfully the
+[promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool)
+command handles this migration automatically. The [new
+format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/)
+also introduces [rule
+groups](https://github.com/prometheus/prometheus/issues/1095), which
+improve control over the rules execution order. In 1.x, alerting rules
+were run sequentially but, in 2.0, the *groups* are executed
+sequentially and each group can have its own interval. This fixes the
+long-standing [race conditions between dependent
+rules](https://github.com/prometheus/prometheus/issues/1095) that create
+inconsistent results when rules would reuse the same queries. The
+problem should be fixed between groups, but rule authors still need to
+be careful of that limitation *within* a rule group.
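+
+For the curious, a converted file looks roughly like this in the new
+YAML format (group and alert names below are invented):
+
+    groups:
+      - name: node_alerts
+        interval: 30s
+        rules:
+          - alert: InstanceDown
+            expr: up == 0
+            for: 5m
+
+The conversion itself should boil down to something like `promtool
+update rules old.rules`, which writes the equivalent `.yml` file next
+to the original.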
+
+Remaining limitations and future
+--------------------------------
+
+As we have seen in the introductory article, Prometheus may not be
+suitable for all workflows because of the limited default dashboards and
+alerts, but also because of the lack of data-retention policies. There are,
+however, discussions about [variable per-series
+retention](https://github.com/prometheus/prometheus/issues/1381) in
+Prometheus and [native down-sampling support in the storage
+engine](https://github.com/prometheus/tsdb/issues/56), although this is
+a feature some developers are not really comfortable with. When asked on
+IRC, Brian Brazil, one of the lead Prometheus developers,
+[stated](https://riot.im/app/#/room/#prometheus:matrix.org/$15158612532461742oHkAM:matrix.org)
+that "*downsampling is a very hard problem, I don't believe it should be
+handled in Prometheus*".
+
+Besides, it is already possible to selectively [delete an old
+series](https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md#delete-series)
+using the new 2.0 API. But Veeramanchaneni
+[warned](https://twitter.com/putadent/status/952420417276276736) that
+this approach "*puts extra pressure on Prometheus and unless you know
+what you are doing, its likely that you'll end up shooting yourself in
+the foot*". A more common approach to native archival facilities is to
+use [recording rules](https://prometheus.io/docs/practices/rules/) to
+aggregate samples and collect the results in a second server with a
+slower sampling rate and different retention policy. And of course, the
+new release features [external storage
+engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage)
+which can better support archival features. Those solutions are
+obviously not suitable for smaller deployments, which therefore need to
+make hard choices about discarding older samples or getting more disk
+space.
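+
+As a sketch of that recording-rule approach, the primary server could
+precompute a coarse aggregate (the rule and metric names below are
+invented) that a second, long-retention server then scrapes through
+federation:
+
+    groups:
+      - name: archive
+        interval: 5m
+        rules:
+          - record: instance:node_cpu:rate5m
+            expr: rate(node_cpu{mode!="idle"}[5m])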
+
+As part of the staleness improvements, Brazil also started working on
+"isolation" (the "I" in the [ACID
+acronym](https://en.wikipedia.org/wiki/ACID)) so that queries wouldn't
+see "partial scrapes". This hasn't made the cut for the 2.0 release, and
+is still [work in
+progress](https://github.com/prometheus/tsdb/pull/105), with some
+performance impacts (about 5% CPU and 10% RAM). This work would also be
+useful when heavy contention occurs in certain scenarios where
+Prometheus gets stuck on locking. Some of the performance impact could
+therefore be offset under heavy load.
+
+Another performance improvement mentioned during the talk is an eventual
+query-engine rewrite. The current query engine can sometimes lead to
+excessive load for certain expensive queries, according to the Prometheus
+[security
+guide](https://prometheus.io/docs/operating/security/#denial-of-service).
+The goal would be to optimize the current engine so that those expensive
+queries wouldn't harm performance.
+
+Finally, another issue I discovered is that 32-bit support is limited in
+Prometheus 2.0. The Debian package maintainers found that the [test
+suite fails on
+i386](https://github.com/prometheus/prometheus/issues/3665), which led
+Debian to [remove the package from the i386
+architecture](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886702).
+It is currently unclear if this is a bug in Prometheus: indeed, it is
+strange that Debian tests actually pass in other 32-bit architectures
+like armel. Brazil, in the bug report, argued that "Prometheus isn't
+going to be very useful on a 32bit machine". The position of the project
+is currently that "if it runs, it runs but no guarantees or effort
+beyond that from our side".
+
+I had the privilege to meet the [Prometheus
+team](https://prometheus.io/community/) in Austin and was happy to see
+different consultants and organizations working together on the project.
+It reminded me of my [golden days in the
+Drupal](https://www.drupal.org/user/1274/) community: different
+companies cooperating on the same project in a harmonious environment.
+If Prometheus can keep that spirit together, it will be a welcome change
+from the

(Diff truncated)
remove notes
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
deleted file mode 100644
index dc21b980..00000000
--- a/blog/prometheus.mdwn
+++ /dev/null
@@ -1,635 +0,0 @@
-Monitoring with Prometheus 2.0
-==============================
-
-Prometheus is a monitoring tool built from scratch by SoundCloud
-in 2012. It works by pulling metrics from monitored services and
-storing them in a time series database (TSDB). It has a powerful query
-language to inspect that database, create alerts, and plot basic
-graphs. Those graphs can then be used to detect anomalies or trends
-for (possibly automated) resource provisioning. Prometheus also has
-extensive service discovery features and supports high availability
-configurations.
-
-That's what the brochure says anyway; let's see how it works in the hands
-of an old grumpy system administrator. I'll be drawing comparisons
-with Munin and Nagios frequently because those are the tools I have
-used for over a decade in monitoring Unix clusters. Readers already
-familiar with Prometheus can skip ahead to the last two sections to
-see what's new in Prometheus 2.0.
-
-Monitoring with Prometheus and Grafana
---------------------------------------
-
-{should i show how to install prometheus? fairly trivial - go get or
-apt-get install works, basically}
-
-What distinguishes Prometheus from other solutions is the relative
-simplicity of its design: for one, metrics are exposed over HTTP using
-a special URL (`/metrics`) and a simple text format. Here is, as an
-example, some disk metrics for a test machine:
-
-    $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
-    # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
-    # TYPE node_disk_io_time_ms counter
-    node_disk_io_time_ms{device="dm-0"} 466884
-    node_disk_io_time_ms{device="dm-1"} 133776
-    node_disk_io_time_ms{device="dm-2"} 80
-    node_disk_io_time_ms{device="dm-3"} 349936
-    node_disk_io_time_ms{device="sda"} 446028
-
-{should i use the more traditional bandwidth example instead? kubecon
-talk used the "cars driving by" analogy which i didn't find so
-useful. i deliberately picked a tricky example that's not commonly
-available, but it might be too confusing}
-
-In the above, the metric is named `node_disk_io_time_ms`, has a single
-label/value pair (`device=sda`) attached to it, along with the value
-of the metric itself (`446028`). This is only one of hundreds of
-metrics (usage of CPU, memory, disk, temperature and so on) exposed by
-the "node exporter", a basic stats collector running on monitored
-hosts. Metrics can be counters (e.g. per-interface packet counts),
-gauges (e.g. temperature or fan sensors), or [histograms](https://prometheus.io/docs/practices/histograms/). The
-latter allow for example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, which is
-something that has been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is
-essential to billing networking customers. Another popular use for
-histograms is maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N
-requests are answered in X time. The various metrics types are
-carefully analyzed before being stored to correctly handle conditions
-like overflows (which occur surprisingly often on gigabit network
-interfaces) or resets (when a device restarts).
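-
-As an aside, deriving a 95th percentile from such a histogram is a
-one-line query; the metric name below is only an example of the
-`_bucket` series exposed by the client libraries:
-
-    histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))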
-
-Those metrics are fetched from "targets", which are simply HTTP
-endpoints, added to the Prometheus configuration file. Targets can
-also be automatically added through various [discovery mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/)
-like DNS, which allows having a single `A` or `SRV` record that lists
-all the hosts to monitor; or Kubernetes or cloud-provider APIs that list
-all containers or virtual machines to monitor. Discovery works in real
-time, so it will correctly pick up changes in DNS, for example. It
-can also add metadata (e.g. IP address found or server state), which
-is useful for dynamic environments such as Kubernetes or container
-orchestration in general.
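-
-A minimal DNS-based discovery block might look like this, assuming an
-`SRV` record (the record name is a placeholder):
-
-    scrape_configs:
-      - job_name: 'node'
-        dns_sd_configs:
-          - names: ['_prometheus._tcp.example.com']
-            type: 'SRV'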
-
-Once collected, metrics can be queried through the web interface,
-using a custom language called PromQL. For example, a query showing
-the average latency, per minute, for `/dev/sda` would look like:
-
-    rate(node_disk_io_time_ms{device="sda"}[1m])
-
-Notice the "device" label, which we use to restrict the search to a
-single disk. This query can also be plotted into a simple graph on the
-web interface:
-
-![my first prometheus graph](https://paste.anarc.at/snaps/snap-2018.01.12-16.29.20.png)
-
-What is interesting here is not really the node exporter metrics
-themselves, as those are fairly standard in any monitoring solution.
-But in Prometheus, any (web) application can easily expose their own
-internal metrics to the monitoring server through regular HTTP,
-whereas other systems would require special plugins, both on the
-monitoring server but also the application side. Note that Munin also
-follows a similar pattern, but speaks its own text protocol on top of
-TCP, which means it is harder to implement for web apps and diagnose
-with a web browser.
-
-However, coming from the world of Munin, where all sorts of graphics
-just magically appear out of the box, this first experience can be a
-bit of a disappointment: everything is built by hand and
-ephemeral. While there are ways to add custom graphs to the Prometheus
-web interface using Go-based [console templates](https://prometheus.io/docs/visualization/consoles/), most Prometheus
-deployments generally [use Grafana](https://prometheus.io/docs/visualization/grafana/) to render the results using
-custom-built dashboards. This gives much better results, and allows
-graphing multiple machines separately, using the [Node Exporter Server
-Metrics](https://grafana.com/dashboards/405) dashboard:
-
-![A Grafana dashboard showing metrics from 2 servers](https://paste.anarc.at/snaps/snap-2018.01.12-16.30.40.png)
-
-All this work took roughly an hour of configuration, which is pretty
-good for a first try. Things get tougher when extending those basic
-metrics: because of the system's modularity, it is difficult to add
-new metrics to existing dashboards. For example, web or mail servers
-are not monitored by the node exporter. So monitoring a web server
-involves installing an [Apache-specific exporter](https://github.com/Lusitaniae/apache_exporter) which needs to be
-added to the Prometheus configuration. But it won't show up
-automatically in the above dashboard, because that's a "node exporter"
-dashboard, not an *Apache* dashboard. So you need a [separate
-dashboard](https://grafana.com/dashboards/3894) for that. This is all work that's done automatically in
-Munin without any hand-holding.
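-
-The extra configuration is admittedly small: a new job pointing at the
-exporter's listening port (9117 below, if memory serves) is enough to
-get the metrics flowing:
-
-    scrape_configs:
-      - job_name: 'apache'
-        static_configs:
-          - targets: ['webserver.example.com:9117']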
-
-Even then, Apache is an easy one: monitoring a Postfix server, for
-example, currently requires installing a program like [mtail](https://github.com/google/mtail/) that
-parses the Postfix logfiles to expose some metrics to Prometheus. Yet
-this will not tell you critical metrics like queue sizes which can
-alert administrators of backlog conditions. There doesn't seem to be a
-way to write quick "run this command to count files" plugins that
-would allow administrators to write such quick hacks as watching the
-queue sizes in Postfix, without writing a [new exporter](https://prometheus.io/docs/instrumenting/writing_exporters/) using
-[client libraries](https://prometheus.io/docs/instrumenting/clientlibs/) which seems to be a rather large undertaking for
-non-programmers. There are, however, a large number of [exporters
-already available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md), including ones that can tap into existing
-[Nagios](https://github.com/Griesbacher/Iapetos_) and [Munin](https://github.com/pvdh/munin_exporter) servers to allow for a smooth transition.
-
-However, at the time of writing, I couldn't find a dashboard that would
-show and analyse those logs at all. To graph metrics from the [postfix
-mtail plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail), a graph needs to be created by hand in Grafana, with a
-magic PromQL formula. This may involve too much clicking around in a web
-browser for grumpy old administrators. There are tools like
-[Grafanalib](https://github.com/weaveworks/grafanalib) to programmatically create dashboards, but those also
-involve a lot of boilerplate. When building a custom application,
-however, creating graphs may actually be a fun and distracting task
-that some may enjoy. The Grafana/Prometheus design is certainly
-enticing and enables powerful abstractions that are not easily done
-with other monitoring systems.
-
-Alerting and high availability
-------------------------------
-
-So far, we've worked only with a single server, and did only
-graphing. But Prometheus also supports sending alarms when things go
-bad. After working over a decade as a system administrator, I have
-mixed feelings about "paging" or "alerting" as it's called in
-Prometheus. Regardless of how well the system is tweaked, I have come
-to believe it is basically impossible to design a system that will
-respect workers and not torture on-call personnel through
-sleep-deprivation. It seems it's a feature people want regardless,
-especially in the enterprise, so let's look at how it works here.
-
-In Prometheus, you design [alert rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) using PromQL. For example,
-to make sure we complete all disk I/O within a certain timeframe (say
-200ms), we could set the following rule:
-
-    rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
-
-{this is a silly check - should i use something more obvious like disk
-space or keep consistency?}
-
-Those rules are regularly checked and matching rules are fired to an
-[alertmanager](https://github.com/prometheus/alertmanager) daemon which can receive alerts from multiple
-Prometheus servers. The alertmanager then deduplicates multiple
-alerts, regroups them (so a single notification is sent even if
-multiple alerts are received), and sends the actual notifications
-through [various services](https://prometheus.io/docs/alerting/configuration/) like email, PagerDuty, Slack or an
-arbitrary [webhook](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
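-
-A bare-bones Alertmanager configuration with a single email receiver
-can be as short as this sketch (addresses are placeholders, and the
-global SMTP settings are omitted):
-
-    route:
-      receiver: 'team-email'
-      group_by: ['alertname']
-    receivers:
-      - name: 'team-email'
-        email_configs:
-          - to: 'oncall@example.com'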
-
-The Alertmanager has a "gossip protocol" to enable multiple instances
-to coordinate notifications. This design allows you to run multiple
-Prometheus servers in a [federation](https://prometheus.io/docs/prometheus/latest/federation/) model, all simultaneously
-collecting metrics, and sending alerts to redundant Alertmanager
-instances to create a highly available monitoring system. Those who
-have struggled with such setups in Nagios will surely appreciate the
-simplicity of this design.
-
-The downside is that Prometheus doesn't ship a set of default alerts
-and exporters do not define default alerting thresholds which could be
-used to create rules automatically. The Prometheus
-documentation [also lacks examples](https://github.com/prometheus/docs/issues/581) that the community could use,
-so alerting is harder to deploy than in classic monitoring
-systems.
-
-Changes in Prometheus 2.0
--------------------------
-
-2017 was a big year for the Prometheus project, as it [published its
-2.0 release in November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/), which ships major performance
-improvements, particularly in container deployments. The 1.x series'
-performance was strained with short-lived containers: because some

(Diff truncated)
sent to lwn
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index 8b366f24..dd47e045 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -156,9 +156,12 @@ path for the future of monitoring in the free software world.
 Notes
 -----
 
- * To be mentioned as a comment after publication? [This
-   presentation](https://www.youtube.com/watch?v=GcTzd2CLH7I) explains in details the work on stale samples and
-   isolation, presented at Promcon 2017.
-
- * not sure about the titles, including main title - confusing with
-   the first article? maybe remove "2.0" from the previous article?
+ * titles. not sure about the main title and subtitles. the main, in
+   particular, seems a bit redundant with the previous article. maybe we
+   could remove "2.0" from the previous article to alleviate that
+   ambiguity?
+
+ * there are a lot more details about the staleness and isolation stuff
+   in a [talk](https://www.youtube.com/watch?v=GcTzd2CLH7I) Brazil made
+   at Promcon 2017. not sure where to fit that in, i was thinking of
+   just adding that as a comment after the article was posted.

yet another rewrite of 2.0
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index e4227f6d..8b366f24 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -5,82 +5,92 @@ Changes in Prometheus 2.0
 its 2.0 release in November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/). The new release ships long-standing
 bugfixes, new features and notably a new storage engine bringing major
 performance improvements. This comes at the cost of incompatible
-changes to the storage format and some changes in the configuration
-but hopefully the migration will be easy for most deployments. An
+changes to the storage format and configuration file formats. An
 overview of Prometheus and its new release was presented to the
 [Kubernetes](https://kubernetes.io/) community in a [talk](https://kccncna17.sched.com/event/Cs4d/prometheus-salon-hosted-by-frederic-branczyk-coreos-bob-cotton-freshtracksio-goutham-veeramanchaneni-tom-wilkie-kausal) held during [KubeCon +
-CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america). This article aims to cover what changed in this
-new release and what is brewing next in the growing Prometheus
-community.
+CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america). This article covers what changed in this new
+release and what is brewing next in the Prometheus community.
 
 What changed
 ------------
 
-Kubernetes created performance problems in the 1.x Prometheus storage
-engine, because the orchestration system can easily trigger massive
-container churn. This leads rapid changes in parameters like hostnames
-or IP addresses, called "labels" in Prometheus. A newly designed
-[storage engine](https://github.com/prometheus/tsdb) resolves this by explicitly dealing with changing
-labels. This was tested by monitoring a Kubernetes cluster where 50%
-of the pods would be swapped every 10 minutes, where the new design
-was proven to be much more effective. The new engine [boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization) a
-hundred-fold I/O performance improvements, a three-fold improvement in
-CPU and five-fold in memory usage. This impacts container deployments,
-but it also means improvements for any configuration as well
-Anecdotally, there was no noticeable extra load on the servers where I
-deployed Prometheus, at least nothing that the previous monitoring
-tool (Munin) could detect.
+Orchestration systems like Kubernetes regularly replace entire fleets
+of containers for deployments, which means rapid changes in parameters
+(or "labels" in Prometheus-talk) like hostnames or IP addresses. This
+was creating significant performance problems in Prometheus 1.0, which
+wasn't designed for such changes. To correct this, Prometheus ships a
+new [storage engine](https://github.com/prometheus/tsdb) that was [specifically designed](https://fabxc.org/tsdb/) to handle
+continuously changing labels. This was tested by monitoring a
+Kubernetes cluster where 50% of the pods would be swapped every 10
+minutes; the new design was proven to be much more
+effective. The new engine [boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization) a hundred-fold I/O performance
+improvements, a three-fold improvement in CPU, a five-fold in memory
+usage and increased space efficiency. This impacts container
+deployments, but it also means improvements for any configuration as
+well. Anecdotally, there was no noticeable extra load on the servers
+where I deployed Prometheus, at least nothing that the previous
+monitoring tool (Munin) could detect.
 
 Prometheus 2.0 also brings new features like snapshot backups. The
 project has a long-standing design wart over data volatility: the word
-in the community is "don't do backups" because data is
+in the community is "don't do backups" because metrics are
 disposable. According to Goutham Veeramanchaneni, one of the
-presenters at KubeCon, "this approach apparently doesn't work for
-entreprise". Backups *were* previously possible, but they involved
-stopping the server and doing filesystem snapshots. This was causing a
-short downtime that was unacceptable for certain production
-deployments. So Prometheus now implements a way to do fast, consistent
-backups through the web API.
-
-Another improvement is a fix to long-standing [staleness handling
-bug](https://github.com/prometheus/prometheus/issues/398) where it could take up to 5 minutes for Prometheus to notice
-changes in certain metrics, because it wouldn't detect stale samples
-correctly. This would also cause problems with longer scrape intervals
-(above 2 minutes, instead of the default 15 seconds) and
-double-counting of some metrics when labels vary for the same
-metric. Unfortunately, the latter is still not fixed in Prometheus 2.0
-as that was more complicated than originally thought, which means
-there's still a hard limit to how slow you can fetch metrics from
-targets. This, in turn, means that Prometheus may not be well suited
-for more fragile devices that may not be able to support sub-minute
-refresh rates.
-
-One downside of the new release is that the upgrade path from the
+presenters at KubeCon, "this approach apparently doesn't work for the
+enterprise". Backups *were* possible in 1.x, but they involved using
+filesystem snapshots and stopping the server to get a consistent view
+of the on-disk storage. This implied downtime, which was
+unacceptable for certain production deployments. Thanks to the new
+storage engine again, Prometheus can now perform fast and consistent
+backups, triggered through the web API.
+
+Another improvement is a fix to the long-standing [staleness handling
+bug](https://github.com/prometheus/prometheus/issues/398) where it would take up to 5 minutes for Prometheus to notice
+changes in metrics when a target disappears, because it wouldn't
+detect stale samples correctly. This would also cause problems with
+double-counting on some metrics when labels vary in the same
+measurement. Another limitation related to staleness is that
+Prometheus wouldn't work well with scrape intervals above 2 minutes
+(instead of the default 15 seconds). Unfortunately, that is still not
+fixed in Prometheus 2.0 as the problem is more complicated than
+originally thought, which means there's still a hard limit to how slowly
+you can fetch metrics from targets. This, in turn, means that
+Prometheus is not well suited for devices that cannot support
+sub-minute refresh rates, which, to be fair, is rather uncommon. For
+slower devices or statistics, solutions include the node exporter
+"textfile support", which we mentioned in the [previous article](/Articles/744410/),
+and the [pushgateway](https://github.com/prometheus/pushgateway) daemon, which allows pushing results from the
+targets instead of having the collector pull samples from targets.
+
+The migration path
+------------------
+
+One downside of this new release is that the upgrade path from the
 previous version is bumpy: since the storage format changed,
-Prometheus 2.0 cannot use the previous 1.x data files directly. In the
-talk, Veeramanchaneni justified this by saying this was consistent
-with the project's [API stability promises](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print): the major release was
-the time to "break everything we wanted to break". For those who can't
-afford to discard historical data, a [possible workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-in-prometheus-2-0/) is to
-replicate the older 1.8 server to a new 2.0 replica - the network
-protocols are still compatible - and discard the older server once the
-retention window (which defaults to fifteen days) closes. While there
-is some work in progress to try to provide a way to convert 1.8 data
-storage to 2.0, new deployments should probably use the 2.0 release
-directly to avoid the migration pain.
-
-Another key point of [migration guide](https://prometheus.io/docs/prometheus/2.0/migration/) is a change in the rules
-file format. While the 1.0 uses a custom format, 2.0 now uses YAML,
-like the other Prometheus configuration files. Thankfully the
-[promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool) command handles this migration automatically. The [new
-format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/) also introduces [rule groups](https://github.com/prometheus/prometheus/issues/1095), which improve control
-over the rules execution order. In 1.8, alerting rules were ran
-sequentially, but in 2.0 the *groups* get executed sequentially and
-each group can have its own interval. This fixes the long-standing
-[race conditions between dependent rules](https://github.com/prometheus/prometheus/issues/1095) that create
-inconsistencies when rules would reuse results. The problem should be
-fixed between groups, but rule authors still need to be careful of
-this issue *within* rule groups.
+Prometheus 2.0 cannot use the previous 1.x data files directly. In his
+presentation, Veeramanchaneni justified this by saying this was
+consistent with the project's [API stability promises](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print): the major
+release was the time to "break everything we wanted to break". For
+those who can't afford to discard historical data, a [possible
+workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-in-prometheus-2-0/) is to replicate the older 1.8 server to a new 2.0
+replica, as the [network protocols](https://prometheus.io/docs/prometheus/latest/federation/) are still compatible. The older
+server can then be decommissioned when the retention window (which
+defaults to fifteen days) closes. While there is some work in progress
+to provide a way to convert 1.8 data storage to 2.0, new deployments
+should probably use the 2.0 release directly to avoid this peculiar
+migration pain.
+
+Another key point of the [migration guide](https://prometheus.io/docs/prometheus/2.0/migration/) is a change in the rules
+file format. While 1.x used a custom file format, 2.0 uses YAML like
+the other Prometheus configuration files. Thankfully the [promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool)
+command handles this migration automatically. The [new format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/) also
+introduces [rule groups](https://github.com/prometheus/prometheus/issues/1095), which improve control over the rules
+execution order. In 1.x, alerting rules were run sequentially, but in
+2.0 the *groups* get executed sequentially and each group can have its
+own interval. This fixes the long-standing [race conditions between
+dependent rules](https://github.com/prometheus/prometheus/issues/1095) that create inconsistent results when rules would
+reuse the same queries. The problem should be fixed between groups,
+but rule authors still need to be careful of that limitation *within*
+a rule group.
 
 Remaining limitations and future
 --------------------------------
@@ -90,24 +100,23 @@ be adapted for all workflows because of the limited default dashboards
 and alerts, but also because the lack of data retention
 policies. There are, however, discussions about [variable per-series
 retention](https://github.com/prometheus/prometheus/issues/1381) in Prometheus and [native down-sampling support in the
-storage engine](https://github.com/prometheus/tsdb/issues/56)), although this is a feature Prometheus developers
-are not really comfortable with. When asked on IRC, Brian Brazil, one
-of the lead Prometheus developers, [stated](https://riot.im/app/#/room/#prometheus:matrix.org/$15158612532461742oHkAM:matrix.org) that "downsampling is a
-very hard problem, I don't believe it should be handled in
-Prometheus".
+storage engine](https://github.com/prometheus/tsdb/issues/56), although this is a feature some developers are not
+really comfortable with. When asked on IRC, Brian Brazil, one of the
+lead Prometheus developers, [stated](https://riot.im/app/#/room/#prometheus:matrix.org/$15158612532461742oHkAM:matrix.org) that "downsampling is a very
+hard problem, I don't believe it should be handled in Prometheus".
 
 Besides, it is already possible to selectively [delete old series](https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md#delete-series)
-using the new 2.0 API. But Veeramanchaneni warned that this approach
-"puts extra pressure on Prometheus and unless you know what you are
-doing, its likely that you'll end up shooting yourself in the foot." A
-more common approach to native archival facilities is to use
+using the new 2.0 API. But Veeramanchaneni [warned](https://twitter.com/putadent/status/952420417276276736) that this
+approach "puts extra pressure on Prometheus and unless you know what
+you are doing, its likely that you'll end up shooting yourself in the
+foot." A more common approach to native archival facilities is to use
 [recording rules](https://prometheus.io/docs/practices/rules/) to aggregate samples and collect the results in a
 second server with a slower sampling rate and different retention
 policy. And of course, the new release features [external storage
 engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) which can better support archival features. Those
 solutions are obviously not suitable for smaller deployments, which
-need to make hard choices about discarding older samples or getting
-more disk space.
+therefore need to make hard choices about discarding older samples or
+getting more disk space.
 
 As part of the staleness improvements, Brazil also started working on
 "isolation" (the "I" in the [ACID acronym](https://en.wikipedia.org/wiki/ACID)) so that queries
@@ -115,8 +124,8 @@ wouldn't see "partial scrapes". This hasn't made the cut for the 2.0
 release, and is still [work in progress](https://github.com/prometheus/tsdb/pull/105), with some performance
 impacts (about 5% CPU and 10% RAM). This work would also be useful

(Diff truncated)
article online
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index 79511c4d..5dfd926f 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -2,8 +2,8 @@
 \[LWN subscriber-only content\]
 -------------------------------
 
-[[!meta date="2018-01-15T00:00:00+0000"]]
-[[!meta updated="2018-01-17T13:04:00-0500"]]
+[[!meta date="2018-01-17T00:00:00+0000"]]
+[[!meta updated="2018-01-17T15:32:34-0500"]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from

changes from lwn
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index ab4cf6df..79511c4d 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-01-15T00:00:00+0000"]]
-[[!meta updated="2018-01-16T13:20:32-0500"]]
+[[!meta updated="2018-01-17T13:04:00-0500"]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from
@@ -35,7 +35,7 @@ example, some network metrics for a test machine:
         node_network_transmit_bytes{device="eth0"} 4.03286677e+08
 
 In the above example, the metrics are named `node_network_receive_bytes`
-and `node_network_transmit_bytes`. They a single label/value
+and `node_network_transmit_bytes`. They have a single label/value
 pair (`device=eth0`) attached to them, along with the value of the
 metrics themselves. These are only a couple of the hundreds of metrics (usage
 of CPU, memory, disk, temperature, and so on) exposed by the "node
@@ -51,7 +51,7 @@ billing networking customers. Another popular use for histograms is
 maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to
 make sure that N requests are answered in X time. The various metrics
 types are carefully analyzed before being stored to correctly handle
-conditions like overflow (which occurs surprisingly often on gigabit
+conditions like overflows (which occur surprisingly often on gigabit
 network interfaces) or resets (when a device restarts).
 
 Those metrics are fetched from "targets", which are simply HTTP
@@ -61,10 +61,10 @@ mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/),
 like DNS, that allow having a single `A` or `SRV` record that lists all
 the hosts to monitor; or Kubernetes or cloud-provider APIs that list all
 containers or virtual machines to monitor. Discovery works in real time,
-so it will correctly pickup changes in DNS, for example. It can also add
-metadata (e.g. IP address found or server state), which is useful for
-dynamic environments such as Kubernetes or containers orchestration in
-general.
+so it will correctly pick up changes in DNS, for example. It can also
+add metadata (e.g. IP address found or server state), which is useful
+for dynamic environments such as Kubernetes or containers orchestration
+in general.
 
 Once collected, metrics can be queried through the web interface, using
 a custom language called PromQL. For example, a query showing the
@@ -98,7 +98,7 @@ templates](https://prometheus.io/docs/visualization/consoles/), most
 Prometheus deployments generally [use
 Grafana](https://prometheus.io/docs/visualization/grafana/) to render
 the results using custom-built dashboards. This gives much better
-results, and allows to graph multiple machines separately, using the
+results, and allows graphing multiple machines separately, using the
 [Node Exporter Server Metrics](https://grafana.com/dashboards/405)
 dashboard:
 
@@ -147,9 +147,9 @@ created by hand in Grafana, with a magic PromQL formula. This may
 involve too much clicking around in a web browser for grumpy old
 administrators. There are tools like
 [Grafanalib](https://github.com/weaveworks/grafanalib) to
-programmaticaly create dashboards, but those also involve a lot of
+programmatically create dashboards, but those also involve a lot of
 boilerplate. When building a custom application, however, creating
-graphs may actually be an fun and distracting task that some may enjoy.
+graphs may actually be a fun and distracting task that some may enjoy.
 The Grafana/Prometheus design is certainly enticing and enables powerful
 abstractions that are not readily available with other monitoring
 systems.
@@ -163,7 +163,7 @@ working over a decade as a system administrator, I have mixed feelings
 about "paging" or "alerting" as it's called in Prometheus. Regardless of
 how well the system is tweaked, I have come to believe it is basically
 impossible to design a system that will respect workers and not torture
-on-call personel through sleep-deprivation. It seems it's a feature
+on-call personnel through sleep-deprivation. It seems it's a feature
 people want regardless, especially in the enterprise, so let's look at
 how it works here.
 
@@ -189,7 +189,7 @@ notification is sent even if multiple alerts are received), and sends
 the actual notifications through [various
 services](https://prometheus.io/docs/alerting/configuration/) like
 email, PagerDuty, Slack or an arbitrary
-[webhooks](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
+[webhook](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
 
 The Alertmanager has a "gossip protocol" to enable multiple instances to
 coordinate notifications. This design allows you to run multiple
@@ -202,10 +202,10 @@ appreciate the simplicity of this design.
 
 The downside is that Prometheus doesn't ship a set of default alerts and
 exporters do not define default alerting thresholds that could be used
-to create rules automatically either. The Prometheus documentation [also
-lacks examples](https://github.com/prometheus/docs/issues/581) that the
-community could use as well, so alerting is harder to deploy than in
-classic monitoring systems.
+to create rules automatically. The Prometheus documentation [also lacks
+examples](https://github.com/prometheus/docs/issues/581) that the
+community could use, so alerting is harder to deploy than in classic
+monitoring systems.
 
 Issues and limitations
 ----------------------
@@ -227,12 +227,13 @@ to old system administrators who are used to RRDtool databases that
 efficiently store samples for years. As a comparison, my test Prometheus
 instance is taking up as much space for five days of samples as Munin,
 which has samples for the last year. Of course, Munin only collects
-metrics every five minutes, but this shows that Prometheus's disk
-requirements are much larger than traditionnal RRDtool implementations
-because it lacks native down-sampling facilities. Therefore, retaining
-samples for more than a year (which is a Munin limitation I was hoping
-to overcome) will be difficult without some serious hacking to
-selectively purge samples or adding extra disk space.
+metrics every five minutes while Prometheus samples all targets every
+15 seconds by default. Even so, this difference in sizes shows that
+Prometheus's disk requirements are much larger than traditional RRDtool
+implementations because it lacks native down-sampling facilities.
+Therefore, retaining samples for more than a year (which is a Munin
+limitation I was hoping to overcome) will be difficult without some
+serious hacking to selectively purge samples or adding extra disk space.
 
 The [project
 documentation](https://prometheus.io/docs/prometheus/latest/storage/)
@@ -250,8 +251,8 @@ infrastructure. And when *that* is not enough [sharding is
 possible](https://www.robustperception.io/scaling-and-federating-prometheus/).
 In general, performance is dependent on avoiding variable data in
 labels, which keeps the cardinality of the dataset under control, but
-regardless: the dataset size will grow with time. So long-term storage
-is not Prometheus' strongest suit. But starting with 2.0, Prometheus can
+the dataset size will grow with time regardless. So long-term storage is
+not Prometheus' strongest suit. But starting with 2.0, Prometheus can
 [finally](https://github.com/prometheus/prometheus/issues/10) write to
 (and read from) [external storage
 engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage)

reword last sent
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index 7842bdc9..e4227f6d 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -142,8 +142,10 @@ project. At KubeCon, the people presenting Prometheus were not from a
 single company: different organizations and freelance consultants are
 collaborating on the project. It is nice to see such a group working
 together; it reminds me of the golden days of the Drupal community,
-where passion was more about creating great tools than milking money
-out of a free software project. If the Prometheus project can organize
-well and keep that spirit together, this new release will light a
-bright path for the future of free software monitoring, that has been
-so strewn with drama and crisis.
+where passionate people were working more towards creating great tools
+than milking money out of the free software project. If Prometheus can
+organize well and keep that spirit together, this new release will
+light a bright path for the future of free software monitoring. This
+road has traditionally been strewn with a bit of drama and conflict, so
+it's nice to see a new team coming onto the scene with such positive
+energy.

try another rewrite after the fiasco of the previous article
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index c1efe00a..7842bdc9 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -1,91 +1,113 @@
 Changes in Prometheus 2.0
 =========================
 
-2017 was a big year for the Prometheus project, as it [published its
-2.0 release in November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/), which ships long-standing bugfixes, a new
-storage engine that promises performance improvements and new
-features. This comes at the cost of incompatible changes to the
-storage format and some changes in the configuration but hopefully the
-migration will be easy for most deployments.
+2017 was a big year for the [Prometheus](https://prometheus.io/) project, as it [published
+its 2.0 release in November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/). The new release ships long-standing
+bugfixes, new features and notably a new storage engine bringing major
+performance improvements. This comes at the cost of incompatible
+changes to the storage format and some changes in the configuration
+but hopefully the migration will be easy for most deployments. An
+overview of Prometheus and its new release was presented to the
+[Kubernetes](https://kubernetes.io/) community in a [talk](https://kccncna17.sched.com/event/Cs4d/prometheus-salon-hosted-by-frederic-branczyk-coreos-bob-cotton-freshtracksio-goutham-veeramanchaneni-tom-wilkie-kausal) held during [KubeCon +
+CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america). This article aims to cover what changed in this
+new release and what is brewing next in the growing Prometheus
+community.
 
 What changed
 ------------
 
-Performance problems were being felt particularly in container
-deployments. The 1.x series' performance was strained with short-lived
-containers: because some parameters like hostnames or IP addresses
-would change frequently in that environment, it would lead to some
-churn in the labels used by the storage engine, which created
-significant performance problems. A newly designed [storage engine](https://github.com/prometheus/tsdb)
-resolves this with promising results.
-
-The new engine [boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization) a hundred-fold I/O performance
-improvements, a three-fold improvement in CPU and five-fold in memory
-usage. This impacts large container deployments, but it also means
-performance improvements for regular configurations as
-well. Anecdotally, there was no noticeable extra load on the servers
-where I deployed Prometheus, at least none that *previous* monitoring
-tools could notice.
+Kubernetes created performance problems in the 1.x Prometheus storage
+engine, because the orchestration system can easily trigger massive
+container churn. This leads rapid changes in parameters like hostnames
+or IP addresses, called "labels" in Prometheus. A newly designed
+[storage engine](https://github.com/prometheus/tsdb) resolves this by explicitly dealing with changing
+labels. This was tested by monitoring a Kubernetes cluster where 50%
+of the pods would be swapped every 10 minutes, where the new design
+was proven to be much more effective. The new engine [boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization) a
+hundred-fold I/O performance improvements, a three-fold improvement in
+CPU and five-fold in memory usage. This impacts container deployments,
+but it also means improvements for any configuration as well
+Anecdotally, there was no noticeable extra load on the servers where I
+deployed Prometheus, at least nothing that the previous monitoring
+tool (Munin) could detect.
 
 Prometheus 2.0 also brings new features like snapshot backups. The
-project has a long-standing design wart about data volatility: the
-word in the community is "don't do backups" because data is
-disposable. This approach, according to Goutham Veeramanchaneni, one
-of the presenters at KubeCon, "apparently doesn't work for
+project has a long-standing design wart over data volatility: the word
+in the community is "don't do backups" because data is
+disposable. According to Goutham Veeramanchaneni, one of the
+presenters at KubeCon, "this approach apparently doesn't work for
 entreprise". Backups *were* previously possible, but they involved
-stopping the server and doing filesystem snapshots, which was not
-really an option for production deployments. So Prometheus now
-implements a way to do fast, consistent backups through the web API.
+stopping the server and doing filesystem snapshots. This was causing a
+short downtime that was unacceptable for certain production
+deployments. So Prometheus now implements a way to do fast, consistent
+backups through the web API.
 
 Another improvement is a fix to long-standing [staleness handling
-bug](https://github.com/prometheus/prometheus/issues/398) where it would take a long time (5 minutes, which, in
-Prometheus' world, is a long time) for the server to find when a host
-is down, for example. This would also cause problems with flapping
-services that would fail to be detected properly or would throw
-metrics out of whack.
+bug](https://github.com/prometheus/prometheus/issues/398) where it could take up to 5 minutes for Prometheus to notice
+changes in certain metrics, because it wouldn't detect stale samples
+correctly. This would also cause problems with longer scrape intervals
+(above 2 minutes, instead of the default 15 seconds) and
+double-counting of some metrics when labels vary for the same
+metric. Unfortunately, the latter is still not fixed in Prometheus 2.0
+as that was more complicated than originally thought, which means
+there's still a hard limit to how slow you can fetch metrics from
+targets. This, in turn, means that Prometheus may not be well suited
+for more fragile devices that may not be able to support sub-minute
+refresh rates.
 
 One downside of the new release is that the upgrade path from the
 previous version is bumpy: since the storage format changed,
-Prometheus 2.0 cannot use the storage from the previous 1.x series. In
-the talk, Veeramanchaneni justified this by saying this was consistent
-with its [API stability promises](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print): the major release is the time to
-"break everything we wanted to break". For those who can't afford to
-discard historical data, a [possible workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-in-prometheus-2-0/) is to replicate the
-older 1.8 server to a new 2.0 replica - the network protocols are
-still compatible - and discard the older server once the retention
-window closes. Still, new deployments should probably use the 2.0
-release to avoid the migration pain although there is some work in
-progress to try to provide a way to convert 1.8 data storage to 2.0.
+Prometheus 2.0 cannot use the previous 1.x data files directly. In the
+talk, Veeramanchaneni justified this by saying this was consistent
+with the project's [API stability promises](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print): the major release was
+the time to "break everything we wanted to break". For those who can't
+afford to discard historical data, a [possible workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-in-prometheus-2-0/) is to
+replicate the older 1.8 server to a new 2.0 replica - the network
+protocols are still compatible - and discard the older server once the
+retention window (which defaults to fifteen days) closes. While there
+is some work in progress to try to provide a way to convert 1.8 data
+storage to 2.0, new deployments should probably use the 2.0 release
+directly to avoid the migration pain.
 
 Another key point of [migration guide](https://prometheus.io/docs/prometheus/2.0/migration/) is a change in the rules
-file format. While a custom format was previously used, 2.0 now uses
-YAML, which makes sense because it is the format used in other
-Prometheus configuration files. Thankfully the [promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool) command
-handles this migration automatically. The [new format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/) also
-introduces [Rule groups](https://github.com/prometheus/prometheus/issues/1095), which improve control over the rules
-execution order. In 1.8, alerting rules were ran sequentially, but in
-2.0 the *groups* get executed esequentially and each group can have
-its own interval. This fixes the long-standing [race
-conditions between dependent rules](https://github.com/prometheus/prometheus/issues/1095).
+file format. While the 1.0 uses a custom format, 2.0 now uses YAML,
+like the other Prometheus configuration files. Thankfully the
+[promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool) command handles this migration automatically. The [new
+format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/) also introduces [rule groups](https://github.com/prometheus/prometheus/issues/1095), which improve control
+over the rules execution order. In 1.8, alerting rules were ran
+sequentially, but in 2.0 the *groups* get executed sequentially and
+each group can have its own interval. This fixes the long-standing
+[race conditions between dependent rules](https://github.com/prometheus/prometheus/issues/1095) that create
+inconsistencies when rules would reuse results. The problem should be
+fixed between groups, but rule authors still need to be careful of
+this issue *within* rule groups.
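
For reference, a rule file in the new format looks roughly like this,
reusing the bandwidth alert from the introductory article; the group
name and interval are arbitrary, and `promtool update rules <file>`
(if I remember the subcommand right) should produce something
equivalent from an old-style file:

    groups:
      - name: bandwidth            # arbitrary group name
        interval: 30s              # per-group evaluation interval
        rules:
          - alert: HighBandwidthUsage
            expr: rate(node_network_transmit_bytes{device="eth0"}[1m]) > 0.95*1e+09
            for: 5m
            labels:
              severity: critical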
 
 Remaining limitations and future
 --------------------------------
 
-As we have seen in the introduction article, Prometheus may not be
-adapted for all workflows because of the limited default dashboards
+As we have seen in the [introductory article](/Articles/744410/), Prometheus may not
+be adapted for all workflows because of the limited default dashboards
 and alerts, but also because the lack of data retention
-policies. There is, however, [work being done](https://github.com/prometheus/prometheus/issues/1381) in Prometheus (and
-its [native storage engine](https://github.com/prometheus/tsdb/issues/56)) to possibly support downsampling in
-Prometheus itself, although this is an issue core developers are not
-necessarily comfortable with. When asked on IRC, Brian Brazil, one of
-the lead Prometheus developers, stated that "downsampling is a very
-hard problem, I don't believe it should be handed in Prometheus".
-
-Besides, it is already possible to downsample, using [recording
-rules](https://prometheus.io/docs/practices/rules/) to aggregate samples, and collecting the results in a second
-server with a slower sampling rate and different retention
-policy. Unfortunately, this is not a suitable solution for smaller
-deployments.
+policies. There are, however, discussions about [variable per-series
+retention](https://github.com/prometheus/prometheus/issues/1381) in Prometheus and [native down-sampling support in the
+storage engine](https://github.com/prometheus/tsdb/issues/56), although this is a feature Prometheus developers
+are not really comfortable with. When asked on IRC, Brian Brazil, one
+of the lead Prometheus developers, [stated](https://riot.im/app/#/room/#prometheus:matrix.org/$15158612532461742oHkAM:matrix.org) that "downsampling is a
+very hard problem, I don't believe it should be handled in
+Prometheus".
+
+Besides, it is already possible to selectively [delete old series](https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md#delete-series)
+using the new 2.0 API. But Veeramanchaneni warned that this approach
+"puts extra pressure on Prometheus and unless you know what you are
+doing, its likely that you'll end up shooting yourself in the foot." A
+more common workaround for the lack of native archival facilities is
+to use [recording rules](https://prometheus.io/docs/practices/rules/) to aggregate samples and collect the
+results in a second server with a slower sampling rate and a
+different retention policy. And of course, the new release can also
+ship samples to [external storage engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), which are better
+suited to archival. Those solutions are obviously not suitable for
+smaller deployments, which need to make hard choices between
+discarding older samples and buying more disk space.
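
As a rough sketch of the recording-rules approach mentioned above (the
rule names, intervals and hostnames are all made up), the main server
would pre-aggregate samples with a rule group like the first block,
and the long-term server would then scrape only those aggregates
through the `/federate` endpoint, with a slower interval and a longer
`--storage.tsdb.retention`:

    # rules file on the main server: pre-aggregate raw samples
    groups:
      - name: downsample
        interval: 5m
        rules:
          - record: instance:node_network_transmit_bytes:rate5m
            expr: rate(node_network_transmit_bytes{device="eth0"}[5m])

    # prometheus.yml on the long-term server: pull only the aggregates
    scrape_configs:
      - job_name: 'federate'
        scrape_interval: 5m
        honor_labels: true
        metrics_path: '/federate'
        params:
          'match[]': ['{__name__=~"instance:.*"}']
        static_configs:
          - targets: ['prometheus-main.example.com:9090']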
 
 As part of the staleness improvements, Brazil also started working on
 "isolation" (the "I" in the [ACID acronym](https://en.wikipedia.org/wiki/ACID)) so that queries
@@ -93,40 +115,35 @@ wouldn't see "partial scrapes". This hasn't made the cut for the 2.0
 release, and is still [work in progress](https://github.com/prometheus/tsdb/pull/105), with some performance
 impacts (about 5% CPU and 10% RAM). This work would also be useful
 when heavy contention occurs in certain scenarios where Prometheus
-gets stuck on locking, which in turns impacts performance.
-
-Another performance improvement to be expected is a possible query
-engine rewrite. The current query engine can sometimes lead to
-excessive load for certain expensive queries, something that is
-acknowledged by the [security guide](https://prometheus.io/docs/operating/security/#denial-of-service). The idea here is to optimize
-the current engine so that those expensive queries wouldn't harm
+gets stuck on locking. The performance impacts could therefore be
+offset under heavy load.
+
+Another performance improvement mentioned during the talk is an
+eventual query engine rewrite. The current query engine can sometimes

(Diff truncated)
changes from lwn
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index 573118a2..ab4cf6df 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-01-15T00:00:00+0000"]]
-[[!meta updated="2018-01-16T10:14:27-0500"]]
+[[!meta updated="2018-01-16T13:20:32-0500"]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from
@@ -26,20 +26,21 @@ simplicity of its design: for one, metrics are exposed over HTTP using a
 special URL (`/metrics`) and a simple text format. Here is, as an
 example, some network metrics for a test machine:
 
-    $ curl -s http://curie:9100/metrics | grep node_network_.*_bytes
-    # HELP node_network_receive_bytes Network device statistic receive_bytes.
-    # TYPE node_network_receive_bytes gauge
-    node_network_receive_bytes{device="eth0"} 2.720630123e+09
-    # HELP node_network_transmit_bytes Network device statistic transmit_bytes.
-    # TYPE node_network_transmit_bytes gauge
-    node_network_transmit_bytes{device="eth0"} 4.03286677e+08
-
-In the above, the metrics are named `node_network_receive_bytes` and `node_network_transmit_bytes`, has a single
-label/value pair (`device=eth0`) attached to it, and finally the value of
-the metrics themselves. This is only some of hundreds of metrics
-(usage of CPU, memory, disk, temperature, and so on) exposed by the
-"node exporter", a basic stats collector running on monitored hosts.
-Metrics can be counters (e.g. per-interface packet counts), gauges (e.g.
+        $ curl -s http://curie:9100/metrics | grep node_network_.*_bytes
+        # HELP node_network_receive_bytes Network device statistic receive_bytes.
+        # TYPE node_network_receive_bytes gauge
+        node_network_receive_bytes{device="eth0"} 2.720630123e+09
+        # HELP node_network_transmit_bytes Network device statistic transmit_bytes.
+        # TYPE node_network_transmit_bytes gauge
+        node_network_transmit_bytes{device="eth0"} 4.03286677e+08
+
+In the above example, the metrics are named `node_network_receive_bytes`
+and `node_network_transmit_bytes`. They each have a single label/value
+pair (`device=eth0`) attached to them, along with the value of the
+metrics themselves. These are only two of the hundreds of metrics (usage
+of CPU, memory, disk, temperature, and so on) exposed by the "node
+exporter", a basic stats collector running on monitored hosts. Metrics
+can be counters (e.g. per-interface packet counts), gauges (e.g.
 temperature or fan sensors), or
 [histograms](https://prometheus.io/docs/practices/histograms/). The
 latter allow, for example, [95th
@@ -65,17 +66,19 @@ metadata (e.g. IP address found or server state), which is useful for
 dynamic environments such as Kubernetes or containers orchestration in
 general.
 
-Once collected, metrics can be queried through the web interface,
-using a custom language called PromQL.  For example, a query showing
-the average bandwidth over the last minute for `eth0` would look like:
+Once collected, metrics can be queried through the web interface, using
+a custom language called PromQL. For example, a query showing the
+average bandwidth over the last minute for interface `eth0` would look
+like:
 
-    rate(node_network_receive_bytes{device="eth0"}[1m])
+        rate(node_network_receive_bytes{device="eth0"}[1m])
 
 Notice the "device" label, which we use to restrict the search to a
-single interface. This query can also be plotted into a simple graph on the
-web interface:
+single interface. This query can also be plotted into a simple graph on
+the web interface:
 
-![My first Prometheus graph](https://paste.anarc.at/snaps/snap-2018.01.16-10.56.12.png)
+> [![\[My first Prometheus
+> graph\]](https://static.lwn.net/images/2018/prometheus-graph-sm.png)](https://lwn.net/Articles/744415/)
 
 What is interesting here is not really the node exporter metrics
 themselves, as those are fairly standard in any monitoring solution. But
 results, and allows graphing multiple machines separately, using the
 [Node Exporter Server Metrics](https://grafana.com/dashboards/405)
 dashboard:
 
-> ![Grafana dashboard](https://paste.anarc.at/snaps/snap-2018.01.16-10.58.20.png)
+> [![\[Grafana
+> dashboard\]](https://static.lwn.net/images/2018/prometheus-graph2-sm.png)](https://lwn.net/Articles/744415/#dash)
 
 All this work took roughly an hour of configuration, which is pretty
 good for a first try. Things get tougher when extending those basic
@@ -114,33 +118,41 @@ dashboard, not an *Apache* dashboard. So you need a [separate
 dashboard](https://grafana.com/dashboards/3894) for that. This is all
 work that's done automatically in Munin without any hand-holding.
 
-Even then, Apache is an easy one: monitoring some arbitrary server not
-supported by a custom exporter will require installing a program like
-[mtail](https://github.com/google/mtail/) that parses the server's logfiles to expose some metrics to
-Prometheus. There doesn't seem to be a way to write quick "run this
-command to count files" plugins that would allow administrators to
-write quick hacks. The options available are writing a [new
-exporter](https://prometheus.io/docs/instrumenting/writing_exporters/) using [client libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which seems to be a rather
-large undertaking for non-programmers. You can also use the [node
-exporter textfile option](https://github.com/prometheus/node_exporter#textfile-collector) which reads arbitrary metrics from plain
-text files in a directory. It's not as direct as running a shell
-command, but may be good enough for some use cases. Besides, there is
-a large number of [exporters already available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md), including ones
-that can tap into existing [Nagios](https://github.com/Griesbacher/Iapetos) and [Munin](https://github.com/pvdh/munin_exporter) servers to allow
-for a smooth transition.
-
-Unfortunately, those will only give you metrics, not graphs. To graph
-metrics from a third-party [Postfix
+Even then, Apache is a relatively easy one; monitoring some arbitrary
+server not supported by a custom exporter will require installing a
+program like [mtail](https://github.com/google/mtail/), which parses the
+server's logfiles to expose some metrics to Prometheus. There doesn't
+seem to be a way to write quick "run this command to count files"
+plugins that would allow administrators to write quick hacks. One
+option is writing a [new
+exporter](https://prometheus.io/docs/instrumenting/writing_exporters/)
+using [client
+libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which
+seems to be a rather large undertaking for non-programmers. You can also
+use the [node exporter textfile
+option](https://github.com/prometheus/node_exporter#textfile-collector),
+which reads arbitrary metrics from plain text files in a directory. It's
+not as direct as running a shell command, but may be good enough for
+some use cases. Besides, there are a large number of [exporters already
+available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md),
+including ones that can tap into existing
+[Nagios](https://github.com/Griesbacher/Iapetos) and
+[Munin](https://github.com/pvdh/munin_exporter) servers to allow for a
+smooth transition.
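
As an example of the kind of quick hack the textfile collector
permits, a cron job like the one below would expose the Postfix
deferred queue size; the directory is a guess and must match whatever
`--collector.textfile.directory` the node exporter was started with:

    #!/bin/sh
    # write the metric to a temporary file, then rename it so the node
    # exporter never reads a half-written file
    DIR=/var/lib/prometheus/node-exporter
    echo "postfix_queue_deferred $(find /var/spool/postfix/deferred -type f | wc -l)" \
        > "$DIR/postfix.prom.$$" && mv "$DIR/postfix.prom.$$" "$DIR/postfix.prom"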
+
+Unfortunately, those exporters will only give you metrics, not graphs.
+To graph metrics from a third-party [Postfix
 exporter](https://github.com/kumina/postfix_exporter), a graph must be
 created by hand in Grafana, with a magic PromQL formula. This may
 involve too much clicking around in a web browser for grumpy old
-administrators. There are tools like [Grafanalib](https://github.com/weaveworks/grafanalib) to
+administrators. There are tools like
+[Grafanalib](https://github.com/weaveworks/grafanalib) to
 programmatically create dashboards, but those also involve a lot of
 boilerplate. When building a custom application, however, creating
-graphs may actually be an fun and distracting task that some may
-enjoy. The Grafana/Prometheus design is certainly enticing and
-enables powerful abstractions that are not readily available with
-other monitoring systems.
+graphs may actually be a fun and distracting task that some may enjoy.
+The Grafana/Prometheus design is certainly enticing and enables powerful
+abstractions that are not readily available with other monitoring
+systems.
 
 Alerting and high availability
 ------------------------------
@@ -157,8 +169,8 @@ how it works here.
 
 In Prometheus, you design [alerting
 rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
-using PromQL. For example, to warn operators when network interfaces
-are close to saturation, we could set the following rule:
+using PromQL. For example, to warn operators when a network interface is
+close to saturation, we could set the following rule:
 
         alert: HighBandwidthUsage
         expr: rate(node_network_transmit_bytes{device="eth0"}[1m]) > 0.95*1e+09
@@ -211,17 +223,18 @@ workloads](https://prometheus.io/docs/introduction/overview/#when-does-it-not-fi
 
 In particular, Prometheus is not designed for long-term storage. By
 default, it keeps samples for only two weeks, which seems rather small
-for old system administrators used to RRDtool databases that efficiently
-store samples for years, with minimal disk space usage. As a comparison,
-my test Prometheus instance is taking up as much space for 5 days of
-samples than Munin, which has samples for the last year. Of course,
-Munin only collects metrics every 5 minutes, but this means Prometheus
-disk requirements are much larger than traditionnal RRDtool
-implementations, because it lacks native down-sampling
-facilities. Therefore, retaining
+to old system administrators who are used to RRDtool databases that
+efficiently store samples for years. As a comparison, my test Prometheus
+instance is taking up as much space for five days of samples as Munin,
+which has samples for the last year. Of course, Munin only collects
+metrics every five minutes, but this shows that Prometheus's disk
+requirements are much larger than traditional RRDtool implementations
+because it lacks native down-sampling facilities. Therefore, retaining
 samples for more than a year (which is a Munin limitation I was hoping
 to overcome) will be difficult without some serious hacking to
-selectively purge samples or add extra disk space. The [project
+selectively purge samples or adding extra disk space.
+
+The [project
 documentation](https://prometheus.io/docs/prometheus/latest/storage/)
 recognizes this and suggests using alternatives:
 
@@ -289,10 +302,14 @@ continuous-integration and deployment workflow. By integrating
 monitoring into development workflows, developers are immediately made
 aware of the performance impacts of proposed changes. Performance
 regressions can therefore be trivially identified quickly, which is a
-powerful tool for any application. Whereas system administrators may
-want to wait a bit before converting existing monitoring systems to
-Prometheus, application developers should certainly consider deploying
-Prometheus to instrument their applications, it will serve them well.
+powerful tool for any application.
+
+Whereas system administrators may want to wait a bit before converting

(Diff truncated)
more work
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
index c3e7b355..dc21b980 100644
--- a/blog/prometheus.mdwn
+++ b/blog/prometheus.mdwn
@@ -631,4 +631,5 @@ bugs filed or expanded:
  * [document how to hook properly in prometheus](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886893)
  * [do not use daemon when systemd is available](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886894)
  * [new upstream version available (0.5.0)](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=887112)
- * [disk usage metrics](https://github.com/prometheus/prometheus/issues/3684)
+ * [disk usage metrics](https://github.com/prometheus/prometheus/issues/3684), including [patch to the textfiles to
+   implement this](https://github.com/prometheus/node_exporter) and [proof-of-concept patch](https://github.com/anarcat/node_exporter/commit/48d02bdbdcf9e49a761eed30abcab2b5461bf05f)

rework article after proofreading from tincho
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index dd9f4591..573118a2 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -24,20 +24,19 @@ Monitoring with Prometheus and Grafana
 What distinguishes Prometheus from other solutions is the relative
 simplicity of its design: for one, metrics are exposed over HTTP using a
 special URL (`/metrics`) and a simple text format. Here is, as an
-example, some disk metrics for a test machine:
-
-        $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
-        # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
-        # TYPE node_disk_io_time_ms counter
-        node_disk_io_time_ms{device="dm-0"} 466884
-        node_disk_io_time_ms{device="dm-1"} 133776
-        node_disk_io_time_ms{device="dm-2"} 80
-        node_disk_io_time_ms{device="dm-3"} 349936
-        node_disk_io_time_ms{device="sda"} 446028
-
-In the above, the metric is named `node_disk_io_time_ms`, has a single
-label/value pair (`device=sda`) attached to it, and finally the value of
-the metric itself (`446028`). This is only one of hundreds of metrics
+example, some network metrics for a test machine:
+
+    $ curl -s http://curie:9100/metrics | grep node_network_.*_bytes
+    # HELP node_network_receive_bytes Network device statistic receive_bytes.
+    # TYPE node_network_receive_bytes gauge
+    node_network_receive_bytes{device="eth0"} 2.720630123e+09
+    # HELP node_network_transmit_bytes Network device statistic transmit_bytes.
+    # TYPE node_network_transmit_bytes gauge
+    node_network_transmit_bytes{device="eth0"} 4.03286677e+08
+
+In the above, the metrics are named `node_network_receive_bytes` and `node_network_transmit_bytes`, has a single
+label/value pair (`device=eth0`) attached to it, and finally the value of
+the metrics themselves. This is only some of hundreds of metrics
 (usage of CPU, memory, disk, temperature, and so on) exposed by the
 "node exporter", a basic stats collector running on monitored hosts.
 Metrics can be counters (e.g. per-interface packet counts), gauges (e.g.
@@ -66,25 +65,17 @@ metadata (e.g. IP address found or server state), which is useful for
 dynamic environments such as Kubernetes or containers orchestration in
 general.
 
-Once collected, metrics can be queried through the web interface, using
-a custom language called PromQL. For example, a query showing the
-average latency, per minute, for `/dev/sda` would look like:
+Once collected, metrics can be queried through the web interface,
+using a custom language called PromQL.  For example, a query showing
+the average bandwidth over the last minute for `eth0` would look like:
 
-        alert: DiskHighRequestLatency
-        expr: rate(node_disk_io_time_ms{device="sda"}[1m]) > 200
-        for: 5m
-        labels:
-          severity: critical
-        annotations:
-          description: 'Unusually high latency on disk {{ $labels.disk }}'
-          summary: 'High disk latency on {{ $labels.instance }}'
+    rate(node_network_receive_bytes{device="eth0"}[1m])
 
 Notice the "device" label, which we use to restrict the search to a
-single disk. This query can also be plotted into a simple graph on the
+single interface. This query can also be plotted into a simple graph on the
 web interface:
 
-> ![\[My first Prometheus
-> graph\]](https://static.lwn.net/images/2018/prometheus-graph.png)
+![My first Prometheus graph](https://paste.anarc.at/snaps/snap-2018.01.16-10.56.12.png)
 
 What is interesting here is not really the node exporter metrics
 themselves, as those are fairly standard in any monitoring solution. But
@@ -108,8 +99,7 @@ results, and allows to graph multiple machines separately, using the
 [Node Exporter Server Metrics](https://grafana.com/dashboards/405)
 dashboard:
 
-> [![\[Grafana
-> dashboard\]](https://static.lwn.net/images/2018/prometheus-graph2-sm.png)](https://lwn.net/Articles/744415/)
+> ![Grafana dashboard](https://paste.anarc.at/snaps/snap-2018.01.16-10.58.20.png)
 
 All this work took roughly an hour of configuration, which is pretty
 good for a first try. Things get tougher when extending those basic
@@ -124,38 +114,33 @@ dashboard, not an *Apache* dashboard. So you need a [separate
 dashboard](https://grafana.com/dashboards/3894) for that. This is all
 work that's done automatically in Munin without any hand-holding.
 
-Even then, Apache is an easy one: monitoring a Postfix server, for
-example, currently requires installing a program like
-[mtail](https://github.com/google/mtail/) that parses the Postfix
-logfiles to expose some metrics to Prometheus. Yet this will not give
-you critical metrics like queue sizes that can indicate a backlog. There
-doesn't seem to be a way to write quick "run this command to count
-files" plugins that would allow administrators to write such quick hacks
-as watching the queue sizes in Postfix, without writing a [new
-exporter](https://prometheus.io/docs/instrumenting/writing_exporters/)
-using [client
-libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which
-seems to be a rather large undertaking for non-programmers. There are,
-however, a large number of [exporters already
-available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md),
-including ones that can tap into existing
-[Nagios](https://github.com/Griesbacher/Iapetos) and
-[Munin](https://github.com/pvdh/munin_exporter) servers to allow for a
-smooth transition.
-
-However I couldn't find a dashboard that would show and analyse those
-logs at all. To graph metrics from the [postfix mtail
-plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail),
-a graph must be created by hand in Grafana, with a magic PromQL formula.
-This may involve too much clicking around in a web browser for grumpy
-old administrators. There are tools like
-[Grafanalib](https://github.com/weaveworks/grafanalib) to
+Even then, Apache is an easy one: monitoring some arbitrary server not
+supported by a custom exporter will require installing a program like
+[mtail](https://github.com/google/mtail/) that parses the server's logfiles to expose some metrics to
+Prometheus. There doesn't seem to be a way to write quick "run this
+command to count files" plugins that would allow administrators to
+write quick hacks. The options available are writing a [new
+exporter](https://prometheus.io/docs/instrumenting/writing_exporters/) using [client libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which seems to be a rather
+large undertaking for non-programmers. You can also use the [node
+exporter textfile option](https://github.com/prometheus/node_exporter#textfile-collector) which reads arbitrary metrics from plain
+text files in a directory. It's not as direct as running a shell
+command, but may be good enough for some use cases. Besides, there is
+a large number of [exporters already available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md), including ones
+that can tap into existing [Nagios](https://github.com/Griesbacher/Iapetos) and [Munin](https://github.com/pvdh/munin_exporter) servers to allow
+for a smooth transition.
+
+Unfortunately, those will only give you metrics, not graphs. To graph
+metrics from a third-party [Postfix
+exporter](https://github.com/kumina/postfix_exporter), a graph must be
+created by hand in Grafana, with a magic PromQL formula. This may
+involve too much clicking around in a web browser for grumpy old
+administrators. There are tools like [Grafanalib](https://github.com/weaveworks/grafanalib) to
 programmaticaly create dashboards, but those also involve a lot of
 boilerplate. When building a custom application, however, creating
-graphs may actually be an fun and distracting task that some may enjoy.
-The Grafana/Prometheus design is certainly enticing and enables powerful
-abstractions that are not readily available with other monitoring
-systems.
+graphs may actually be an fun and distracting task that some may
+enjoy. The Grafana/Prometheus design is certainly enticing and
+enables powerful abstractions that are not readily available with
+other monitoring systems.
 
 Alerting and high availability
 ------------------------------
@@ -172,10 +157,17 @@ how it works here.
 
 In Prometheus, you design [alerting
 rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
-using PromQL. For example, to make sure we complete all disk I/O within
-a certain timeframe (say 200ms), we could set the following rule:
+using PromQL. For example, to warn operators when network interfaces
+are close to saturation, we could set the following rule:
 
-        rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
+        alert: HighBandwidthUsage
+        expr: rate(node_network_transmit_bytes{device="eth0"}[1m]) > 0.95*1e+09
+        for: 5m
+        labels:
+          severity: critical
+        annotations:
+          description: 'Unusually high bandwidth on interface {{ $labels.device }}'
+          summary: 'High bandwidth on {{ $labels.instance }}'
 
 Those rules are regularly checked and matching rules are fired to an
 [alertmanager](https://github.com/prometheus/alertmanager) daemon that
@@ -221,16 +213,14 @@ In particular, Prometheus is not designed for long-term storage. By
 default, it keeps samples for only two weeks, which seems rather small
 for old system administrators used to RRDtool databases that efficiently
 store samples for years, with minimal disk space usage. As a comparison,
-here is the disk usage of Munin, which keeps one year of samples, and
-Prometheus, which has been running for a *day*, on my test server:
-
-        # du -sch /var/lib/munin /var/lib/prometheus/
-        368M    /var/lib/munin
-        149M    /var/lib/prometheus/
-
-To be fair, Munin only collects metrics every 5 minutes. But retaining
+my test Prometheus instance is taking up as much space for 5 days of
+samples than Munin, which has samples for the last year. Of course,
+Munin only collects metrics every 5 minutes, but this means Prometheus
+disk requirements are much larger than traditionnal RRDtool
+implementations, because it lacks native down-sampling
+facilities. Therefore, retaining
 samples for more than a year (which is a Munin limitation I was hoping
-to overcome) would be difficult without some serious hacking to
+to overcome) will be difficult without some serious hacking to
 selectively purge samples or add extra disk space. The [project
 documentation](https://prometheus.io/docs/prometheus/latest/storage/)
 recognizes this and suggests using alternatives:
@@ -299,9 +289,10 @@ continuous-integration and deployment workflow. By integrating
 monitoring into development workflows, developers are immediately made
 aware of the performance impacts of proposed changes. Performance
 regressions can therefore be trivially identified quickly, which is a
-powerful tool for any application.
-
-
+powerful tool for any application. Whereas system administrators may
+want to wait a bit before converting existing monitoring systems to
+Prometheus, application developers should certainly consider deploying
+Prometheus to instrument their applications, it will serve them well.

(Diff truncated)
more changes from lwn
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index c19892e7..dd9f4591 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-01-15T00:00:00+0000"]]
-[[!meta updated="2018-01-15T15:14:23-0500"]]
+[[!meta updated="2018-01-16T10:14:27-0500"]]
 
 [Prometheus](https://prometheus.io/) is a monitoring tool built from
 scratch by SoundCloud in 2012. It works by pulling metrics from
@@ -12,10 +12,11 @@ has a powerful query language to inspect that database, create alerts,
 and plot basic graphs. Those graphs can then be used to detect anomalies
 or trends for (possibly automated) resource provisioning. Prometheus
 also has extensive service discovery features and supports high
-availability configurations. That's what the brochure says anyways, let's see
-how it works in the hands of an old grumpy system administrator. I'll be
-drawing comparisons with Munin and Nagios frequently because those are
-the tools I have used for over a decade in monitoring Unix clusters.
+availability configurations. That's what the brochure says, anyway;
+let's see how it works in the hands of an old grumpy system
+administrator. I'll be drawing comparisons with Munin and Nagios
+frequently because those are the tools I have used for over a decade in
+monitoring Unix clusters.
 
 Monitoring with Prometheus and Grafana
 --------------------------------------
@@ -69,7 +70,14 @@ Once collected, metrics can be queried through the web interface, using
 a custom language called PromQL. For example, a query showing the
 average latency, per minute, for `/dev/sda` would look like:
 
-        rate(node_disk_io_time_ms{device="sda"}[1m])
+        alert: DiskHighRequestLatency
+        expr: rate(node_disk_io_time_ms{device="sda"}[1m]) > 200
+        for: 5m
+        labels:
+          severity: critical
+        annotations:
+          description: 'Unusually high latency on disk {{ $labels.disk }}'
+          summary: 'High disk latency on {{ $labels.instance }}'
 
 Notice the "device" label, which we use to restrict the search to a
 single disk. This query can also be plotted into a simple graph on the
@@ -120,11 +128,10 @@ Even then, Apache is an easy one: monitoring a Postfix server, for
 example, currently requires installing a program like
 [mtail](https://github.com/google/mtail/) that parses the Postfix
 logfiles to expose some metrics to Prometheus. Yet this will not give
-you critical metrics like queue sizes that can indicate a backlog.
-There doesn't seem to be a way to write quick "run this
-command to count files" plugins that would allow administrators to write
-such quick hacks as watching the queue sizes in Postfix, without writing
-a [new
+you critical metrics like queue sizes that can indicate a backlog. There
+doesn't seem to be a way to write quick "run this command to count
+files" plugins that would allow administrators to write such quick hacks
+as watching the queue sizes in Postfix, without writing a [new
 exporter](https://prometheus.io/docs/instrumenting/writing_exporters/)
 using [client
 libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which
@@ -139,9 +146,9 @@ smooth transition.
 However I couldn't find a dashboard that would show and analyse those
 logs at all. To graph metrics from the [postfix mtail
 plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail),
-a graph needs to be crafted by hand in Grafana, with a magic PromQL formula. This
-may involve too much clicking around in a web browser for grumpy old
-administrators. There are tools like
+a graph must be created by hand in Grafana, with a magic PromQL formula.
+This may involve too much clicking around in a web browser for grumpy
+old administrators. There are tools like
 [Grafanalib](https://github.com/weaveworks/grafanalib) to
 programmaticaly create dashboards, but those also involve a lot of
 boilerplate. When building a custom application, however, creating
@@ -168,14 +175,7 @@ rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules
 using PromQL. For example, to make sure we complete all disk I/O within
 a certain timeframe (say 200ms), we could set the following rule:
 
-    alert: DiskHighRequestLatency
-    expr: rate(node_disk_io_time_ms{device="sda"}[1m]) > 200
-    for: 5m
-    labels:
-      severity: critical
-    annotations:
-      description: 'Unusually high latency on disk {{ $labels.disk }}'
-      summary: 'High disk latency on {{ $labels.instance }}'
+        rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
 
 Those rules are regularly checked and matching rules are fired to an
 [alertmanager](https://github.com/prometheus/alertmanager) daemon that
@@ -247,9 +247,8 @@ infrastructure. And when *that* is not enough [sharding is
 possible](https://www.robustperception.io/scaling-and-federating-prometheus/).
 In general, performance is dependent on avoiding variable data in
 labels, which keeps the cardinality of the dataset under control, but
-regardless: the dataset size will grow linearly with time. So long-term
-storage is not Prometheus' strongest suit. But starting with 2.0,
-Prometheus can
+regardless: the dataset size will grow with time. So long-term storage
+is not Prometheus' strongest suit. But starting with 2.0, Prometheus can
 [finally](https://github.com/prometheus/prometheus/issues/10) write to
 (and read from) [external storage
 engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage)
@@ -274,35 +273,33 @@ accomplished through a simple firewall rule.
 
 There is a large empty space for Prometheus dashboards and alert
 templates. Whereas tools like Munin or Nagios had years to come up with
-lots of plugins and alerts, and converge over best practices like "70%
-disk usage is a warning but 90% is critical", those things all need to be configured manually in
-Prometheus. Prometheus should aim at shipping standard
-sets of dashboards and alerts for built-in metrics, but the project
-currently lacks the time to implement those. The [Grafana list of Prometheus
-dashboards](https://grafana.com/dashboards?dataSource=prometheus) also
-shows
-the problem: there are many different dashboards, sometimes multiple
-ones for the same task, and it's unclear which one is the best. There is
-therefore space for a curated list of dashboards and a definite need for
-expanding those to feature more extensive coverage.
-
-As a replacement for traditional monitoring tools, Prometheus may not
-be quite there yet, but it will get there and I would certainly advise
-administrators to keep an eye on the project. Besides, Munin and
-Nagios feature-parity is just a requirement from an old grumpy system
-administrator. For hip young application developers smoking weird
-stuff in containers, Prometheus is the bomb. Just take for example how
-[GitLab started integrating
+lots of plugins and alerts, and to converge on best practices like "70%
+disk usage is a warning but 90% is critical", those things all need to
+be configured manually in Prometheus. Prometheus should aim at shipping
+standard sets of dashboards and alerts for built-in metrics, but the
+project currently lacks the time to implement those.
+
+The [Grafana list of Prometheus
+dashboards](https://grafana.com/dashboards?dataSource=prometheus) shows
+one aspect of the problem: there are many different dashboards,
+sometimes multiple ones for the same task, and it's unclear which one is
+the best. There is therefore space for a curated list of dashboards and
+a definite need for expanding those to feature more extensive coverage.
+
+As a replacement for traditional monitoring tools, Prometheus may not be
+quite there yet, but it will get there and I would certainly advise
+administrators to keep an eye on the project. Besides, Munin and Nagios
+feature-parity is just a requirement from an old grumpy system
+administrator. For hip young application developers smoking weird stuff
+in containers, Prometheus is the bomb. Just take for example how [GitLab
+started integrating
 Prometheus](https://about.gitlab.com/2017/01/05/prometheus-and-gitlab/),
 not only to monitor GitLab.com itself, but also to monitor the
 continuous-integration and deployment workflow. By integrating
 monitoring into development workflows, developers are immediately made
 aware of the performance impacts of proposed changes. Performance
 regressions can therefore be trivially identified quickly, which is a
-powerful tool for any application. Whereas system administrators may
-want to wait a bit before converting existing monitoring systems to
-Prometheus, application developers should certainly consider deploying
-Prometheus to instrument their applications, it will serve them well.
+powerful tool for any application.
 
 
 
@@ -312,8 +309,3 @@ Prometheus to instrument their applications, it will serve them well.
 [Linux Weekly News]: http://lwn.net/
 
 [[!tag debian-planet lwn]]
-
-1. change metrics examples
-2. postfix *has* an exporter
-3. there's a textile module to suck output from shell scripts
-4. alert on service indicators instead of host metrics

add downsampling note
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index 557ad22f..c1efe00a 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -126,3 +126,7 @@ Notes
 -----
 
  * staleness presentation: https://www.youtube.com/watch?v=GcTzd2CLH7I
+
+re. downsampling:
+https://github.com/prometheus/prometheus/blob/master/docs/querying/api.md#delete-series
+https://twitter.com/putadent/status/952419777640742912

review from tincho
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index baa2eeec..c19892e7 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -312,3 +312,8 @@ Prometheus to instrument their applications, it will serve them well.
 [Linux Weekly News]: http://lwn.net/
 
 [[!tag debian-planet lwn]]
+
+1. change metrics examples
+2. postfix *has* an exporter
+3. there's a textile module to suck output from shell scripts
+4. alert on service indicators instead of host metrics

my own review after lwn changes
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
index 0bc2550f..baa2eeec 100644
--- a/blog/monitoring-prometheus.mdwn
+++ b/blog/monitoring-prometheus.mdwn
@@ -12,7 +12,7 @@ has a powerful query language to inspect that database, create alerts,
 and plot basic graphs. Those graphs can then be used to detect anomalies
 or trends for (possibly automated) resource provisioning. Prometheus
 also has extensive service discovery features and supports high
-availability configurations. That's the brochure says anyways, let's see
+availability configurations. That's what the brochure says anyways, let's see
 how it works in the hands of an old grumpy system administrator. I'll be
 drawing comparisons with Munin and Nagios frequently because those are
 the tools I have used for over a decade in monitoring Unix clusters.
@@ -120,8 +120,8 @@ Even then, Apache is an easy one: monitoring a Postfix server, for
 example, currently requires installing a program like
 [mtail](https://github.com/google/mtail/) that parses the Postfix
 logfiles to expose some metrics to Prometheus. Yet this will not give
-you critical metrics like queue sizes that can indicate a backlog
-condition. There doesn't seem to be a way to write quick "run this
+you critical metrics like queue sizes that can indicate a backlog.
+There doesn't seem to be a way to write quick "run this
 command to count files" plugins that would allow administrators to write
 such quick hacks as watching the queue sizes in Postfix, without writing
 a [new
@@ -139,7 +139,7 @@ smooth transition.
 However I couldn't find a dashboard that would show and analyse those
 logs at all. To graph metrics from the [postfix mtail
 plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail),
-a graph created by hand in Grafana, with a magic PromQL formula. This
+a graph needs to be crafted by hand in Grafana, with a magic PromQL formula. This
 may involve too much clicking around in a web browser for grumpy old
 administrators. There are tools like
 [Grafanalib](https://github.com/weaveworks/grafanalib) to
@@ -147,7 +147,7 @@ programmaticaly create dashboards, but those also involve a lot of
 boilerplate. When building a custom application, however, creating
 graphs may actually be an fun and distracting task that some may enjoy.
 The Grafana/Prometheus design is certainly enticing and enables powerful
-abstractions that are not easily available with other monitoring
+abstractions that are not readily available with other monitoring
 systems.
 
 Alerting and high availability
@@ -168,7 +168,14 @@ rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules
 using PromQL. For example, to make sure we complete all disk I/O within
 a certain timeframe (say 200ms), we could set the following rule:
 
-        rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
+    alert: DiskHighRequestLatency
+    expr: rate(node_disk_io_time_ms{device="sda"}[1m]) > 200
+    for: 5m
+    labels:
+      severity: critical
+    annotations:
+      description: 'Unusually high latency on disk {{ $labels.disk }}'
+      summary: 'High disk latency on {{ $labels.instance }}'
 
 Those rules are regularly checked and matching rules are fired to an
 [alertmanager](https://github.com/prometheus/alertmanager) daemon that
@@ -265,34 +272,37 @@ publicly without any protection. It would be nice to have at least
 IP-level blocking in the node exporter, although this could also be
 accomplished through a simple firewall rule.
 
-As a replacement for traditional monitoring tools, Prometheus may not be
-quite there yet, but it will get there and would certainly advise
-administrators to keep an eye on the project. Application developers
-should certainly consider deploying it to instrument their applications,
-it will serve them well.
-
 There is a large empty space for Prometheus dashboards and alert
 templates. Whereas tools like Munin or Nagios had years to come up with
 lots of plugins and alerts, and converge over best practices like "70%
-disk usage is a warning but 90% is critical", those things all need to
-be built into Prometheus. The [Grafana list of Prometheus
-dashboards](https://grafana.com/dashboards?dataSource=prometheus) shows
+disk usage is a warning but 90% is critical", those things all need to be configured manually in
+Prometheus. Prometheus should aim at shipping standard
+sets of dashboards and alerts for built-in metrics, but the project
+currently lacks the time to implement those. The [Grafana list of Prometheus
+dashboards](https://grafana.com/dashboards?dataSource=prometheus) also
+shows
 the problem: there are many different dashboards, sometimes multiple
 ones for the same task, and it's unclear which one is the best. There is
 therefore space for a curated list of dashboards and a definite need for
 expanding those to feature more extensive coverage.
 
-But this is just the requirement of an old grumpy system administrator.
-For hip young application developers smoking weird stuff in containers,
-Prometheus is the bomb. Just take for example how [GitLab started
-integrating
+As a replacement for traditional monitoring tools, Prometheus may not
+be quite there yet, but it will get there and I would certainly advise
+administrators to keep an eye on the project. Besides, Munin and
+Nagios feature-parity is just a requirement from an old grumpy system
+administrator. For hip young application developers smoking weird
+stuff in containers, Prometheus is the bomb. Just take for example how
+[GitLab started integrating
 Prometheus](https://about.gitlab.com/2017/01/05/prometheus-and-gitlab/),
 not only to monitor GitLab.com itself, but also to monitor the
 continuous-integration and deployment workflow. By integrating
 monitoring into development workflows, developers are immediately made
 aware of the performance impacts of proposed changes. Performance
 regressions can therefore be trivially identified quickly, which is a
-powerful tool for any application.
+powerful tool for any application. Whereas system administrators may
+want to wait a bit before converting existing monitoring systems to
+Prometheus, application developers should certainly consider deploying
+Prometheus to instrument their applications, it will serve them well.
 
 
 

some wart found in my own review
diff --git a/blog/prometheus-101.mdwn b/blog/prometheus-101.mdwn
deleted file mode 100644
index 9327c8d8..00000000
--- a/blog/prometheus-101.mdwn
+++ /dev/null
@@ -1,292 +0,0 @@
-Monitoring with Prometheus 2.0
-==============================
-
-[Prometheus](https://prometheus.io/) is a monitoring tool built from scratch by SoundCloud
-in 2012. It works by pulling metrics from monitored services and
-storing them in a time series database (TSDB). It has a powerful query
-language to inspect that database, create alerts, and plot basic
-graphs. Those graphs can then be used to detect anomalies or trends
-for (possibly automated) resource provisioning. Prometheus also has
-extensive service discovery features and supports high availability
-configurations.
-
-That's the brochure says anyways, let's see how it works in the hands
-of an old grumpy system administrator. I'll be drawing comparisons
-with Munin and Nagios frequently because those are the tools I have
-used for over a decade in monitoring Unix clusters.
-
-Monitoring with Prometheus and Grafana
---------------------------------------
-
-{should i show how to install prometheus? fairly trivial - go get or
-apt-get install works, basically}
-
-What distinguishes Prometheus from other solutions is the relative
-simplicity of its design: for one, metrics are exposed over HTTP using
-a special URL (`/metrics`) and a simple text format. Here is, as an
-example, some disk metrics for a test machine:
-
-    $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
-    # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
-    # TYPE node_disk_io_time_ms counter
-    node_disk_io_time_ms{device="dm-0"} 466884
-    node_disk_io_time_ms{device="dm-1"} 133776
-    node_disk_io_time_ms{device="dm-2"} 80
-    node_disk_io_time_ms{device="dm-3"} 349936
-    node_disk_io_time_ms{device="sda"} 446028
-
-{should i use the more traditionnal bandwidth example instead? kubecon
-talk used the "cars driving by" analogy which i didn't find so
-useful. i deliberately picked a tricky example that's not commonly
-available, but it might be too confusing}
-
-In the above, the metric is named `node_disk_io_time_ms`, has a single
-label/value pair (`device=sda`) attached to it, and finally the value
-of the metric itself (`446028`). This is only one of hundreds of
-metrics (usage of CPU, memory, disk, temperature, and so on) exposed by
-the "node exporter", a basic stats collector running on monitored
-hosts. Metrics can be counters (e.g. per-interface packet counts),
-gauges (e.g. temperature or fan sensors), or [histograms](https://prometheus.io/docs/practices/histograms/). The
-latter allow for example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, something
-that has been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is essential to
-billing networking customers. Another popular use for historgrams
-maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N requests are
-answered in X time. The various metrics types are carefully analyzed
-before being stored to correctly handle conditions like overflow
-(which occurs surprisingly often on gigabit network interfaces) or
-resets (when a device restarts).
-
-Those metrics are fetched from "targets", which are simply HTTP
-endpoints, added to the Prometheus configuration file. Targets can
-also be automatically added through various [discovery mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/)
-like DNS that allows having a single `A` or `SRV` that lists all the
-hosts to monitor; or Kubernetes or cloud providers APIs that list all
-containers or virtual machines to monitor. Discovery works in
-realtime, so it will correctly pickup changes in DNS, for example. It
-can also add metadata (e.g. IP address found or server state), which
-is useful for dynamic environments such as Kubernetes or containers
-orchestration in general.
-
-Once collected, metrics can be queried through the web interface,
-using a custom language called PromQL. For example, a query showing
-the average latency, per minute, for `/dev/sda` would look like:
-
-    rate(node_disk_io_time_ms{device="sda"}[1m])
-
-Notice the "device" label, which we use to restrict the search to a
-single disk. This query can also be plotted into a simple graph on the
-web interface:
-
-![my first prometheus graph](https://paste.anarc.at/snaps/snap-2018.01.12-16.29.20.png)
-
-What is interesting here is not really the node exporter metrics
-themselves, as those are fairly standard in any monitoring solution.
-But in Prometheus, any (web) application can easily expose their own
-internal metrics to the monitoring server through regular HTTP,
-whereas other systems would require special plugins, both on the
-monitoring server but also the application side. Note that Munin also
-follows a similar pattern, but speaks its own text protocol on top of
-TCP, which means it is harder to implement for web apps and diagnose
-with a web browser.
-
-However, coming from the world of Munin, where all sorts of graphics
-just magically appear out of the box, this first experience can be a
-bit of a disappointement: everything is built by hand and
-ephemeral. While there are ways to add custom graphs to the Prometheus
-web interface using Go-based [console templates](https://prometheus.io/docs/visualization/consoles/), most Prometheus
-deployments generally [use Grafana](https://prometheus.io/docs/visualization/grafana/) to render the results using
-custom-built dashboards. This gives much better results, and allows to
-graph multiple machines separately, using the [Node Exporter Server
-Metrics](https://grafana.com/dashboards/405) dashboard:
-
-![A Grafana dashboard showing metrics from 2 servers](https://paste.anarc.at/snaps/snap-2018.01.12-16.30.40.png)
-
-All this work took roughly an hour of configuration, which is pretty
-good for a first try. Things get tougher when extending those basic
-metrics: because of the system's modularity, it is difficult to add
-new metrics to existing dashboards. For example, web or mail servers
-are not monitored by the node exporter. So monitoring a web server
-involves installing an [Apache-specific exporter](https://github.com/Lusitaniae/apache_exporter) that needs to be
-added to the Prometheus configuration. But it won't show up
-automatically in the above dashboard, because that's a "node exporter"
-dashboard, not an *Apache* dashboard. So you need a [separate
-dashboard](https://grafana.com/dashboards/3894) for that. This is all work that's done automatically in
-Munin without any hand-holding.
-
-Event then, Apache is an easy one: monitoring a Postfix server, for
-example, currently requires installing program like [mtail](https://github.com/google/mtail/) that
-parses the Postfix logfiles to expose some metrics to Prometheus. Yet
-this will not tell you critical metrics like queue sizes that can
-alert administrators of backlog conditions. There doesn't seem to be a
-way to write quick "run this command to count files" plugins that
-would allow administrators to write such quick hacks as watching the
-queue sizes in Postfix, without writing a [new exporter](https://prometheus.io/docs/instrumenting/writing_exporters/) using
-[client libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which seems to be a rather large undertaking for
-non-programmers. There are, however, a large number of [exporters
-already available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md), including ones that can tap into existing
-[Nagios](https://github.com/Griesbacher/Iapetos_) and [Munin](https://github.com/pvdh/munin_exporter) servers to allow for a smooth transition.
-
-However, at the time of writing, I couldn't find a dashboard that would
-show and analyse those logs at all. To graph metrics from the [postfix
-mtail plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail), a graph created by hand in Grafana, with a magic
-PromQL formula. This may involve too much clicking around a web
-browser for grumpy old administrators. There are tools like
-[Grafanalib](https://github.com/weaveworks/grafanalib) to programmaticaly create dashboards, but those also
-involve a lot of boilerplate. When building a custom application,
-however, creating graphs may actually be an fun and distracting task
-that some may enjoy. The Grafana/Prometheus design is certainly
-enticing and enables powerful abstractions that are not easily done
-with other monitoring systems.
-
-Alerting and high availability
-------------------------------
-
-So far, we've worked only with a single server, and did only
-graphing. But Prometheus also supports sending alarms when things go
-bad. After working over a decade as a system administrator, I have
-mixed feelings about "paging" or "alerting" as it's called in
-Prometheus. Regardless of how well the system is tweaked, I have come
-to believe it is basically impossible to design a system that will
-respect workers and not torture on-call personel through
-sleep-deprivation. It seems it's a feature people want regardless,
-especially in the entreprise so let's look at how it works here.
-
-In Prometheus, you design [alert rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) using PromQL. For example,
-to make sure we complete all disk I/O within a certain timeframe (say
-200ms), we could set the following rule:
-
-    rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
-
-{this is a silly check - should i use something more obvious like disk
-space or keep consistency?}
-
-Those rules are regularly checked and matching rules are fired to an
-[alertmanager](https://github.com/prometheus/alertmanager) daemon that can receive alerts from multiple
-Prometheus servers. The alertmanager then deduplicates multiple
-alerts, regroups them (so a single notification is sent even if
-multiple alerts are received), and sends the actual notifications
-through [various services](https://prometheus.io/docs/alerting/configuration/) like email, PagerDuty, Slack or an
-arbitrary [webhooks](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
-
-The Alertmanager has a "gossip protocol" to enable multiple instance
-to coordinate notifications. This design allows you to run multiple
-Prometheus servers in a [federation](https://prometheus.io/docs/prometheus/latest/federation/) model, all simultaneously
-collecting metrics, and sending alerts to redundant Alertmanager
-instances to create a highly available monitoring system. Those who
-have struggled with such setups in Nagios will surely appreciate the
-simplicity of this design.
-
-The downside is that Prometheus doesn't ship a set of default alerts
-and exporters do not define default alerting thresholds that could be
-used to create rules automatically either. The Prometheus
-documentation [also lacks examples](https://github.com/prometheus/docs/issues/581) that the community could use as
-well, so alerting is harder to deploy than in classic monitoring
-systems.
-
-Issues and limitations
-----------------------
-
-Prometheus is already well-established out there: [Cloudflare](https://www.infoq.com/news/2017/10/monitoring-cloudflare-prometheus),
-[Canonical](https://www.cncf.io/blog/2017/07/17/prometheus-user-profile-canonical-talks-transition-prometheus/) and (of course) [SoundCloud](https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud) are all (still) using
-it in production. It is a common monitoring tool used in Kubernetes
-deployments because of its discovery features. Prometheus is, however,
-not a silver bullet and may [not the best tool for all workloads](https://prometheus.io/docs/introduction/overview/#when-does-it-not-fit?).
-

(Diff truncated)
changes from lwn
diff --git a/blog/monitoring-prometheus.mdwn b/blog/monitoring-prometheus.mdwn
new file mode 100644
index 00000000..0bc2550f
--- /dev/null
+++ b/blog/monitoring-prometheus.mdwn
@@ -0,0 +1,304 @@
+[[!meta title="Monitoring with Prometheus 2.0"]]
+\[LWN subscriber-only content\]
+-------------------------------
+
+[[!meta date="2018-01-15T00:00:00+0000"]]
+[[!meta updated="2018-01-15T15:14:23-0500"]]
+
+[Prometheus](https://prometheus.io/) is a monitoring tool built from
+scratch by SoundCloud in 2012. It works by pulling metrics from
+monitored services and storing them in a time series database (TSDB). It
+has a powerful query language to inspect that database, create alerts,
+and plot basic graphs. Those graphs can then be used to detect anomalies
+or trends for (possibly automated) resource provisioning. Prometheus
+also has extensive service discovery features and supports high
+availability configurations. That's what the brochure says anyway; let's see
+how it works in the hands of an old grumpy system administrator. I'll be
+drawing comparisons with Munin and Nagios frequently because those are
+the tools I have used for over a decade in monitoring Unix clusters.
+
+Monitoring with Prometheus and Grafana
+--------------------------------------
+
+What distinguishes Prometheus from other solutions is the relative
+simplicity of its design: for one, metrics are exposed over HTTP using a
+special URL (`/metrics`) and a simple text format. Here is, as an
+example, some disk metrics for a test machine:
+
+        $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
+        # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
+        # TYPE node_disk_io_time_ms counter
+        node_disk_io_time_ms{device="dm-0"} 466884
+        node_disk_io_time_ms{device="dm-1"} 133776
+        node_disk_io_time_ms{device="dm-2"} 80
+        node_disk_io_time_ms{device="dm-3"} 349936
+        node_disk_io_time_ms{device="sda"} 446028
+
+In the above, the metric is named `node_disk_io_time_ms`; it has a single
+label/value pair (`device=sda`) attached to it, followed by the value of
+the metric itself (`446028`). This is only one of hundreds of metrics
+(usage of CPU, memory, disk, temperature, and so on) exposed by the
+"node exporter", a basic stats collector running on monitored hosts.
+Metrics can be counters (e.g. per-interface packet counts), gauges (e.g.
+temperature or fan sensors), or
+[histograms](https://prometheus.io/docs/practices/histograms/). The
+latter allow, for example, [95th
+percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile)
+analysis, something that has been [missing from Munin
+forever](http://munin-monitoring.org/ticket/443) and is essential to
+billing networking customers. Another popular use for histograms is
+maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to
+make sure that N requests are answered in X time. The various metrics
+types are carefully analyzed before being stored to correctly handle
+conditions like overflow (which occurs surprisingly often on gigabit
+network interfaces) or resets (when a device restarts).
+
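+As an illustration of the latter, a 95th percentile is typically
+derived with the `histogram_quantile()` function in PromQL; the metric
+name below is only a made-up example, not something the node exporter
+exposes:
+
+        histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
+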
+Those metrics are fetched from "targets", which are simply HTTP
+endpoints, added to the Prometheus configuration file. Targets can also
+be automatically added through various [discovery
+mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/),
+like DNS, that allow having a single `A` or `SRV` record that lists all
+the hosts to monitor; or Kubernetes or cloud-provider APIs that list all
+containers or virtual machines to monitor. Discovery works in real time,
+so it will correctly pick up changes in DNS, for example. It can also add
+metadata (e.g. IP address found or server state), which is useful for
+dynamic environments such as Kubernetes or container orchestration in
+general.
+
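+As a rough sketch of the configuration involved, a `prometheus.yml`
+mixing a static target with DNS-based discovery could look something
+like this (the job names and the SRV record are invented for the
+example; `curie` is the test machine from above):
+
+        scrape_configs:
+          - job_name: node
+            # static list of node exporter endpoints
+            static_configs:
+              - targets: ['curie:9100']
+          - job_name: node-dns
+            # or discover targets from a single SRV record
+            dns_sd_configs:
+              - names: ['_node._tcp.example.com']
+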
+Once collected, metrics can be queried through the web interface, using
+a custom language called PromQL. For example, a query showing the
+average latency, per minute, for `/dev/sda` would look like:
+
+        rate(node_disk_io_time_ms{device="sda"}[1m])
+
+Notice the "device" label, which we use to restrict the search to a
+single disk. This query can also be plotted into a simple graph on the
+web interface:
+
+> ![\[My first Prometheus
+> graph\]](https://static.lwn.net/images/2018/prometheus-graph.png)
+
+What is interesting here is not really the node exporter metrics
+themselves, as those are fairly standard in any monitoring solution. But
+in Prometheus, any (web) application can easily expose its own internal
+metrics to the monitoring server through regular HTTP, whereas other
+systems would require special plugins, on both the monitoring server and
+the application side. Note that Munin follows a similar pattern, but
+uses its own text protocol on top of TCP, which means it is harder to
+implement for web apps and diagnose with a web browser.
+
+However, coming from the world of Munin, where all sorts of graphics
+just magically appear out of the box, this first experience can be a bit
+of a disappointment: everything is built by hand and ephemeral. While
+there are ways to add custom graphs to the Prometheus web interface
+using Go-based [console
+templates](https://prometheus.io/docs/visualization/consoles/), most
+Prometheus deployments generally [use
+Grafana](https://prometheus.io/docs/visualization/grafana/) to render
+the results using custom-built dashboards. This gives much better
+results, and allows graphing multiple machines separately, using the
+[Node Exporter Server Metrics](https://grafana.com/dashboards/405)
+dashboard:
+
+> [![\[Grafana
+> dashboard\]](https://static.lwn.net/images/2018/prometheus-graph2-sm.png)](https://lwn.net/Articles/744415/)
+
+All this work took roughly an hour of configuration, which is pretty
+good for a first try. Things get tougher when extending those basic
+metrics: because of the system's modularity, it is difficult to add new
+metrics to existing dashboards. For example, web or mail servers are not
+monitored by the node exporter. So monitoring a web server involves
+installing an [Apache-specific
+exporter](https://github.com/Lusitaniae/apache_exporter) that needs to
+be added to the Prometheus configuration. But it won't show up
+automatically in the above dashboard, because that's a "node exporter"
+dashboard, not an *Apache* dashboard. So you need a [separate
+dashboard](https://grafana.com/dashboards/3894) for that. This is all
+work that's done automatically in Munin without any hand-holding.
+
+Even then, Apache is an easy one: monitoring a Postfix server, for
+example, currently requires installing a program like
+[mtail](https://github.com/google/mtail/) that parses the Postfix
+logfiles to expose some metrics to Prometheus. Yet this will not give
+you critical metrics like queue sizes that can indicate a backlog
+condition. There doesn't seem to be a way to write quick "run this
+command to count files" plugins that would allow administrators to write
+such quick hacks as watching the queue sizes in Postfix, without writing
+a [new
+exporter](https://prometheus.io/docs/instrumenting/writing_exporters/)
+using [client
+libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which
+seems to be a rather large undertaking for non-programmers. There are,
+however, a large number of [exporters already
+available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md),
+including ones that can tap into existing
+[Nagios](https://github.com/Griesbacher/Iapetos) and
+[Munin](https://github.com/pvdh/munin_exporter) servers to allow for a
+smooth transition.
+
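+To give an idea of the work involved, here is a rough sketch of what
+such a hand-rolled "count files in the queue" exporter could look like
+with the Python client library; the queue directory and port are
+arbitrary choices for the example, not an official Postfix exporter:
+
+        # toy exporter: expose the number of files in a Postfix queue
+        # directory as a gauge (path and port are arbitrary examples)
+        import os
+        import time
+
+        from prometheus_client import Gauge, start_http_server
+
+        QUEUE_DIR = '/var/spool/postfix/incoming'
+        queue_size = Gauge('postfix_incoming_queue_size',
+                           'Number of messages in the Postfix incoming queue')
+
+        if __name__ == '__main__':
+            start_http_server(9101)
+            while True:
+                queue_size.set(len(os.listdir(QUEUE_DIR)))
+                time.sleep(15)
+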
+However, I couldn't find a dashboard that would show and analyse those
+logs at all. To graph metrics from the [postfix mtail
+plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail),
+a graph must be created by hand in Grafana, with a magic PromQL formula. This
+may involve too much clicking around in a web browser for grumpy old
+administrators. There are tools like
+[Grafanalib](https://github.com/weaveworks/grafanalib) to
+programmatically create dashboards, but those also involve a lot of
+boilerplate. When building a custom application, however, creating
+graphs may actually be a fun and distracting task that some may enjoy.
+The Grafana/Prometheus design is certainly enticing and enables powerful
+abstractions that are not easily available with other monitoring
+systems.
+
+Alerting and high availability
+------------------------------
+
+So far, we've worked only with a single server and done only graphing.
+But Prometheus also supports sending alarms when things go bad. After
+working over a decade as a system administrator, I have mixed feelings
+about "paging" or "alerting" as it's called in Prometheus. Regardless of
+how well the system is tweaked, I have come to believe it is basically
+impossible to design a system that will respect workers and not torture
+on-call personnel through sleep deprivation. It seems it's a feature
+people want regardless, especially in the enterprise, so let's look at
+how it works here.
+
+In Prometheus, you design [alerting
+rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
+using PromQL. For example, to make sure we complete all disk I/O within
+a certain timeframe (say 200ms), we could set the following rule:
+
+        rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
+
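+For reference, in the 2.0 series such an expression gets wrapped in a
+YAML rules file; a rough sketch, with an arbitrary alert name,
+duration, and labels, could look like this:
+
+        groups:
+          - name: disk
+            rules:
+              - alert: DiskIOTime
+                expr: rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
+                for: 5m
+                labels:
+                  severity: warning
+                annotations:
+                  summary: "disk I/O time rule triggered on sda"
+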
+Those rules are regularly checked and matching rules are fired to an
+[alertmanager](https://github.com/prometheus/alertmanager) daemon that
+can receive alerts from multiple Prometheus servers. The alertmanager
+then deduplicates multiple alerts, regroups them (so a single
+notification is sent even if multiple alerts are received), and sends
+the actual notifications through [various
+services](https://prometheus.io/docs/alerting/configuration/) like
+email, PagerDuty, Slack, or arbitrary
+[webhooks](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
+
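+As a sketch of what the Alertmanager side could look like, a minimal
+`alertmanager.yml` routing everything to a single email receiver might
+be as simple as this (the addresses and SMTP relay are placeholders):
+
+        route:
+          receiver: team-email
+          group_by: ['alertname', 'instance']
+        receivers:
+          - name: team-email
+            email_configs:
+              - to: 'oncall@example.com'
+                from: 'alertmanager@example.com'
+                smarthost: 'smtp.example.com:587'
+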
+The Alertmanager has a "gossip protocol" to enable multiple instances to
+coordinate notifications. This design allows you to run multiple
+Prometheus servers in a
+[federation](https://prometheus.io/docs/prometheus/latest/federation/)
+model, all simultaneously collecting metrics, and sending alerts to
+redundant Alertmanager instances to create a highly available monitoring
+system. Those who have struggled with such setups in Nagios will surely
+appreciate the simplicity of this design.
+
+The downside is that Prometheus doesn't ship a set of default alerts, and
+exporters do not define default alerting thresholds that could be used
+to create rules automatically either. The Prometheus documentation [also

(Diff truncated)
Added a comment: Re: no fun intended
diff --git a/blog/2017-12-20-demystifying-container-runtimes/comment_3_a46775326d9b0370d596f9c55281917a._comment b/blog/2017-12-20-demystifying-container-runtimes/comment_3_a46775326d9b0370d596f9c55281917a._comment
new file mode 100644
index 00000000..6e27cfc1
--- /dev/null
+++ b/blog/2017-12-20-demystifying-container-runtimes/comment_3_a46775326d9b0370d596f9c55281917a._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="104.163.166.8"
+ claimedauthor="Robin"
+ url="http://robin.millette.info/"
+ subject="Re: no fun intended"
+ date="2018-01-15T19:03:00Z"
+ content="""
+Yes, I first tried to track down the original quote, but it must have been a private email rather than something on a public list. At least I understand the sentence better now. A little «sic» never hurt anyone ;-)
+
+As for the versions to maintain, I understand completely; that's why I prefaced my suggestion the way I did.
+
+Thanks for the quick reply, see you!
+"""]]

yet another bug report
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
index 0ca25b2d..c3e7b355 100644
--- a/blog/prometheus.mdwn
+++ b/blog/prometheus.mdwn
@@ -630,3 +630,5 @@ bugs filed or expanded:
  * [grafana out of date](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=835210)
  * [document how to hook properly in prometheus](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886893)
  * [do not use daemon when systemd is available](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=886894)
+ * [new upstream version available (0.5.0)](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=887112)
+ * [disk usage metrics](https://github.com/prometheus/prometheus/issues/3684)

response
diff --git a/blog/2017-12-20-demystifying-container-runtimes/comment_2_cba2df8b74ea11573e19dbaec0f92118._comment b/blog/2017-12-20-demystifying-container-runtimes/comment_2_cba2df8b74ea11573e19dbaec0f92118._comment
new file mode 100644
index 00000000..37ee2d6c
--- /dev/null
+++ b/blog/2017-12-20-demystifying-container-runtimes/comment_2_cba2df8b74ea11573e19dbaec0f92118._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""Re: libreoffice (bad pun)"""
+ date="2018-01-15T01:38:17Z"
+ content="""
+Hi!
+
+`^^^` would be `we`, but I believe it is grammatically correct to omit it. Anyways, it's Walsh's quote, so not my fault. ;)
+
+As for the other `we`, let's say it's a "royal we", is that okay? ;) I try to make as few changes as possible to the text for the final publication here. I have a [converter](https://gitlab.com/anarcat/lwn) that does most of the work, so I try to automate as much as possible. Changing the grammar like that... would be really difficult and error-prone...
+
+happy new year to you too!
+"""]]

Added a comment: Word
diff --git a/blog/2017-12-20-demystifying-container-runtimes/comment_1_350510d048acd381c4081737d2c116b6._comment b/blog/2017-12-20-demystifying-container-runtimes/comment_1_350510d048acd381c4081737d2c116b6._comment
new file mode 100644
index 00000000..63467df0
--- /dev/null
+++ b/blog/2017-12-20-demystifying-container-runtimes/comment_1_350510d048acd381c4081737d2c116b6._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ ip="104.163.166.8"
+ claimedauthor="Robin"
+ url="http://robin.millette.info/"
+ subject="Word"
+ date="2018-01-14T22:21:34Z"
+ content="""
+Hi Anarcat,
+
+Happy 2018!
+
+You quote Dan Walsh but a word is missing (^^^), rkt?
+
+\"Without CoreOS we probably would not have CNI, and CRI and ^^^ would be still fighting about OCI.\"
+
+Also, if you maintain two versions of the text, two paragraphs up:
+\"something we covered back in 2015.\" - replace \"we\" with \"LWN\".
+
+"""]]

first review from lwn
diff --git a/blog/prometheus-101.mdwn b/blog/prometheus-101.mdwn
index cbafe77c..9327c8d8 100644
--- a/blog/prometheus-101.mdwn
+++ b/blog/prometheus-101.mdwn
@@ -1,21 +1,19 @@
 Monitoring with Prometheus 2.0
 ==============================
 
-Prometheus is a monitoring tool built from scratch by SoundCloud
+[Prometheus](https://prometheus.io/) is a monitoring tool built from scratch by SoundCloud
 in 2012. It works by pulling metrics from monitored services and
 storing them in a time series database (TSDB). It has a powerful query
 language to inspect that database, create alerts, and plot basic
 graphs. Those graphs can then be used to detect anomalies or trends
-for (possibly automated) resource provisionning. Prometheus also has
+for (possibly automated) resource provisioning. Prometheus also has
 extensive service discovery features and supports high availability
 configurations.
 
 That's what the brochure says anyway; let's see how it works in the hands
 of an old grumpy system administrator. I'll be drawing comparisons
 with Munin and Nagios frequently because those are the tools I have
-used for over a decade in monitoring Unix clusters. Readers already
-familiar with Prometheus can skip ahead to the last two sections to
-see what's new in Prometheus 2.0.
+used for over a decade in monitoring Unix clusters.
 
 Monitoring with Prometheus and Grafana
 --------------------------------------
@@ -45,18 +43,18 @@ available, but it might be too confusing}
 In the above, the metric is named `node_disk_io_time_ms`, has a single
 label/value pair (`device=sda`) attached to it, and finally the value
 of the metric itself (`446028`). This is only one of hundreds of
-metrics (usage of CPU, memory, disk, temperature and so on) exposed by
+metrics (usage of CPU, memory, disk, temperature, and so on) exposed by
 the "node exporter", a basic stats collector running on monitored
 hosts. Metrics can be counters (e.g. per-interface packet counts),
 gauges (e.g. temperature or fan sensors), or [histograms](https://prometheus.io/docs/practices/histograms/). The
-latter allow for example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, which is
-something that has been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is
-essential to billing networking customers. Another popular use for
-historgrams maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N
-requests are answered in X time. The various metrics types are
-carefully analyzed before being stored to correctly handle conditions
-like overflow (which occur surprisingly often on gigabit network
-interfaces) or resets (when a device restarts).
+latter allow for example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, something
+that has been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is essential to
+billing networking customers. Another popular use for histograms is
+maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N requests are
+answered in X time. The various metrics types are carefully analyzed
+before being stored to correctly handle conditions like overflow
+(which occurs surprisingly often on gigabit network interfaces) or
+resets (when a device restarts).
 
 Those metrics are fetched from "targets", which are simply HTTP
 endpoints, added to the Prometheus configuration file. Targets can
@@ -108,7 +106,7 @@ good for a first try. Things get tougher when extending those basic
 metrics: because of the system's modularity, it is difficult to add
 new metrics to existing dashboards. For example, web or mail servers
 are not monitored by the node exporter. So monitoring a web server
-involves installing an [Apache-specific exporter](https://github.com/Lusitaniae/apache_exporter) which needs to be
+involves installing an [Apache-specific exporter](https://github.com/Lusitaniae/apache_exporter) that needs to be
 added to the Prometheus configuration. But it won't show up
 automatically in the above dashboard, because that's a "node exporter"
 dashboard, not an *Apache* dashboard. So you need a [separate
@@ -118,12 +116,12 @@ Munin without any hand-holding.
 Even then, Apache is an easy one: monitoring a Postfix server, for
 example, currently requires installing a program like [mtail](https://github.com/google/mtail/) that
 parses the Postfix logfiles to expose some metrics to Prometheus. Yet
-this will not tell you critical metrics like queue sizes which can
+this will not tell you critical metrics like queue sizes that can
 alert administrators of backlog conditions. There doesn't seem to be a
 way to write quick "run this command to count files" plugins that
 would allow administrators to write such quick hacks as watching the
 queue sizes in Postfix, without writing a [new exporter](https://prometheus.io/docs/instrumenting/writing_exporters/) using
-[client libraries](https://prometheus.io/docs/instrumenting/clientlibs/) which seems to be a rather large undertaking for
+[client libraries](https://prometheus.io/docs/instrumenting/clientlibs/), which seems to be a rather large undertaking for
 non-programmers. There are, however, a large number of [exporters
 already available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md), including ones that can tap into existing
 [Nagios](https://github.com/Griesbacher/Iapetos_) and [Munin](https://github.com/pvdh/munin_exporter) servers to allow for a smooth transition.
@@ -163,7 +161,7 @@ to make sure we complete all disk I/O within a certain timeframe (say
 space or keep consistency?}
 
 Those rules are regularly checked and matching rules are fired to an
-[alertmanager](https://github.com/prometheus/alertmanager) daemon which can receive alerts from multiple
+[alertmanager](https://github.com/prometheus/alertmanager) daemon that can receive alerts from multiple
 Prometheus servers. The alertmanager then deduplicates multiple
 alerts, regroups them (so a single notification is sent even if
 multiple alerts are received), and sends the actual notifications
@@ -179,7 +177,7 @@ have struggled with such setups in Nagios will surely appreciate the
 simplicity of this design.
 
 The downside is that Prometheus doesn't ship a set of default alerts
-and exporters do not define default alerting thresholds which could be
+and exporters do not define default alerting thresholds that could be
 used to create rules automatically either. The Prometheus
 documentation [also lacks examples](https://github.com/prometheus/docs/issues/581) that the community could use as
 well, so alerting is harder to deploy than in classic monitoring
@@ -196,7 +194,7 @@ not a silver bullet and may [not the best tool for all workloads](https://promet
 
 In particular, Prometheus is not designed for long-term storage. By
 default, it keeps samples for only two weeks, which seems rather small
-for old system administrators used to RRDtool databases which efficiently store
+for old system administrators used to RRDtool databases that efficiently store
 samples for years, with minimal disk space usage. As a comparison,
 here is the disk usage of Munin, which keeps one year of samples, and
 Prometheus, which has been running for a *day*, on my test server:
@@ -226,16 +224,16 @@ variable data in labels, which keeps the cardinality of the dataset
 under control, but regardless: the dataset size will grow linearly
 with time. So long term storage is not Prometheus' strongest suit. But
 starting with 2.0, Prometheus can [finally](https://github.com/prometheus/prometheus/issues/10) write to (and read
+from) [external storage engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) that can be more efficient than
+from) [external storage engines](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage ) that can be more efficient than
 Prometheus at storage. [InfluxDB](https://en.wikipedia.org/wiki/InfluxDB), for example, can be used as a
-backend and supports time-based downsampling which makes long-term
+backend and supports time-based downsampling that makes long-term
 storage manageable. This deployment, however, is not for the faint of
 heart.
 
 Also, security freaks can't help but notice that all this is happening
 over a clear-text HTTP protocol. Indeed, [by design](https://prometheus.io/docs/operating/security/#authentication-authorisation-encryption), "Prometheus
 and its components do not provide any server-side authentication,
-authorisation or encryption. If you require this, it is recommended to
+authorisation, or encryption. If you require this, it is recommended to
 use a reverse proxy." The issue is punted to a layer above, which is
 fine for the web interface: it is, after all, just a few Prometheus
 instances that need to be protected. But for monitoring endpoints,
@@ -252,7 +250,7 @@ applications, it will serve them well.
 
 There is a large empty space for Prometheus dashboards and alert
 templates. Whereas tools like Munin or Nagios had years to come up
-with lots of plugins, alerts and converge over best practices like
+with lots of plugins and alerts, and converge on best practices like
 "70% disk usage warning but 90% is critical", those things all need to
 be built in Prometheus. As Veeramanchaneni explained in his
 presentation, "every time there is an exporter, there should be a
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index ee775f06..557ad22f 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -3,7 +3,7 @@ Changes in Prometheus 2.0
 
 2017 was a big year for the Prometheus project, as it [published its
 2.0 release in November](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/), which ships long-standing bugfixes, a new
-storage engine which promises performance improvements and new
+storage engine that promises performance improvements and new
 features. This comes at the cost of incompatible changes to the
 storage format and some changes in the configuration but hopefully the
 migration will be easy for most deployments.
@@ -11,13 +11,13 @@ migration will be easy for most deployments.
 What changed
 ------------
 
-Performance issues were being felt particularly in container
+Performance problems were being felt particularly in container
 deployments. The 1.x series' performance was strained with short-lived
 containers: because some parameters like hostnames or IP addresses
 would change frequently in that environment, it would lead to some
 churn in the labels used by the storage engine, which created
-significant performance issues. A newly designed [storage engine](https://github.com/prometheus/tsdb_)
-resolves those issues with promising results.
+significant performance problems. A newly designed [storage engine](https://github.com/prometheus/tsdb)
+resolves this with promising results.
 
 The new engine [boasts](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization) a hundred-fold I/O performance
 improvements, a three-fold improvement in CPU and five-fold in memory
@@ -59,17 +59,17 @@ progress to try to provide a way to convert 1.8 data storage to 2.0.
 
 Another key point of [migration guide](https://prometheus.io/docs/prometheus/2.0/migration/) is a change in the rules
 file format. While a custom format was previously used, 2.0 now uses
-YAML which makes sense because it is the format used in other
+YAML, which makes sense because it is the format used in other
 Prometheus configuration files. Thankfully the [promtool](https://github.com/prometheus/prometheus/tree/master/cmd/promtool) command
 handles this migration automatically. The [new format](https://prometheus.io/blog/2017/06/21/prometheus-20-alpha3-new-rule-format/) also
 introduces [Rule groups](https://github.com/prometheus/prometheus/issues/1095), which improve control over the rules
 execution order. In 1.8, alerting rules were run sequentially, but in
 2.0 the *groups* get executed sequentially and each group can have
-its own interval. This fixes issues with the long-standing [race
+its own interval. This fixes the long-standing [race
 conditions between dependent rules](https://github.com/prometheus/prometheus/issues/1095).
 
-Remaining issues and future
----------------------------
+Remaining limitations and future
+--------------------------------
 
 As we have seen in the introduction article, Prometheus may not be
 adapted for all workflows because of the limited default dashboards

another possible link
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index d95ae9f8..ee775f06 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -121,3 +121,8 @@ communities, where passion was more about creating great tools than
 milking money out of a community. If this community can organize well
 and keep that spirit together, this new release will light a bright
 path for the future of the Prometheus project.
+
+Notes
+-----
+
+ * staleness presentation: https://www.youtube.com/watch?v=GcTzd2CLH7I

add link for migration path
diff --git a/blog/prometheus-2.0.mdwn b/blog/prometheus-2.0.mdwn
index d39b4f77..d95ae9f8 100644
--- a/blog/prometheus-2.0.mdwn
+++ b/blog/prometheus-2.0.mdwn
@@ -50,7 +50,7 @@ Prometheus 2.0 cannot use the storage from the previous 1.x series. In
 the talk, Veeramanchaneni justified this by saying this was consistent
 with its [API stability promises](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print): the major release is the time to
 "break everything we wanted to break". For those who can't afford to
-discard historical data, a possible workaround is to replicate the
+discard historical data, a [possible workaround](https://www.robustperception.io/accessing-data-from-prometheus-1-x-in-prometheus-2-0/) is to replicate the
 older 1.8 server to a new 2.0 replica - the network protocols are
 still compatible - and discard the older server once the retention
 window closes. Still, new deployments should probably use the 2.0

split prometheus article in two
diff --git a/blog/prometheus-101.mdwn b/blog/prometheus-101.mdwn
new file mode 100644
index 00000000..cbafe77c
--- /dev/null
+++ b/blog/prometheus-101.mdwn
@@ -0,0 +1,294 @@
+Monitoring with Prometheus 2.0
+==============================
+
+Prometheus is a monitoring tool built from scratch by SoundCloud
+in 2012. It works by pulling metrics from monitored services and
+storing them in a time series database (TSDB). It has a powerful query
+language to inspect that database, create alerts, and plot basic
+graphs. Those graphs can then be used to detect anomalies or trends
+for (possibly automated) resource provisionning. Prometheus also has
+extensive service discovery features and supports high availability
+configurations.
+
+That's the brochure says anyways, let's see how it works in the hands
+of an old grumpy system administrator. I'll be drawing comparisons
+with Munin and Nagios frequently because those are the tools I have
+used for over a decade in monitoring Unix clusters. Readers already
+familiar with Prometheus can skip ahead to the last two sections to
+see what's new in Prometheus 2.0.
+
+Monitoring with Prometheus and Grafana
+--------------------------------------
+
+{should i show how to install prometheus? fairly trivial - go get or
+apt-get install works, basically}
+
+What distinguishes Prometheus from other solutions is the relative
+simplicity of its design: for one, metrics are exposed over HTTP using
+a special URL (`/metrics`) and a simple text format. Here is, as an
+example, some disk metrics for a test machine:
+
+    $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
+    # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
+    # TYPE node_disk_io_time_ms counter
+    node_disk_io_time_ms{device="dm-0"} 466884
+    node_disk_io_time_ms{device="dm-1"} 133776
+    node_disk_io_time_ms{device="dm-2"} 80
+    node_disk_io_time_ms{device="dm-3"} 349936
+    node_disk_io_time_ms{device="sda"} 446028
+
+{should i use the more traditionnal bandwidth example instead? kubecon
+talk used the "cars driving by" analogy which i didn't find so
+useful. i deliberately picked a tricky example that's not commonly
+available, but it might be too confusing}
+
+In the above, the metric is named `node_disk_io_time_ms`, has a single
+label/value pair (`device=sda`) attached to it, and finally the value
+of the metric itself (`446028`). This is only one of hundreds of
+metrics (usage of CPU, memory, disk, temperature and so on) exposed by
+the "node exporter", a basic stats collector running on monitored
+hosts. Metrics can be counters (e.g. per-interface packet counts),
+gauges (e.g. temperature or fan sensors), or [histograms](https://prometheus.io/docs/practices/histograms/). The
+latter allow for example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, which is
+something that has been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is
+essential to billing networking customers. Another popular use for
+historgrams maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N
+requests are answered in X time. The various metrics types are
+carefully analyzed before being stored to correctly handle conditions
+like overflow (which occur surprisingly often on gigabit network
+interfaces) or resets (when a device restarts).
+
+Those metrics are fetched from "targets", which are simply HTTP
+endpoints, added to the Prometheus configuration file. Targets can
+also be automatically added through various [discovery mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/)
+like DNS that allows having a single `A` or `SRV` that lists all the
+hosts to monitor; or Kubernetes or cloud providers APIs that list all
+containers or virtual machines to monitor. Discovery works in
+realtime, so it will correctly pickup changes in DNS, for example. It
+can also add metadata (e.g. IP address found or server state), which
+is useful for dynamic environments such as Kubernetes or containers
+orchestration in general.
+
+Once collected, metrics can be queried through the web interface,
+using a custom language called PromQL. For example, a query showing
+the average latency, per minute, for `/dev/sda` would look like:
+
+    rate(node_disk_io_time_ms{device="sda"}[1m])
+
+Notice the "device" label, which we use to restrict the search to a
+single disk. This query can also be plotted into a simple graph on the
+web interface:
+
+![my first prometheus graph](https://paste.anarc.at/snaps/snap-2018.01.12-16.29.20.png)
+
+What is interesting here is not really the node exporter metrics
+themselves, as those are fairly standard in any monitoring solution.
+But in Prometheus, any (web) application can easily expose their own
+internal metrics to the monitoring server through regular HTTP,
+whereas other systems would require special plugins, both on the
+monitoring server but also the application side. Note that Munin also
+follows a similar pattern, but speaks its own text protocol on top of
+TCP, which means it is harder to implement for web apps and diagnose
+with a web browser.
+
+However, coming from the world of Munin, where all sorts of graphics
+just magically appear out of the box, this first experience can be a
+bit of a disappointement: everything is built by hand and
+ephemeral. While there are ways to add custom graphs to the Prometheus
+web interface using Go-based [console templates](https://prometheus.io/docs/visualization/consoles/), most Prometheus
+deployments generally [use Grafana](https://prometheus.io/docs/visualization/grafana/) to render the results using
+custom-built dashboards. This gives much better results, and allows to
+graph multiple machines separately, using the [Node Exporter Server
+Metrics](https://grafana.com/dashboards/405) dashboard:
+
+![A Grafana dashboard showing metrics from 2 servers](https://paste.anarc.at/snaps/snap-2018.01.12-16.30.40.png)
+
+All this work took roughly an hour of configuration, which is pretty
+good for a first try. Things get tougher when extending those basic
+metrics: because of the system's modularity, it is difficult to add
+new metrics to existing dashboards. For example, web or mail servers
+are not monitored by the node exporter. So monitoring a web server
+involves installing an [Apache-specific exporter](https://github.com/Lusitaniae/apache_exporter) which needs to be
+added to the Prometheus configuration. But it won't show up
+automatically in the above dashboard, because that's a "node exporter"
+dashboard, not an *Apache* dashboard. So you need a [separate
+dashboard](https://grafana.com/dashboards/3894) for that. This is all work that's done automatically in
+Munin without any hand-holding.
+
+Event then, Apache is an easy one: monitoring a Postfix server, for
+example, currently requires installing program like [mtail](https://github.com/google/mtail/) that
+parses the Postfix logfiles to expose some metrics to Prometheus. Yet
+this will not tell you critical metrics like queue sizes which can
+alert administrators of backlog conditions. There doesn't seem to be a
+way to write quick "run this command to count files" plugins that
+would allow administrators to write such quick hacks as watching the
+queue sizes in Postfix, without writing a [new exporter](https://prometheus.io/docs/instrumenting/writing_exporters/) using
+[client libraries](https://prometheus.io/docs/instrumenting/clientlibs/) which seems to be a rather large undertaking for
+non-programmers. There are, however, a large number of [exporters
+already available](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exporters.md), including ones that can tap into existing
+[Nagios](https://github.com/Griesbacher/Iapetos_) and [Munin](https://github.com/pvdh/munin_exporter) servers to allow for a smooth transition.
+
+However, at the time of writing, I couldn't find a dashboard that would
+show and analyse those logs at all. To graph metrics from the [postfix
+mtail plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail), a graph created by hand in Grafana, with a magic
+PromQL formula. This may involve too much clicking around a web
+browser for grumpy old administrators. There are tools like
+[Grafanalib](https://github.com/weaveworks/grafanalib) to programmaticaly create dashboards, but those also
+involve a lot of boilerplate. When building a custom application,
+however, creating graphs may actually be an fun and distracting task
+that some may enjoy. The Grafana/Prometheus design is certainly
+enticing and enables powerful abstractions that are not easily done
+with other monitoring systems.
+
+Alerting and high availability
+------------------------------
+
+So far, we've worked only with a single server, and did only
+graphing. But Prometheus also supports sending alarms when things go
+bad. After working over a decade as a system administrator, I have
+mixed feelings about "paging" or "alerting" as it's called in
+Prometheus. Regardless of how well the system is tweaked, I have come
+to believe it is basically impossible to design a system that will
+respect workers and not torture on-call personel through
+sleep-deprivation. It seems it's a feature people want regardless,
+especially in the entreprise so let's look at how it works here.
+
+In Prometheus, you design [alert rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) using PromQL. For example,
+to make sure we complete all disk I/O within a certain timeframe (say
+200ms), we could set the following rule:
+
+    rate(node_disk_io_time_ms{device="sda"}[1m]) < 200
+
+{this is a silly check - should i use something more obvious like disk
+space or keep consistency?}
+
+Those rules are regularly checked and matching rules are fired to an
+[alertmanager](https://github.com/prometheus/alertmanager) daemon which can receive alerts from multiple
+Prometheus servers. The alertmanager then deduplicates multiple
+alerts, regroups them (so a single notification is sent even if
+multiple alerts are received), and sends the actual notifications
+through [various services](https://prometheus.io/docs/alerting/configuration/) like email, PagerDuty, Slack or an
+arbitrary [webhooks](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver).
+
+The Alertmanager has a "gossip protocol" to enable multiple instance
+to coordinate notifications. This design allows you to run multiple
+Prometheus servers in a [federation](https://prometheus.io/docs/prometheus/latest/federation/) model, all simultaneously
+collecting metrics, and sending alerts to redundant Alertmanager
+instances to create a highly available monitoring system. Those who
+have struggled with such setups in Nagios will surely appreciate the
+simplicity of this design.
+
+The downside is that Prometheus doesn't ship a set of default alerts
+and exporters do not define default alerting thresholds which could be
+used to create rules automatically either. The Prometheus
+documentation [also lacks examples](https://github.com/prometheus/docs/issues/581) that the community could use as
+well, so alerting is harder to deploy than in classic monitoring
+systems.
+
+Issues and limitations
+----------------------
+
+Prometheus is already well-established out there: [Cloudflare](https://www.infoq.com/news/2017/10/monitoring-cloudflare-prometheus),
+[Canonical](https://www.cncf.io/blog/2017/07/17/prometheus-user-profile-canonical-talks-transition-prometheus/) and (of course) [SoundCloud](https://developers.soundcloud.com/blog/prometheus-monitoring-at-soundcloud) are all (still) using
+it in production. It is a common monitoring tool used in Kubernetes
+deployments because of its discovery features. Prometheus is, however,

(Diff truncated)
more notes
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
index 0bcdfc6d..f982efa5 100644
--- a/blog/prometheus.mdwn
+++ b/blog/prometheus.mdwn
@@ -348,6 +348,10 @@ Notes
 
 not for publication
 
+{grafana config management?}
+
+{[unsee alertmanager frontend](https://github.com/cloudflare/unsee)}
+
 {flap detection? not included - should not happen?
 https://github.com/prometheus/alertmanager/issues/204}
 

first rewrite
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
index 52949567..0bcdfc6d 100644
--- a/blog/prometheus.mdwn
+++ b/blog/prometheus.mdwn
@@ -1,22 +1,21 @@
 Monitoring with Prometheus 2.0
 ==============================
 
-Prometheus is a fairly new monitoring tool that was built from the
-ground up at SoundCloud. Prometheus works by pulling metrics from
-monitored hosts and storing them in a time series database (TSDB). It
-has a powerful query language to inspect that database, create alerts,
-and plot basic graphs. Those graphs can then be used to detect
-anomalies or figure out trends to establish server
-provisionning. Prometheus also has extensive service discovery
-features and supports high availability configurations.
+Prometheus is a monitoring tool built from scratch by SoundCloud
+in 2012. It works by pulling metrics from monitored services and
+storing them in a time series database (TSDB). It has a powerful query
+language to inspect that database, create alerts, and plot basic
+graphs. Those graphs can then be used to detect anomalies or trends
+for (possibly automated) resource provisionning. Prometheus also has
+extensive service discovery features and supports high availability
+configurations.
 
 That's the brochure says anyways, let's see how it works in the hands
-of your favorite old grumpy system administrator. I'll be drawing
-comparisons with Munin and Nagios frequently because those are the
-tools I have used for over a decade in monitoring clusters of
-machines, relatively effectively. Readers already familiar with
-Prometheus can skip ahead to the last two sections to see what's new
-in Prometheus 2.0.
+of an old grumpy system administrator. I'll be drawing comparisons
+with Munin and Nagios frequently because those are the tools I have
+used for over a decade in monitoring Unix clusters. Readers already
+familiar with Prometheus can skip ahead to the last two sections to
+see what's new in Prometheus 2.0.
 
 Monitoring with Prometheus and Grafana
 --------------------------------------
@@ -25,9 +24,9 @@ Monitoring with Prometheus and Grafana
 apt-get install works, basically}
 
 What distinguishes Prometheus from other solutions is the relative
-simplicity of its design: for example, metrics are exposed over HTTP
-using a special URL (`/metrics`) and a simple text format. Here is the
-latency of the disks on my local workstation:
+simplicity of its design: for one, metrics are exposed over HTTP using
+a special URL (`/metrics`) and a simple text format. Here is, as an
+example, some disk metrics for a test machine:
 
     $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
     # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
@@ -38,53 +37,49 @@ latency of the disks on my local workstation:
     node_disk_io_time_ms{device="dm-3"} 349936
     node_disk_io_time_ms{device="sda"} 446028
 
-{should i use the more traditionnal bandwidth example instead? talk
-used the "cars driving by" analogy which i didn't find so useful. i
-deliberately picked a tricky example that's not commonly available,
-but it might be too confusing}
-
-In the above, the metric is named `node_disk_io_time_ms` and has a
-single label/value pair (`device=sda`) attached to it and finally the
-value of the metric itself (`446028`), which is a counter summing the
-time disks have been spinning doing I/O, which I'll (somewhat wrongly)
-summarize as disk latency. This is only one of hundreds of metrics
-(usage of CPU, memory, disk, temperature and so on) exposed by the
-"node exporter", a simple program designed to run on monitored
+{should i use the more traditionnal bandwidth example instead? kubecon
+talk used the "cars driving by" analogy which i didn't find so
+useful. i deliberately picked a tricky example that's not commonly
+available, but it might be too confusing}
+
+In the above, the metric is named `node_disk_io_time_ms`, has a single
+label/value pair (`device=sda`) attached to it, and finally the value
+of the metric itself (`446028`). This is only one of hundreds of
+metrics (usage of CPU, memory, disk, temperature and so on) exposed by
+the "node exporter", a basic stats collector running on monitored
 hosts. Metrics can be counters (e.g. per-interface packet counts),
-gauges (e.g. temperature sensor), or [histograms](https://prometheus.io/docs/practices/histograms/) (a more complex
-metric that samples observations into buckets). Histograms allow for
-example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, which is something that has
-been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is essential to billing
-networking customers. Another popular use of historgrams is checking
-an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N requests are answered in X
-time. The various metrics types are carefully analyzed before being
-stored to handle conditions like overflow (which occur surprisingly
-often on gigabit network interfaces) or resets (when a device
-restarts).
-
-Those metrics are fetched from "targets", which are simply HTTP URLs,
-added to the Prometheus configuration file. Targets can also be
-automatically added through various [discovery mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/) like DNS
-that allows having a single `A` or `SRV` that lists all the hosts to
-monitor; or Kubernetes or cloud providers APIs that list all
-containers or virtual machines to monitor. Discovery works in realtime
-and can also add metadata (e.g. IP address found or server state),
-which is useful for dynamic environments such as Kubernetes or
-containers orchestration in general.
+gauges (e.g. temperature or fan sensors), or [histograms](https://prometheus.io/docs/practices/histograms/). The
+latter allow for example [95th percentiles](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, which is
+something that has been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is
+essential to billing networking customers. Another popular use for
+historgrams maintaining an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that N
+requests are answered in X time. The various metrics types are
+carefully analyzed before being stored to correctly handle conditions
+like overflow (which occur surprisingly often on gigabit network
+interfaces) or resets (when a device restarts).
+
+Those metrics are fetched from "targets", which are simply HTTP
+endpoints, added to the Prometheus configuration file. Targets can
+also be automatically added through various [discovery mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/)
+like DNS that allows having a single `A` or `SRV` that lists all the
+hosts to monitor; or Kubernetes or cloud providers APIs that list all
+containers or virtual machines to monitor. Discovery works in
+realtime, so it will correctly pickup changes in DNS, for example. It
+can also add metadata (e.g. IP address found or server state), which
+is useful for dynamic environments such as Kubernetes or containers
+orchestration in general.
 
 Once collected, metrics can be queried through the web interface,
 using a custom language called PromQL. For example, a query showing
-the average latency, per minute, for `/dev/sda` would be:
+the average latency, per minute, for `/dev/sda` would look like:
 
-    rate(node_disk_io_time_ms{device="sda",instance="curie:9100"}[1m])
+    rate(node_disk_io_time_ms{device="sda"}[1m])
 
-Notice the disk name and the "instance" name which identifies a
-service uniquely. The instance name is automatically added by
-Prometheus when it fetches the metric and the device name comes from
-the node exporter label. This query can also be plotted into a simple
-graph on the web interface:
+Notice the "device" label, which we use to restrict the search to a
+single disk. This query can also be plotted into a simple graph on the
+web interface:
 
-![my first prometheus graph](http://paste.anarc.at/snaps/snap-2018.01.11-16.10.16.png)
+![my first prometheus graph](https://paste.anarc.at/snaps/snap-2018.01.12-16.29.20.png)
 
 What is interesting here is not really the node exporter metrics
 themselves, as those are fairly standard in any monitoring solution.
@@ -96,43 +91,54 @@ follows a similar pattern, but speaks its own text protocol on top of
 TCP, which means it is harder to implement for web apps and diagnose
 with a web browser.
 
-{https://prometheus.io/docs/visualization/grafana/}
-
 However, coming from the world of Munin, where all sorts of graphics
-just magically appear out of the box, this first experience was a
-disappointement: everything is built by hand and ephemeral. Doing one
-thing and one thing well makes sense, however and Prometheus
-deployments often rely on Grafana or other graphing engines to render
-the results. A set of graphs using the Grafana [Node Exporter Server
-Metrics](https://grafana.com/dashboards/405) dashboard looks something like this for the same dataset:
+just magically appear out of the box, this first experience can be a
+bit of a disappointement: everything is built by hand and
+ephemeral. While there are ways to add custom graphs to the Prometheus
+web interface using Go-based [console templates](https://prometheus.io/docs/visualization/consoles/), most Prometheus
+deployments generally [use Grafana](https://prometheus.io/docs/visualization/grafana/) to render the results using
+custom-built dashboards. This gives much better results, and allows to
+graph multiple machines separately, using the [Node Exporter Server
+Metrics](https://grafana.com/dashboards/405) dashboard:
 
-![A Grafana dashboard showing metrics from 4 servers](http://paste.anarc.at/snaps/snap-2018.01.11-16.16.14.png)
+![A Grafana dashboard showing metrics from 2 servers](https://paste.anarc.at/snaps/snap-2018.01.12-16.30.40.png)
 
 All this work took roughly an hour of configuration, which is pretty
 good for a first try. Things get tougher when extending those basic
 metrics: because of the system's modularity, it is difficult to add
-new metrics to existing dashboards. For example, your web or mail
-servers are not monitored by the node exporter by default. So
-monitoring a web server involves installing an [Apache-specific
-exporter](https://github.com/Lusitaniae/apache_exporter) which needs to be added to the Prometheus
-configuration. But it won't show up automatically in the above
-dashboard, because that's a "node exporter" dashboard, not an *Apache*
-dashboard. So you need a [separate dashboard](https://grafana.com/dashboards/3894) for that. This is all
-work that's done automatically in Munin without any hand-holding.
-
-Apache is an easy one: monitoring a Postfix server, for example,
-requires installing program like [mtail](https://github.com/google/mtail/) that parses the Postfix
-logfiles to expose meaningful metrics to Prometheus. At the time of
-writing, I couldn't find a nice dashboard that would show and analyse
-those logs. To graph metrics from the [postfix mtail plugin](https://github.com/google/mtail/blob/master/examples/postfix.mtail), a
-formula need to be crafted for each graph, created by hand in Grafana,
-which involves way too much clicking around a web browser. There are
-tools like [Grafanalib](https://github.com/weaveworks/grafanalib) to programmatically create dashboards, but
-those also involve a lot of boilerplate. When building a custom
-application, however, creating graphs may actually be an fun and
-distracting task that some may enjoy. The Grafana/Prometheus design is
-certainly enticing and enables powerful abstractions that are not
-easily done with other monitoring systems.
+new metrics to existing dashboards. For example, web or mail servers
+are not monitored by the node exporter. So monitoring a web server
+involves installing an [Apache-specific exporter](https://github.com/Lusitaniae/apache_exporter) which needs to be
+added to the Prometheus configuration. But it won't show up
+automatically in the above dashboard, because that's a "node exporter"
+dashboard, not an *Apache* dashboard. So you need a [separate
+dashboard](https://grafana.com/dashboards/3894) for that. This is all work that's done automatically in

(Diff truncated)
first prom draft
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
index 140482b5..52949567 100644
--- a/blog/prometheus.mdwn
+++ b/blog/prometheus.mdwn
@@ -1,141 +1,362 @@
-
-features optimizations for Kubernetes using a new storage engine. It
-seems like the 1.x performance was strained with short-lived
-containers: because the metrics could change very quickly, this lead
-to performance issues. The new release boasts hundred-fold I/O
-performance improvements and three-fold improvements in CPU usage.
-
-
-only part of the slides:
-
-https://schd.ws/hosted_files/kccncna17/c4/KubeCon%20P8s%20Salon%20-%20Kubernetes%20Metrics%20Deep%20Dive.pdf
-
-https://kccncna17.sched.com/event/Cs4d/prometheus-salon-hosted-by-frederic-branczyk-coreos-bob-cotton-freshtracksio-goutham-veeramanchaneni-tom-wilkie-kausal
-
-Frederic Brancyk from CoreOS
-
-bborgmon to borg , prom to k8s
-
-k8s and prom didn't know about each other
-
-cars_total
-
-cars_total{color="white"} <= dimension
-
-we can observe from multiple locations
-
-over time, we can make queries to make sense of it
-
-congested traffic? rate(sum(cars_total)[5m]) < 10 
-flowing traffic? rate(sum(cars_total)[5m]) > 10 
-
-sum because of multiple dimensions
-
-collects and stores data
-
-instrumented applications, e.g. HTTP GET /metrics
-
-every 15s (configurable interval)
-
-T0 = 0
-T1 = 2
-
-
-a target is:
-
-* HTTP://example.com/metrics
-* service discovery:
-  * static targets
-  * DNS
-  * k8s
-  * and more
-
-k8s discv
-
-targets:
- * pods = our workload . list all pods. great if you're strict about
-   organization. not always the case
- * nodes
- * endpoints / services - grouped pods are by service and prom can
-   find this
- * powerful API to subscribe to events to automatically reconfigure it
-   in realtime
-
-k8s metadata can be added when feeding into prom: e.g. add service/pod
-information to have different severity
-
-alerting rules
-
-alert codes loaded by the server. every interval alerts are fired
-
-there's an alert manager that receives all alerts from multiple
-servers. deduplicates multiple entries and regroups them (so you don't
-get an announce every 15s) and sends the alerts. has a gossip protocol
-to be HA without other components
-
-name is a meta-label
-unique combination of labels uniquely identifies a time series
-
-if our app creates a new label, it will explode the cardinality of
-metrics. don't put variable data in metrics. like creating a new table
-for each query or user
-
-metric types
-
-* counter = requests
-* gauge = mem usage or temperature
-* histogram = latency distribution... ..?
-
-prom knows about counter resets (at collect time) and will keep a
-straight line instead of reseting. well thought through
-
-photo with the guy's name.
+Monitoring with Prometheus 2.0
+==============================
+
+Prometheus is a fairly new monitoring tool that was built from the
+ground up at SoundCloud. Prometheus works by pulling metrics from
+monitored hosts and storing them in a time series database (TSDB). It
+has a powerful query language to inspect that database, create alerts,
+and plot basic graphs. Those graphs can then be used to detect
+anomalies or figure out trends to establish server
+provisionning. Prometheus also has extensive service discovery
+features and supports high availability configurations.
+
+That's what the brochure says anyway; let's see how it works in the
+hands of your favorite old grumpy system administrator. I'll be drawing
+comparisons with Munin and Nagios frequently because those are the
+tools I have used for over a decade to monitor clusters of machines,
+relatively effectively. Readers already familiar with
+Prometheus can skip ahead to the last two sections to see what's new
+in Prometheus 2.0.
+
+Monitoring with Prometheus and Grafana
+--------------------------------------
+
+{should i show how to install prometheus? fairly trivial - go get or
+apt-get install works, basically}
+
+What distinguishes Prometheus from other solutions is the relative
+simplicity of its design: for example, metrics are exposed over HTTP
+using a special URL (`/metrics`) and a simple text format. Here is the
+latency of the disks on my local workstation:
+
+    $ curl -s http://curie:9100/metrics | grep node_disk_io_time_ms
+    # HELP node_disk_io_time_ms Total Milliseconds spent doing I/Os.
+    # TYPE node_disk_io_time_ms counter
+    node_disk_io_time_ms{device="dm-0"} 466884
+    node_disk_io_time_ms{device="dm-1"} 133776
+    node_disk_io_time_ms{device="dm-2"} 80
+    node_disk_io_time_ms{device="dm-3"} 349936
+    node_disk_io_time_ms{device="sda"} 446028
+
+{should i use the more traditional bandwidth example instead? talk
+used the "cars driving by" analogy which i didn't find so useful. i
+deliberately picked a tricky example that's not commonly available,
+but it might be too confusing}
+
+In the above, the metric is named `node_disk_io_time_ms`; each sample
+has a single label/value pair attached (e.g. `device="sda"`) followed
+by the value of the metric itself (`446028`), a counter summing the
+time the disk has spent doing I/O, which I'll (somewhat wrongly)
+summarize as disk latency. This is only one of hundreds of metrics
+(usage of CPU, memory, disk, temperature and so on) exposed by the
+"node exporter", a simple program designed to run on monitored
+hosts. Metrics can be counters (e.g. per-interface packet counts),
+gauges (e.g. temperature sensor), or [histograms](https://prometheus.io/docs/practices/histograms/) (a more complex
+metric that samples observations into buckets). Histograms allow, for
+example, [95th percentile](https://en.wikipedia.org/wiki/Burstable_billing#95th_percentile) analysis, which is something that has
+been [missing from Munin forever](http://munin-monitoring.org/ticket/443) and is essential for billing
+networking customers. Another popular use of histograms is computing
+an [Apdex score](https://en.wikipedia.org/wiki/Apdex), to make sure that a given proportion of requests is
+answered within a target time. The various metric types are carefully
+analyzed before being stored to handle conditions like overflow (which
+occurs surprisingly often on gigabit network interfaces) or resets
+(when a device restarts).
+
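+For illustration, here is a minimal sketch of how an application could
+declare one metric of each type with the Python [client
+library](https://github.com/prometheus/client_python); the metric names
+and label values here are made up for the example:
+
+    from prometheus_client import Counter, Gauge, Histogram
+
+    # counter: only ever goes up, rates are computed at query time
+    REQUESTS = Counter('myapp_requests_total', 'HTTP requests served',
+                       ['method'])
+    # gauge: a value that can go up and down
+    TEMPERATURE = Gauge('myapp_room_temperature_celsius',
+                        'Ambient temperature')
+    # histogram: samples observations into configurable buckets
+    LATENCY = Histogram('myapp_request_latency_seconds',
+                        'Request latency')
+
+    REQUESTS.labels(method='GET').inc()
+    TEMPERATURE.set(21.5)
+    LATENCY.observe(0.042)
+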
+Those metrics are fetched from "targets", which are simply HTTP URLs,
+added to the Prometheus configuration file. Targets can also be
+automatically added through various [discovery mechanisms](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/), like DNS,
+which allows a single `A` or `SRV` record to list all the hosts to
+monitor, or the Kubernetes and cloud-provider APIs, which list all
+containers or virtual machines to monitor. Discovery works in real time
+and can also attach metadata (e.g. the IP address found or the server
+state), which is useful in dynamic environments such as Kubernetes and
+container orchestration in general.
+
+Once collected, metrics can be queried through the web interface,
+using a custom language called PromQL. For example, a query showing
+the average latency, per minute, for `/dev/sda` would be:
+
+    rate(node_disk_io_time_ms{device="sda",instance="curie:9100"}[1m])
+
+Notice the disk name and the "instance" label, which uniquely
+identifies a scraped service. The instance label is automatically
+added by Prometheus when it fetches the metric, while the device name
+comes from a label set by the node exporter. This query can also be
+plotted as a simple graph in the web interface:
+
+![my first prometheus graph](http://paste.anarc.at/snaps/snap-2018.01.11-16.10.16.png)
+
+What is interesting here is not really the node exporter metrics
+themselves, as those are fairly standard in any monitoring solution.
+But in Prometheus, any (web) application can easily expose its own
+internal metrics to the monitoring server through regular HTTP,
+whereas other systems would require special plugins, both on the
+monitoring server and on the application side. Note that Munin also
+follows a similar pattern, but speaks its own text protocol on top of
+TCP, which makes it harder to implement in web apps and to diagnose
+with a web browser.
+
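+Exposing those metrics over HTTP requires surprisingly little
+plumbing. Again as a sketch with the same Python client library (which
+takes care of the text format and the HTTP listener), a complete toy
+exporter looks something like this:
+
+    import random, time
+    from prometheus_client import Counter, start_http_server
+
+    WIDGETS = Counter('myapp_widgets_total', 'Widgets processed')
+
+    if __name__ == '__main__':
+        # expose the metrics over HTTP on port 8000
+        start_http_server(8000)
+        while True:
+            WIDGETS.inc()          # do some "work"
+            time.sleep(random.random())
+
+Point a Prometheus server at port 8000 of that host and
+`myapp_widgets_total` shows up in the web interface just like the node
+exporter metrics above.
+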
+{https://prometheus.io/docs/visualization/grafana/}
+
+However, coming from the world of Munin, where all sorts of graphs
+just magically appear out of the box, this first experience was a
+disappointment: everything is built by hand and ephemeral. Doing one

(Diff truncated)
typo
diff --git a/blog/2017-12-20-demystifying-container-runtimes.mdwn b/blog/2017-12-20-demystifying-container-runtimes.mdwn
index eeffa94f..942a434d 100644
--- a/blog/2017-12-20-demystifying-container-runtimes.mdwn
+++ b/blog/2017-12-20-demystifying-container-runtimes.mdwn
@@ -7,7 +7,7 @@
 >
 >  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
 >  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
->  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]] this article)
+>  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]] (this article)
 
 As we briefly mentioned in our [[overview article|2017-12-13-kubecon-overview]] about KubeCon +
 CloudNativeCon, there are multiple container "runtimes", which are

cross-ref kubecon articles
diff --git a/blog/2017-12-13-kubecon-overview.mdwn b/blog/2017-12-13-kubecon-overview.mdwn
index d62223e9..4c9623d4 100644
--- a/blog/2017-12-13-kubecon-overview.mdwn
+++ b/blog/2017-12-13-kubecon-overview.mdwn
@@ -2,6 +2,13 @@
 [[!meta date="2017-12-13T12:00:00-0500"]]
 [[!meta updated="2017-12-29T13:22:53-0500"]]
 
+> This is one part of my coverage of KubeCon Austin 2017. Other
+> articles include:
+>
+>  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]] (this article)
+>  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
+>  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+
 The [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF)
 held its conference, [KubeCon +
 CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america),
diff --git a/blog/2017-12-20-demystifying-container-runtimes.mdwn b/blog/2017-12-20-demystifying-container-runtimes.mdwn
index 4749d886..eeffa94f 100644
--- a/blog/2017-12-20-demystifying-container-runtimes.mdwn
+++ b/blog/2017-12-20-demystifying-container-runtimes.mdwn
@@ -2,8 +2,14 @@
 [[!meta date="2017-12-20T12:00:00-0500"]]
 [[!meta updated="2018-01-05T11:11:55-0500"]]
 
-As we briefly mentioned in our [overview
-article](https://lwn.net/Articles/741301/) about KubeCon +
+> This is one part of my coverage of KubeCon Austin 2017. Other
+> articles include:
+>
+>  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
+>  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]]
+>  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]] this article)
+
+As we briefly mentioned in our [[overview article|2017-12-13-kubecon-overview]] about KubeCon +
 CloudNativeCon, there are multiple container "runtimes", which are
 programs that can create and execute containers that are typically
 fetched from online images. That space is slowly reaching maturity both
diff --git a/blog/2017-12-20-docker-without-docker.mdwn b/blog/2017-12-20-docker-without-docker.mdwn
index 126ba963..576bd8d9 100644
--- a/blog/2017-12-20-docker-without-docker.mdwn
+++ b/blog/2017-12-20-docker-without-docker.mdwn
@@ -2,6 +2,13 @@
 [[!meta date="2017-12-20T12:00:00-0500"]]
 [[!meta updated="2018-01-05T11:11:55-0500"]]
 
+> This is one part of my coverage of KubeCon Austin 2017. Other
+> articles include:
+>
+>  * [[An overview of KubeCon + CloudNativeCon|2017-12-13-kubecon-overview]]
+>  * [[Docker without Docker at Red Hat|2017-12-20-docker-without-docker]] (this article)
+>  * [[Demystifying Container Runtimes|2017-12-20-demystifying-container-runtimes]]
+
 The Docker (now [Moby](https://mobyproject.org/)) project has done a lot
 to popularize containers in recent years. Along the way, though, it has
 generated concerns about its concentration of functionality into a
@@ -18,7 +25,7 @@ philosophy.
 The quest to modularize Docker
 ------------------------------
 
-As we saw in an [earlier article](https://lwn.net/Articles/741897/), the
+As we saw in an [[earlier article|2017-12-20-demystifying-container-runtimes]], the
 basic set of container operations is not that complicated: you need to
 pull a container image, create a container from the image, and start it.
 On top of that, you need to be able to build images and push them to a

fix broken links, again
diff --git a/blog/2017-12-20-demystifying-container-runtimes.mdwn b/blog/2017-12-20-demystifying-container-runtimes.mdwn
index 2c78540b..4749d886 100644
--- a/blog/2017-12-20-demystifying-container-runtimes.mdwn
+++ b/blog/2017-12-20-demystifying-container-runtimes.mdwn
@@ -154,7 +154,7 @@ CRI-O: the minimal runtime
 
 Seeing those new standards, some Red Hat folks figured they could make a
 simpler runtime that would *only* do what Kubernetes needed. That
-"skunkworks" project was eventually called [CRI-O](http://cri-o.io/%20)
+"skunkworks" project was eventually called [CRI-O](http://cri-o.io/)
 and implements a minimal CRI interface. During a [talk at KubeCon Austin
 2017](https://kccncna17.sched.com/event/CU6T/cri-o-all-the-runtime-kubernetes-needs-and-nothing-more-mrunal-patel-red-hat),
 Walsh explained that "CRI-O is designed to be simpler than other
@@ -233,7 +233,7 @@ straightforward: just a single environment variable changes the runtime
 socket, which is what Kubernetes uses to communicate with the runtime.
 
 CRI-O 1.0 was [released in October
-2017](https://www.redhat.com/en/blog/introducing-cri-o-10%20) with
+2017](https://www.redhat.com/en/blog/introducing-cri-o-10) with
 support for Kubernetes 1.7. Since then, CRI-O 1.8 and 1.9 were released
 to follow the Kubernetes 1.8 and 1.9 releases (and sync version
 numbers). Patel considers CRI-O to be production-ready and it is already
diff --git a/blog/2017-12-20-docker-without-docker.mdwn b/blog/2017-12-20-docker-without-docker.mdwn
index d22d801c..126ba963 100644
--- a/blog/2017-12-20-docker-without-docker.mdwn
+++ b/blog/2017-12-20-docker-without-docker.mdwn
@@ -202,7 +202,7 @@ provide a good process for working with local containers, they don't
 cover remote registries, which allow developers to actively collaborate
 on application packaging. Registries are also an essential part of a
 continuous-deployment framework. This is where the
-[skopeo](https://github.com/projectatomic/skopeo%20) project comes in.
+[skopeo](https://github.com/projectatomic/skopeo) project comes in.
 Skopeo is another Atomic project that "performs various operations on
 container images and image repositories", according to the `README`
 file. It was originally designed to inspect the contents of container

fix dates
diff --git a/blog/2017-12-13-kubecon-overview.mdwn b/blog/2017-12-13-kubecon-overview.mdwn
index e7cef9e1..d62223e9 100644
--- a/blog/2017-12-13-kubecon-overview.mdwn
+++ b/blog/2017-12-13-kubecon-overview.mdwn
@@ -1,4 +1,6 @@
 [[!meta title="An overview of KubeCon + CloudNativeCon"]]
+[[!meta date="2017-12-13T12:00:00-0500"]]
+[[!meta updated="2017-12-29T13:22:53-0500"]]
 
 The [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF)
 held its conference, [KubeCon +

publish runtime articles
commit 4bee1d702b1f9e7898785e23a2af719e0f78c07d
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Jan 5 12:38:36 2018 -0500
prepare publication of runtimes articles
commit ef004eeb2ec8aa9cd219ffe66ef5c25f161ff450
Author: Antoine Beaupré <anarcat@debian.org>
Date: Thu Dec 21 11:43:57 2017 -0500
final reviews on runtimes
commit 7aa493fc8108c41d656d6272105839de95a9105a
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 16:42:50 2017 -0500
small typo
commit 7db17ed2b5b6733c2087dabc70f5017febecbcce
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 16:42:26 2017 -0500
more fixes from lwn
commit b43764c09f811a50c3650de695e1739e5bc5724f
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 16:30:14 2017 -0500
remove controversial quote
commit 99e230413e855f9caaa9e25580f4a83c6b7cc105
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 16:18:11 2017 -0500
DwoD review
commit 455043bbd95eeb4332f919acf6246cdbbfd53c76
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 15:52:30 2017 -0500
pull in changes from lwn again
commit d5ba5ea741c8ccaa54d2bd5f112f0d54c13dd819
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 15:40:12 2017 -0500
remove yous
commit f4e7542631ebbcbe21ede0eaed66d4f6a564a789
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 15:33:34 2017 -0500
more lwn changes
commit 004dc5388130d28ba65758a746e5b5e2a1417e1d
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 11:24:48 2017 -0500
runtimes in progress
commit 41617fe7cb12c2ae920eb056bfb578cebc9d1c1c
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 10:44:12 2017 -0500
clarify build issue
commit 341741474adb0bfd4822576fdf42239b68a0dfe3
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 10:43:59 2017 -0500
jon lead rewrite with touchups
commit 8b896957c163a58472d1e54a76ba928f781bc3cb
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 10:19:54 2017 -0500
minor changes from LWN
commit bfd87252739ac81dfd0e8e614b7052c83346a60c
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 10:19:44 2017 -0500
fixup title
commit 1535b8edee26f65cfbaa481ff4c78e582bba5a47
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 20 10:18:07 2017 -0500
small fix
commit 656eef11e1a10646c614c2915cd8ff670a76be10
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 21:16:17 2017 -0500
review after jakes comments
commit b75ab4d56cdb19ab88c1aff72ab393b0ab8e1a7c
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 20:34:09 2017 -0500
more fixes
commit 8656121a4cf292eb67d8a92d01bec8c61af058e0
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 20:27:08 2017 -0500
docker article in progress, merge with local changes
commit 6edbeb574d3fdd107c99cfcb926608545b896aae
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 13:22:30 2017 -0500
add answers from coreos and docker
commit 400b53470eb55cca5b9ecb68be6bec4b91939014
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 13:16:45 2017 -0500
docker fact-check
commit 833747f0d9c8f1a1537c62c4c569e68899cf65b4
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 11:13:40 2017 -0500
small changes in docker article
commit 4c15d3dc5858845a1b38e70926a42992a55fd2e2
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 11:12:41 2017 -0500
yet another review of runtimes
commit 0abe3391c4cb44599c8c65216319c9b98e1a840d
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 19 09:46:12 2017 -0500
changes requested by jake
commit a4d7be12b797d6f90678b422a4e0465b872107d7
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 16:20:27 2017 -0500
expand coreOS with quotes from philips
commit af1788c4cc8a08ed9957d9f8b35b5ac9e0440793
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 15:40:53 2017 -0500
rewrite docker article after split off
commit b7413ea0acaa69c532b8d5e393a88a71e09d21c4
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 15:40:35 2017 -0500
another review
commit 4993a0dd4daa248ea332d517a8f65469c148e391
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 15:38:33 2017 -0500
rewrite lead
commit 1f39a35b6d7cb318985cea5082672c22967923db
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 15:38:23 2017 -0500
add photos
commit d865098b47385200a14a3962175a6944f9865073
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 14:45:36 2017 -0500
yet another review
commit 593fb92ce4f6b04bfc8d4aa5edbe6e0584116bec
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 14:45:16 2017 -0500
move notes down
commit d55f24b5fb49b2fb1d0ac443a287242f2ee16cb8
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 14:08:28 2017 -0500
try to finalize a first draft again
commit 48ef0d2a49404e259a150474b988b7357fa44bcd
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 12:53:14 2017 -0500
confirm some quotes, rephrase
commit f6449f7704ee023ffa6df47942f37156e895e616
Merge: d0bd44b 3c76f06
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 11:52:26 2017 -0500
Merge branch 'fin/kubecon-overview' into backlog/kubecon
commit 3c76f0615542783e2beab56a7d03593235ded917
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 18 11:51:20 2017 -0500
last changes from LWN, article online
commit d0bd44b66c1f12dea96d12babd2197fa6f7585d9
Author: Antoine Beaupré <anarcat@debian.org>
Date: Sat Dec 16 00:51:31 2017 -0500
rephrase quote after convo with batts
commit d96f8be807b330e22dcf40a535bbd4cb2dbfd55b
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Dec 15 16:48:43 2017 -0500
notes from chris
commit 0af1237f155fbbd1ec8857c01a9057a6e352e8f8
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Dec 15 16:46:18 2017 -0500
start warpping up runtimes article 3600 words
remove notes from vbatts and crio.mdwn, draft conclusions
commit 483171bf45e0ca644a9a14c0d9b83cdcbcff6313
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Dec 15 15:13:48 2017 -0500
extra note docker-w-d
commit 39892c551c711dd55da98ae6d0f13000bf0947ee
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Dec 15 15:13:33 2017 -0500
first draft cri-o containerd
commit c747657837a0b3e2f7418944a02e67ed44179d4f
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Dec 15 13:52:26 2017 -0500
create runtimes article, cleaning up crio notes
commit 8afdb92bd8c46d8181145e5ab4a8f34415f026de
Author: Antoine Beaupré <anarcat@debian.org>
Date: Fri Dec 15 11:11:13 2017 -0500
first pass sorting through crio notes, steal containers intro from docker
commit 072270ffb43533c5a89da0de2e55e8de2306193f
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 17:38:24 2017 -0500
finish rewrite/review of docker article
commit be8b40d154c31213e0fbadc218de6d49026ea2dc
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 13:36:30 2017 -0500
more fixes from lwn
commit af5cc6a62ca68cccbcc916e841b73d19f05fe5eb
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 13:36:30 2017 -0500
more fixes from lwn
commit aad453d275109837ddba1549df4580c0301e8dd1
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 12:55:18 2017 -0500
more goodies from lwn
commit 079a63348753b4a212dd31ba5e2dfee8a9f9cfc8
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 12:55:18 2017 -0500
more goodies from lwn
commit ba879d346055cc18ac45b3decbf3e33db642c4e2
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 12:54:06 2017 -0500
some cleanups from me and ng
commit 726c6b0c01a0bf30e2e6492208f004bc79f36268
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 12:54:06 2017 -0500
some cleanups from me and ng
commit 93e7b031206ea33499cd6472cc7b9251e625b9f3
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 12:02:33 2017 -0500
more changes from LWN
commit d0706e367c374c13aea55737c8af954769a09c02
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 12:02:33 2017 -0500
more changes from LWN
commit e53716ec4646fd6b5a955149b614efc54a708b47
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 11:55:46 2017 -0500
first version pushed to LWN, reviewed changes from jake
commit d4f151fa563d22d72f225b14e2a1f7153b7676e1
Author: Antoine Beaupré <anarcat@debian.org>
Date: Wed Dec 13 11:55:46 2017 -0500
first version pushed to LWN, reviewed changes from jake
commit c448ed1dddd84ef19bda59e8b801d0bc8d4d5ddd
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 17:08:17 2017 -0500
minor overview fixes, sent up to jake
commit 65907a86bb0e36caaccb499d02320b40594e26c3
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 17:08:17 2017 -0500
minor overview fixes, sent up to jake
commit 7ea3565056a09ca2bd6cdc5e97dbe25c5240052e
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 16:56:56 2017 -0500
attempt final draft rewrite of overview
commit e89295fa6c21f843237d2a5c26c7e3ea0e3e219c
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 16:56:56 2017 -0500
attempt final draft rewrite of overview
commit a23586b1adb7807303d403eef1c91df7f912a430
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 15:35:17 2017 -0500
rewrite pass on kubernetes-overview
commit de81eddac72c3f0654ef605c3db7fe6e71b5e3ae
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 15:35:17 2017 -0500
rewrite pass on kubernetes-overview
commit 464cb8526946c51d1ba798ec6e669cb1f3472482
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 12:50:33 2017 -0500
add possible pic
commit c9e1107c4723f05916f86da799f935ec710f4f66
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 12:50:33 2017 -0500
add possible pic
commit 7bcaea18feee389c779da5fd81401f047869ea7b
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 12:17:44 2017 -0500
overview: first jake review
commit 5ed0a35fdb9535a3d94ab0132813d1d62e9eb1db
Author: Antoine Beaupré <anarcat@debian.org>
Date: Tue Dec 12 12:17:44 2017 -0500
overview: first jake review
commit 863d5a22122278c18ef33317cab416604c2ef710
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 11 17:26:24 2017 -0500
docker-without-docker: partial review rewrite
commit b01463afbe579c61b8996131e59715cb6216808b
Author: Antoine Beaupré <anarcat@debian.org>
Date: Mon Dec 11 13:17:07 2017 -0500
docker-without-docker: finalize notes processing
commit 10f9b1ab5a60a382060aa2df36c05e9512570642
Author: Antoine Beaupré <anarcat@debian.org>
Date: Sun Dec 10 17:17:35 2017 -0500
more notes
commit 95782786a0eaf4676cfdbf574b3a2eb9f1feb811
Author: Antoine Beaupré <anarcat@debian.org>
Date: Sun Dec 10 17:17:35 2017 -0500
more notes
commit f9f93f0ee7149f3528611abeefc3373a12a2bb8a
Author: Antoine Beaupré <anarcat@debian.org>
Date: Sun Dec 10 17:17:21 2017 -0500
rephrase docker/pod analogy, thanks to @yomateo on slack
commit 78355b82599d6e0e8906d8d9f76ef514cd5531c6
Author: Antoine Beaupré <anarcat@debian.org>
Date: Sun Dec 10 12:14:03 2017 -0500
docker-without-docker: process more notes, add refs, move notes out of crio
commit f576faa5795f5e5f8e1b31d12bea2cef02fd7301
Author: Antoine Beaupré <anarcat@debian.org>
Date: Sat Dec 9 16:25:51 2017 -0500
first docker-without-docker draft
commit b825198874b539f038cbf56e0fb7636bf18bc6aa
Author: Antoine Beaupré <anarcat@debian.org>
Date: Thu Dec 7 13:37:43 2017 -0500
more notes
commit fe02359ae35e099d9ed7bdee58887f5ebe04792d
Author: Antoine Beaupré <anarcat@debian.org>
Date: Thu Dec 7 13:09:25 2017 -0500
finalize a first draft for overview
commit 566095fe22a4c51672a10a34a9cc988903af3545
Author: Antoine Beaupré <anarcat@debian.org>
Date: Thu Dec 7 11:43:38 2017 -0500
move draft out of notes
diff --git a/blog/2017-12-20-demystifying-container-runtimes.mdwn b/blog/2017-12-20-demystifying-container-runtimes.mdwn
new file mode 100644
index 00000000..2c78540b
--- /dev/null
+++ b/blog/2017-12-20-demystifying-container-runtimes.mdwn
@@ -0,0 +1,435 @@
+[[!meta title="Demystifying container runtimes"]]
+[[!meta date="2017-12-20T12:00:00-0500"]]
+[[!meta updated="2018-01-05T11:11:55-0500"]]
+
+As we briefly mentioned in our [overview
+article](https://lwn.net/Articles/741301/) about KubeCon +
+CloudNativeCon, there are multiple container "runtimes", which are
+programs that can create and execute containers that are typically
+fetched from online images. That space is slowly reaching maturity both
+in terms of standards and implementation: Docker's containerd 1.0 was
+released during KubeCon, CRI-O 1.0 was released a few months ago, and
+rkt is also still in the game. With all of those runtimes, it may be a
+confusing time for those looking at deploying their own container-based
+system or [Kubernetes](https://kubernetes.io/) cluster from scratch.
+This article will try to explain what container runtimes are, what they
+do, how they compare with each other, and how to choose the right one.
+It also provides a primer on container specifications and standards.
+
+What is a container?
+--------------------
+
+Before we go further in looking at specific runtimes, let's see what
+containers actually are. Here is basically what happens when a container
+is launched:
+
+1.  A container is created from a container image. Images are tarballs
+    with a JSON configuration file attached. Images are often nested:
+    for example this [Libresonic
+    image](https://github.com/tonipes/libresonic-docker) is built on top
+    of a [Tomcat image](https://hub.docker.com/_/tomcat/) that
+    depends (eventually) on a base [Debian
+    image](https://hub.docker.com/_/debian/). This allows for content
+    deduplication because that Debian image (or any intermediate step)
+    may be the basis for other containers. A container image is
+    typically created with a command like `docker build`.
+2.  If necessary, the runtime downloads the image from somewhere,
+    usually some "container registry" that exposes the metadata and the
+    files for download over a simple HTTP-based protocol. It used to be
+    only [Docker Hub](https://hub.docker.com/), but now everyone has
+    their own registry: for example, Red Hat has one for its [OpenShift
+    project](https://openshift.io/), Microsoft has one for
+    [Azure](https://azure.microsoft.com/en-ca/services/container-registry/),
+    and [GitLab](https://gitlab.com/) has [one for its continuous
+    integration
+    platform](https://gitlab.com/help/user/project/container_registry).
+    A registry is, for example, the server that `docker pull` or
+    `docker push` talks to; a rough sketch of that exchange follows
+    this list.
+3.  The runtime extracts that layered image onto a copy-on-write
+    (CoW) filesystem. This is usually done using an overlay filesystem,
+    where all the container layers overlay each other to create a
+    merged filesystem. This step is not generally directly accessible
+    from the command line but happens behind the scenes when the runtime
+    creates a container.
+4.  Finally, the runtime actually executes the container, which means
+    telling the kernel to assign resource limits, create isolation
+    layers (for processes, networking, and filesystems), and so on,
+    using a cocktail of mechanisms like control groups (cgroups),
+    namespaces, capabilities, seccomp, AppArmor, SELinux, and whatnot.
+    For Docker, `docker run` is what creates and runs the container, but
+    underneath it actually calls the
+    [`runc`](https://github.com/opencontainers/runc) command.
+
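+To make the registry exchange in step 2 more concrete, here is a rough
+sketch of fetching an image manifest by hand in Python. It assumes
+Docker Hub's token endpoint and the v2 manifest API, which are
+conventions of that particular registry rather than anything defined
+here:
+
+    import requests
+
+    image, tag = "library/debian", "latest"
+
+    # get a pull token for that repository (Docker Hub specific)
+    token = requests.get(
+        "https://auth.docker.io/token",
+        params={"service": "registry.docker.io",
+                "scope": "repository:%s:pull" % image},
+    ).json()["token"]
+
+    # fetch the image manifest, which lists the layer tarballs
+    manifest = requests.get(
+        "https://registry-1.docker.io/v2/%s/manifests/%s" % (image, tag),
+        headers={"Authorization": "Bearer " + token,
+                 "Accept": "application/vnd.docker.distribution.manifest.v2+json"},
+    ).json()
+
+    # schema2 manifests list the layer blobs; multi-arch images may
+    # return a manifest list instead of "layers"
+    for layer in manifest.get("layers", []):
+        print(layer["digest"], layer["size"])
+
+Each layer digest listed in the manifest can then be downloaded from
+`/v2/<name>/blobs/<digest>` and unpacked in order, which is essentially
+what step 3 above automates.
+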
+Those concepts were first elaborated in Docker's [Standard Container
+manifesto](https://github.com/moby/moby/commit/0db56e6c519b19ec16c6fbd12e3cee7dfa6018c5)
+which was eventually removed from Docker, but other standardization
+efforts followed. The [Open Container
+Initiative](https://www.opencontainers.org/) (OCI) now specifies most of
+the above under a few specifications:
+
+-   the [Image
+    Specification](https://github.com/opencontainers/image-spec) (often
+    referred to as "OCI 1.0 images") which defines the content of
+    container images
+-   the [Runtime
+    Specification](https://github.com/opencontainers/runtime-spec)
+    (often referred to as the OCI runtime specification, not to be
+    confused with Kubernetes's Container Runtime Interface or CRI)
+    describes the "configuration, execution environment, and lifecycle
+    of a container"
+-   the [Container Network
+    Interface](https://github.com/containernetworking/cni) (CNI)
+    specifies how to configure network interfaces inside containers,
+    though it was standardized under the [Cloud Native Computing
+    Foundation](https://www.cncf.io/) (CNCF) umbrella, not the OCI
+
+Implementation of those standards varies among the different projects.
+For example, Docker is generally compatible with the standards except
+for the image format. Docker has its own image format that predates
+standardization and it has promised to convert to the new specification
+soon. Implementation of the runtime interface also differs as not
+everything Docker does is standardized, as we shall see.
+
+The Docker and rkt story
+------------------------
+
+Since Docker was the first to popularize containers, it seems fair to
+start there. Originally, Docker used [LXC](https://linuxcontainers.org/)
+but its isolation layers were incomplete, so Docker wrote
+[libcontainer](https://github.com/docker/libcontainer), which eventually
+became `runc`. Container popularity exploded and Docker became the de
+facto standard to deploy containers. When it came out in 2014,
+Kubernetes naturally used Docker, as Docker was the only runtime
+available at the time. But Docker is an ambitious company and kept on
+developing new features on its own. [Docker
+Compose](https://docs.docker.com/compose/), for example, reached 1.0 at
+the same time as Kubernetes and there is some overlap between the two
+projects. While there are ways to make the two tools interoperate using
+tools such as [Kompose](http://kompose.io), Docker is often seen as a
+big project doing too many things. This situation led CoreOS to
+[release](https://coreos.com/blog/rocket.html) a simpler, standalone
+runtime in the form of rkt, that was explained this way:
+
+> Docker now is building tools for launching cloud servers, systems for
+> clustering, and a wide range of functions: building images, running
+> images, uploading, downloading, and eventually even overlay
+> networking, all compiled into one monolithic binary running primarily
+> as root on your server. The standard container manifesto was removed.
+> We should stop talking about Docker containers, and start talking
+> about the Docker Platform. It is not becoming the simple composable
+> building block we had envisioned.
+
+One of the innovations of rkt was to standardize image formats through
+the appc specification, something we [covered back in
+2015](https://lwn.net/Articles/631630/). CoreOS doesn't yet have a fully
+standard implementation of the runtime interfaces: at the time of
+writing, rkt's Kubernetes compatibility layer
+([`rktlet`](https://github.com/kubernetes-incubator/rktlet)) doesn't
+pass all of the Kubernetes integration tests and is still under development.
+Indeed, according to Brandon Philips, CTO of CoreOS, in an email
+exchange:
+
+> rkt has initial support for OCI image-spec, but it is incomplete in
+> places. OCI support is less important at the moment as the support for
+> OCI is still emerging in container registries and is notably absent
+> from Kubernetes. OCI runtime-spec is not used, consumed, nor handled
+> by rkt. This is because rkt execution is based on pod semantics, while
+> runtime-spec only covers single container execution.
+
+However, according to Dan Walsh, head of the container team at Red Hat,
+in an email interview, CoreOS's efforts were vital to the
+standardization of the container space within the Kubernetes ecosystem:
+"Without CoreOS we probably would not have CNI, and CRI and would be
+still fighting about OCI. The CoreOS effort is under-appreciated in the
+market." Indeed, according to Philips, the "CNI project and
+specifications originated from rkt, and were later spun off and moved to
+CNCF. CNI is still widely used by rkt today, both internally and for
+user-provided configuration." At this point, however, CoreOS seems to be
+gearing up toward building its Kubernetes platform
+([Tectonic](https://coreos.com/tectonic/)) and image distribution
+service ([Quay](https://coreos.com/quay-enterprise/)) rather than
+competing in the runtime layer.
+
+CRI-O: the minimal runtime
+--------------------------
+
+Seeing those new standards, some Red Hat folks figured they could make a
+simpler runtime that would *only* do what Kubernetes needed. That
+"skunkworks" project was eventually called [CRI-O](http://cri-o.io/%20)
+and implements a minimal CRI interface. During a [talk at KubeCon Austin
+2017](https://kccncna17.sched.com/event/CU6T/cri-o-all-the-runtime-kubernetes-needs-and-nothing-more-mrunal-patel-red-hat),
+Walsh explained that "CRI-O is designed to be simpler than other
+solutions, following the Unix philosophy of doing one thing and doing it
+well, with reusable components."
+
+Started in late 2016 by Red Hat for its OpenShift platform, the project
+also benefits from support by Intel and SUSE, according to Mrunal Patel,
+lead CRI-O developer at Red Hat who hosted the talk. CRI-O is compatible
+with the CRI (runtime) specification and the OCI and Docker image
+formats. It can also verify image GPG signatures. It uses the [CNI
+package](https://github.com/containernetworking/cni) for networking and
+supports CNI plugins, which OpenShift uses for its software-defined
+networking layer. It supports multiple CoW filesystems, like the
+commonly used overlay and aufs, but also the less common Btrfs.
+
+One of CRI-O's most notable features, however, is that it supports mixed
+workloads between "trusted" and "untrusted" containers. For example,
+CRI-O can use [Clear Containers](https://clearlinux.org/containers) for
+stronger isolation promises, which is useful in multi-tenant
+configurations or to run untrusted code. It is currently unclear how
+that functionality will trickle up into Kubernetes, which currently
+considers all backends to be the same.
+
+CRI-O has an interesting architecture (see the diagram below from the
+talk [slides
+\[PDF\]](https://schd.ws/hosted_files/kccncna17/e8/CRI-O-Kubecon-2017.pdf)).
+It reuses basic components like `runc` to start containers, and software
+libraries like [containers/image](https://github.com/containers/image/)
+and [containers/storage](https://github.com/containers/storage/),
+created for the [skopeo](https://github.com/projectatomic/skopeo)
+project, to pull container images and create container filesystems. A
+separate library called
+[oci-runtime-tool](https://github.com/opencontainers/runtime-tools)
+prepares the container configuration. CRI-O introduces a new daemon to
+handle containers called
+[`conmon`](https://github.com/kubernetes-incubator/cri-o/tree/master/conmon).

(Diff truncated)
switch to sigal for photo gallery
diff --git a/blog/2012-12-15-retour-la-photo-nouvelle-galerie.mdwn b/blog/2012-12-15-retour-la-photo-nouvelle-galerie.mdwn
index f0b431c8..1872f105 100644
--- a/blog/2012-12-15-retour-la-photo-nouvelle-galerie.mdwn
+++ b/blog/2012-12-15-retour-la-photo-nouvelle-galerie.mdwn
@@ -5,37 +5,44 @@
 [[!meta guid="188 at http://anarcat.koumbit.org"]]
 
 <!--break-->
-[<img src="http://photos.orangeseeds.org/cache/plein_air-gasp%C3%A9sie-rivi%C3%A8re_aux_%C3%A9meraudes.jpg_150s.jpg" align="right" />](http://photos.orangeseeds.org/#!/plein_air-gaspésie/rivière_aux_émeraudes.jpg)Je suis fort heureux d'annoncer que je me suis remis à faire de la photographie. J'ai retapé ma [galerie de photo](http://photos.orangeseeds.org/) pour y présenter seulement les meilleures images depuis que j'ai commencé à faire de la photo digitale il y a 8 ans, mais peut-être aussi un jour les photos analogues que je fais depuis bientôt 25 ans. Je considère également exposer des agrandissements de ces photos publiquement durant l'année à venir, suivre le [[tag photo|tag/photo]] pour rester au courant des nouvelles à ce sujet.
+[<img src="https://photos.anarc.at/plein%20air/Gasp%C3%A9sie/Rivi%C3%A8re%20aux%20%C3%A9meraudes.JPG" align="right" />](https://photos.anarc.at/plein%20air/Gasp%C3%A9sie/#/1)Je suis fort heureux d'annoncer que je me suis remis à faire de la photographie. J'ai retapé ma [galerie de photo](http://photos.anarc.at/) pour y présenter seulement les meilleures images depuis que j'ai commencé à faire de la photo digitale il y a 8 ans, mais peut-être aussi un jour les photos analogues que je fais depuis bientôt 25 ans. Je considère également exposer des agrandissements de ces photos publiquement durant l'année à venir, suivre le [[tag photo|tag/photo]] pour rester au courant des nouvelles à ce sujet.
 
 The film beginnings
 ----------------------
 
-[<img src="http://photos.orangeseeds.org/cache/plein_air-gasp%C3%A9sie-baie_des_chaleurs.jpg_150s.jpg" align="left" />](http://photos.orangeseeds.org/#!/plein_air-gaspésie/baie_des_chaleurs.jpg)Bien que j'ai un appareil photo depuis le noël de mes 10 ans, où j'ai reçu un Ricoh automatique (oh le luxe!), c'est seulement lorsque j'ai eu des bons appareils que j'ai vraiment pris plaisir à faire de la photo. Mon premier véritable appareil manuel fut le [Minolta SRT 200](http://camerapedia.wikia.com/wiki/Minolta_SRT_100X), que j'ai vraiment traîné partout. De la côte ouest à l'espagne en passant par la gaspésie, son poids n'a jamais vraiment été un si gros problème, c'était même une certaine fierté, d'avoir un appareil quasi-indestructible, qui n'avait pas besoin de batterie sauf pour le posemètre, optionnel.
+[<img src="https://photos.anarc.at/plein%20air/Gasp%C3%A9sie/baie%20des%20chaleurs.JPG" align="left" />](https://photos.anarc.at/plein%20air/Gasp%C3%A9sie/#/1)
 
-[<img src="http://photos.orangeseeds.org/cache/plein_air-gasp%C3%A9sie-forillon_soir.jpg_150s.jpg" align="right" />](http://photos.orangeseeds.org/#!/plein_air-gaspésie/forillon_soir.jpg)Mais j'ai fini par peu à peu abandonner la photo argentique, par dégoût des prix montants du film et du développement, et de la difficulté à monter un laboratoire de développement couleur complet à la maison. J'ai eu particulièrement de la difficulté à digérer la vitesse à laquelle mes collègues de Indymedia étaient capable de mettre des photos en ligne pendant que j'attendais après la production des photos "développement une heure" après laquelle je devais scanner chaque photo individuellement.
+Bien que j'ai un appareil photo depuis le noël de mes 10 ans, où j'ai reçu un Ricoh automatique (oh le luxe!), c'est seulement lorsque j'ai eu des bons appareils que j'ai vraiment pris plaisir à faire de la photo. Mon premier véritable appareil manuel fut le [Minolta SRT 200](http://camerapedia.wikia.com/wiki/Minolta_SRT_100X), que j'ai vraiment traîné partout. De la côte ouest à l'espagne en passant par la gaspésie, son poids n'a jamais vraiment été un si gros problème, c'était même une certaine fierté, d'avoir un appareil quasi-indestructible, qui n'avait pas besoin de batterie sauf pour le posemètre, optionnel.
+
+[<img src="https://photos.anarc.at/plein%20air/Gasp%C3%A9sie/Forillon%20soir.JPG" align="right" />](https://photos.anarc.at/plein%20air/Gasp%C3%A9sie/#/3)
+Mais j'ai fini par peu à peu abandonner la photo argentique, par dégoût des prix montants du film et du développement, et de la difficulté à monter un laboratoire de développement couleur complet à la maison. J'ai eu particulièrement de la difficulté à digérer la vitesse à laquelle mes collègues de Indymedia étaient capable de mettre des photos en ligne pendant que j'attendais après la production des photos "développement une heure" après laquelle je devais scanner chaque photo individuellement.
 
 The digital beginnings
 -------------------
 
-I will never disown the zen pleasure I got from film photography. Thinking that light burns a small strip of film that must then be exposed to acid necessarily adds meaning to the act, and every shot matters. [<img src="http://photos.orangeseeds.org/cache/plein_air-v%C3%A9n%C3%A9rable_sur_la_rivi%C3%A8re_des_prairies.jpg_150s.jpg" align="left" />](http://photos.orangeseeds.org/#!/plein_air/vénérable_sur_la_rivière_des_prairies.jpg)I nevertheless entered the digital world, once again with a [Christmas present](https://www.youtube.com/watch?v=D2p5svFJ9cQ), thanks to a modest [Canon Powershot A430](http://www.dpreview.com/products/canon/compacts/canon_a430). This altogether very ordinary point-and-shoot served me well. The freedom to take practically as many shots as you want, for free, is very liberating in the end. But obviously this very limited camera came nowhere near my old Minolta, which had served me so well all those years, so my interest in photography faded, as I saw little future in digital cameras, particularly because of the first digital camera I got to use, the [HP Photosmart C200](http://www.dpreview.com/products/hp/compacts/hp_c200), with a miserable 1 megapixel...
+I will never disown the zen pleasure I got from film photography. Thinking that light burns a small strip of film that must then be exposed to acid necessarily adds meaning to the act, and every shot matters. [<img src="https://photos.anarc.at/plein%20air/V%C3%A9n%C3%A9rable%20sur%20la%20rivi%C3%A8re%20des%20prairies.JPG" align="left" />](https://photos.anarc.at/plein%20air/#/3)I nevertheless entered the digital world, once again with a [Christmas present](https://www.youtube.com/watch?v=D2p5svFJ9cQ), thanks to a modest [Canon Powershot A430](http://www.dpreview.com/products/canon/compacts/canon_a430). This altogether very ordinary point-and-shoot served me well. The freedom to take practically as many shots as you want, for free, is very liberating in the end. But obviously this very limited camera came nowhere near my old Minolta, which had served me so well all those years, so my interest in photography faded, as I saw little future in digital cameras, particularly because of the first digital camera I got to use, the [HP Photosmart C200](http://www.dpreview.com/products/hp/compacts/hp_c200), with a miserable 1 megapixel...
 
-[<img src="http://photos.orangeseeds.org/cache/cities-n%C3%B8rre_alslev_station.jpg_150s.jpg" align="right" />](http://photos.orangeseeds.org/#!/cities/nørre_alslev_station.jpg)Then I got back into it on my [phone](https://en.wikipedia.org/wiki/Nokia_N900), which is all in all a modest phone, but functional enough to take good photos, provided the conditions are right. But that is the problem: a bit of speed, some backlight, not enough light, and it's ruined, you can't see anything anymore. Mostly because the control over the camera is mediocre, not to mention the optics. I was back to taking travel photos *ad nauseam*.
+[<img src="https://photos.anarc.at/cities/N%C3%B8rre%20Alslev%20station.jpg" align="right" />](https://photos.anarc.at/cities/#/4)
+Then I got back into it on my [phone](https://en.wikipedia.org/wiki/Nokia_N900), which is all in all a modest phone, but functional enough to take good photos, provided the conditions are right. But that is the problem: a bit of speed, some backlight, not enough light, and it's ruined, you can't see anything anymore. Mostly because the control over the camera is mediocre, not to mention the optics. I was back to taking travel photos *ad nauseam*.
 
-[<img src="http://photos.orangeseeds.org/cache/plein_air-mt._orford-pic_de_lours.jpg_150s.jpg" align="left" />](http://photos.orangeseeds.org/#!/plein_air-mt._orford/pic_de_lours.jpg)I still managed to take what I believe are excellent photos during that period, as much because of exceptional subjects as because of the good quality of the camera. This sunset, for example, was taken quickly at the top of Mont Orford without any adjustment on the camera... In short, it is once again a small automatic camera that gave me the bug back, and I started looking around for a new camera.
+[<img src="https://photos.anarc.at/plein%20air/Mt.%20Orford/Pic%20de%20l'ours.jpg" align="left" />](https://photos.anarc.at/plein%20air/Mt.%20Orford/#/0)
+I still managed to take what I believe are excellent photos during that period, as much because of exceptional subjects as because of the good quality of the camera. This sunset, for example, was taken quickly at the top of Mont Orford without any adjustment on the camera... In short, it is once again a small automatic camera that gave me the bug back, and I started looking around for a new camera.
 
 The digital renewal: RAW
 ----------------------------
 
 And this summer I gave in: I bought myself a small [Canon Powershot G12](http://www.dpreview.com/reviews/CanonG12), a nice, discreet little gadget that takes damn good photos. Granted, it is not an [SLR](https://en.wikipedia.org/wiki/Single-lens_reflex_camera), but it still shoots [RAW](https://en.wikipedia.org/wiki/Raw_image_format) and gives a lot of control, particularly thanks to the many dials on the body, which seems quite sturdy.
 
-[<img src="http://photos.orangeseeds.org/cache/gens-img_0279.jpg_150s.jpg" align="right"/>](http://photos.orangeseeds.org/#!/gens/img_0279.jpg)Le développement des photos RAW a particulièrement renouvelé mon intérêt à la photographie digitale. Grâce à un logiciel libre nommé [darktable](http://darktable.org/), j'ai commencé à *développer* mes photos digitales, un peu comme l'étape de du développement de la photo sur papier une fois que le négatif est fait. On peut changer l'exposition, le contraste, le focus de la photo, faire des masques, etc. Bien que ceci peut mener à des abus et des photos un peu synthétiques (ou "photoshoppées"), j'apprécie cette nouvelle liberté et la possibilité que ce processus offre à la création. Durant le développement, je peux faire ressortir ce que je voulais vraiment voir dans la prise, ce qui était vraiment devant moi, ou dans ma tête quand j'ai pris la photo. Il s'agit en partie du [High Dynamic Range](https://en.wikipedia.org/wiki/High_dynamic_range_rendering), c'est à dire de pouvoir reproduire la portée visuelle de l'œil humain dans l'appareil, ce qui n'est normalement pas possible en photographie traditionnelle sans faire des expositions multiples. 
+[<img src="https://photos.anarc.at/gens/IMG_0279.jpg" align="right"/>](https://photos.anarc.at/gens/#/5)
+Le développement des photos RAW a particulièrement renouvelé mon intérêt à la photographie digitale. Grâce à un logiciel libre nommé [darktable](http://darktable.org/), j'ai commencé à *développer* mes photos digitales, un peu comme l'étape de du développement de la photo sur papier une fois que le négatif est fait. On peut changer l'exposition, le contraste, le focus de la photo, faire des masques, etc. Bien que ceci peut mener à des abus et des photos un peu synthétiques (ou "photoshoppées"), j'apprécie cette nouvelle liberté et la possibilité que ce processus offre à la création. Durant le développement, je peux faire ressortir ce que je voulais vraiment voir dans la prise, ce qui était vraiment devant moi, ou dans ma tête quand j'ai pris la photo. Il s'agit en partie du [High Dynamic Range](https://en.wikipedia.org/wiki/High_dynamic_range_rendering), c'est à dire de pouvoir reproduire la portée visuelle de l'œil humain dans l'appareil, ce qui n'est normalement pas possible en photographie traditionnelle sans faire des expositions multiples. 
 
 For now, however, I refrain from breaking the fidelity rules I set for myself back at the beginning of my film photography: a photo is a photo... You can crop it, change the exposure, bring out the shadows or change the white balance, but I prefer not to make black spots or badly placed telephone poles disappear. The journalistic fibre is still too strong to doctor my photos to that extent.
 
 Geeking out: PhotoFloat
 ----------------------
 
-[<img src="http://photos.orangeseeds.org/cache/cities-montreal-fireproof_roof_plane.jpg_150s.jpg" align="right" />](http://photos.orangeseeds.org/#!/cities-montreal/fireproof_roof_plane.jpg)Finalement, j'ai également pris un peu de temps à "geeker" pour installer une [bonne galerie de photo](http://photos.orangeseeds.org/), basée sur [PhotoFloat](http://www.zx2c4.com/projects/photofloat/), un logiciel fort intéressant et très simple, basé fortement sur le [Javascript](https://en.wikipedia.org/wiki/JavaScript). En fait, du côté serveur, très peu de logiciel: simplement un petit script [Python](https://en.wikipedia.org/wiki/Python_(programming_language)) qui examine les photos et génère une série de [fichiers JSON](https://en.wikipedia.org/wiki/JSON). Les albums sont simplements des dossiers dans lesquels on copie les fichiers qu'on veut publier, le tout est d'une simplicité déconcertante, avec une interface minimaliste que j'adore.
+[<img src="https://photos.anarc.at/cities/montreal/Fireproof_roof_plane.jpg" align="right" />](https://photos.anarc.at/cities/montreal/#/3)
+Finalement, j'ai également pris un peu de temps à "geeker" pour installer une [bonne galerie de photo](https://photos.anarc.at/), basée sur [PhotoFloat](http://www.zx2c4.com/projects/photofloat/), un logiciel fort intéressant et très simple, basé fortement sur le [Javascript](https://en.wikipedia.org/wiki/JavaScript). En fait, du côté serveur, très peu de logiciel: simplement un petit script [Python](https://en.wikipedia.org/wiki/Python_(programming_language)) qui examine les photos et génère une série de [fichiers JSON](https://en.wikipedia.org/wiki/JSON). Les albums sont simplements des dossiers dans lesquels on copie les fichiers qu'on veut publier, le tout est d'une simplicité déconcertante, avec une interface minimaliste que j'adore.
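+
+The general shape of such a generator is simple; as a toy sketch (this
+is not PhotoFloat's actual code), a script like the following is enough
+to index each album folder into a static JSON file:
+
+    import json, os, sys
+
+    root = sys.argv[1]  # top-level gallery directory
+    for album in sorted(os.listdir(root)):
+        path = os.path.join(root, album)
+        if not os.path.isdir(path):
+            continue
+        photos = [f for f in sorted(os.listdir(path))
+                  if f.lower().endswith(('.jpg', '.jpeg', '.png'))]
+        # one static JSON file per album, served as-is by the web server
+        with open(os.path.join(path, 'index.json'), 'w') as fd:
+            json.dump({'album': album, 'photos': photos}, fd)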
 
 I previously used [Gallery](http://gallery.menalto.com/) and [Piwigo](http://www.piwigo.org/) and, in both cases, I ended up finding the software too heavy, too slow, and too complex for my needs. Why run [PHP](http://wiki.theory.org/YourLanguageSucks#PHP_sucks_because:) when you can simply host the files directly on the web server? I much prefer having my photo gallery based on static files that will survive any software upgrade, rather than depending on version X.Y of a program that depends on version Y.Z of PHP...
 
@@ -44,9 +51,9 @@ J'ai contacté l'auteur de PhotoFloat pour contribuer des petites améliorations
 The gallery and privacy
 ---------------------------
 
-[<img src="http://photos.orangeseeds.org/cache/gens-pianovelo.jpg_150s.jpg" align="right" />](http://photos.orangeseeds.org/#!/gens/pianovelo.jpg)Vous remarquerez que la [galerie](http://photos.orangeseeds.org/) a une [section "gens"](http://photos.orangeseeds.org/#!/gens) où je présente les portraits de quelques personnes que je trouve assez réussis. Le portrait est assez nouveau pour moi, car j'ai pris beaucoup de temps à oser prendre les gens en photo, de peur qu'ils refusent, que la photo ne leur plaise pas, etc. Les paysages, eux, n'ont rien à dire, mais je considère que les personnes ont droit de regard sur leur image, s'ils sont le sujet principal de la photo. J'ai donc pris un peu plus de temps avant d'annoncer cette galerie pour pouvoir avoir l'accord des sujets avant de les mettre en ligne, ce qui, je pense, devrait être pratique plus courante même sur les sites comme Facebook.
+[<img src="https://photos.anarc.at/gens/pianovelo.JPG" align="right" />](https://photos.anarc.at/gens/#/9)Vous remarquerez que la [galerie](https://photos.anarc.at/) a une [section "gens"](http://photos.anarc.at/gens/) où je présente les portraits de quelques personnes que je trouve assez réussis. Le portrait est assez nouveau pour moi, car j'ai pris beaucoup de temps à oser prendre les gens en photo, de peur qu'ils refusent, que la photo ne leur plaise pas, etc. Les paysages, eux, n'ont rien à dire, mais je considère que les personnes ont droit de regard sur leur image, s'ils sont le sujet principal de la photo. J'ai donc pris un peu plus de temps avant d'annoncer cette galerie pour pouvoir avoir l'accord des sujets avant de les mettre en ligne, ce qui, je pense, devrait être pratique plus courante même sur les sites comme Facebook.
 
-[<img src="http://photos.orangeseeds.org/cache/plein_air-mt._st-gr%C3%A9goire-coucher.jpg_150s.jpg" align="left"/>](http://photos.orangeseeds.org/#!/plein_air-mt._st-grégoire/coucher.jpg)En plus des portraits, j'ai mis des sections sur l'architecture et le plein air, où je vais collectionner les meilleurs clichés que j'ai pu faire par le passé. Les sections "évènements" et "documentation" sont réservés pour des photos plus journalistiques ou documentaire et pas nécessairement des photos de haute qualité.
+[<img src="https://photos.anarc.at/plein%20air/Mt.%20St-Gr%C3%A9goire/Coucher.jpg" align="left"/>](https://photos.anarc.at/plein%20air/Mt.%20St-Gr%C3%A9goire/#/0)En plus des portraits, j'ai mis des sections sur l'architecture et le plein air, où je vais collectionner les meilleurs clichés que j'ai pu faire par le passé. Les sections "évènements" et "documentation" sont réservés pour des photos plus journalistiques ou documentaire et pas nécessairement des photos de haute qualité.
 
 I invite you to come back to the gallery often. I will publish my best shots there as I work on them, and your comments will be much appreciated here.
 
diff --git a/blog/2012-12-15-retour-la-photo-nouvelle-galerie/comment_1_92a37f3b4dde06ff227871f4f7163474._comment b/blog/2012-12-15-retour-la-photo-nouvelle-galerie/comment_1_92a37f3b4dde06ff227871f4f7163474._comment
new file mode 100644
index 00000000..604a36b1
--- /dev/null
+++ b/blog/2012-12-15-retour-la-photo-nouvelle-galerie/comment_1_92a37f3b4dde06ff227871f4f7163474._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="anarcat"
+ subject="""Update: maintenant Sigal"""
+ date="2018-01-01T17:36:41Z"
+ content="""
+Six years later, I now use Sigal to manage my photos.
+
+I never managed to get my patches merged into PhotoFloat and I gave up on the project. Meanwhile, I did not need any patches at all to get the equivalent features in Sigal. So: victory!
+"""]]
diff --git a/communication.mdwn b/communication.mdwn
index b203341c..61ef42cd 100644
--- a/communication.mdwn
+++ b/communication.mdwn
@@ -90,15 +90,23 @@ depuis je m'y suis remis, et oui, au digital...
 
 My best shots, as well as some journalistic or documentation photos,
 are available in my
-[photo gallery](http://photos.orangeseeds.org/).
+[photo gallery](http://photos.anarc.at/).
 
 I use [Darktable](http://darktable.org) to manage my photo
 collection, especially since I started doing more serious `RAW`
-image processing. I use a modified version of
+image processing. I used to use a modified version of
 [Photofloat](http://www.zx2c4.com/projects/photofloat/)
-for the photo gallery.
-
-I shoot straight from my camera, a Canon G12, but if I shot
+for the photo gallery, but I now use
+[Sigal](http://sigal.saimon.org/) because the maintainer is more
+open to collaboration and it seems better designed. I have also
+started using (and contributing to) [Rapid Photo
+Downloader](http://www.damonlynch.net/rapid/) to replace darktable's
+[somewhat](https://redmine.darktable.org/issues/8415)
+[limited](https://redmine.darktable.org/issues/9056) importer.
+
+I shoot straight from my camera, a Canon G12 and a Fujifilm X-T2
+(see [[blog/2012-12-15-retour-la-photo-nouvelle-galerie]] for a more
+complete history of my cameras), but if I shot
 with a laptop, I would try
 [Entangle](http://entangle-photo.org/). See also [[hardware/camera]]
 for the various cameras I have had or am considering getting.
diff --git a/services.mdwn b/services.mdwn
index 60c33787..9f5a1395 100644
--- a/services.mdwn
+++ b/services.mdwn
@@ -26,7 +26,7 @@ Service        | État                                      | Détails  | Depuis
 [[Web]]        | [[!color background=#00ff00 text="OK"]]   | [[sites hébergés|hosted]], [[nginx]] considéré, [[SSL]] à faire | 1999?    | public      | [Apache][] | hébergement de sites web, sur demande
 [[Wiki]]       | [[!color background=#ffff00 text="dev"]]  | <http://wiki.anarc.at>, à automatiser | 2011     | [public][9]      | [ikiwiki-hosting][]      | Hébergement de wikis [[ikiwiki]], sur demande
 [[Git]]        | [[!color background=#ffff00 text="dev"]]  | <http://src.anarc.at>, deprecated         | ~2012?   | [public][6] | [Git][]                | hébergement de dépôts git, sur demande, en migration vers [Gitlab](https://gitlab.com/anarcat)
-[[Gallery]]    | [[!color background=#ffff00 text="dev"]]  | <http://photos.orangeseeds.org>, à automatiser | 1999?    | [public][4] | [PhotoFloat][]         | galleries de photos, sur demande
+[[Gallery]]    | [[!color background=#ffff00 text="dev"]]  | <https://photos.anarc.at>, à automatiser | 1999?    | [public][4] | [Sigal][]         | galleries de photos, sur demande
 [[Stats]]      | [[!color background=#00ff00 text="OK"]]   | <http://munin.anarc.at> | 2012     | [public][8] | [Munin][]              | statistiques du réseau
 
 Ancien services
@@ -44,7 +44,7 @@ Service        | État                                      | Détails  | Depuis
 
  [2]: http://sondage.orangeseeds.org/
  [3]: http://bm.orangeseeds.org/
- [4]: http://photos.orangeseeds.org/
+ [4]: http://photos.anarc.at/
  [5]: http://status.orangeseeds.org/
  [6]: http://src.anarc.at/
  [7]: http://bm.orangeseeds.org/
@@ -56,7 +56,7 @@ Service        | État                                      | Détails  | Depuis
  [MPD]: http://mpd.wikia.com/
  [OpenSondage]: https://github.com/leblanc-simon/OpenSondage
  [ikiwiki-hosting]: http://ikiwiki-hosting.branchable.com
- [PhotoFloat]: http://www.zx2c4.com/projects/photofloat/
+ [Sigal]: http://sigal.saimon.org/
  [StatusNet]: http://status.net/
  [Drupal]: http://drupal.org/
  [Aegir]: http://aegirproject.org/
diff --git a/services/nginx.mdwn b/services/nginx.mdwn
index 8141af4e..31cdb074 100644
--- a/services/nginx.mdwn
+++ b/services/nginx.mdwn
@@ -14,7 +14,7 @@ Site inventory:
    <https://munin.readthedocs.org/en/latest/example/webserver/nginx.html>
  * orangeseeds.org: port, easy
  * paste.anarcat.ath.cx: port, easy, move to main domain
- * photos.orangeseeds.org: port, easy
+ * photos.anarc.at: port, easy
  * sondage.orangeseeds.org: port, php-fpm
  * stats.reseaulibre.ca: awstats static files?
  * status.orangeseeds.org: replace with buddycloud
diff --git a/services/welcome.mdwn b/services/welcome.mdwn
index 79ed82e5..df418691 100644
--- a/services/welcome.mdwn
+++ b/services/welcome.mdwn
@@ -41,7 +41,7 @@ Voir aussi la [liste complète des services][]:
  [Hébergement web]: http://orangeseeds.org/$username
  [Partage de fichiers]: sftp://shell.orangeseeds.org
  [Radio]: http://radio.anarc.at/
- [Gallerie de photos]: http://photos.orangeseeds.org
+ [Gallerie de photos]: https://photos.anarc.at/
  [liste complète des services]: http://anarc.at/services
 
 Commandes shell

update: bought a camera and some more notes
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 773a3613..d2af4ac5 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -2,6 +2,11 @@ Ceci est la documentation sur l'achat d'un nouvel appareil, mais aussi une tenta
 
 [[!toc levels=2]]
 
+Update: j'ai acheté une Fujifilm X-T2, principalement à cause de sa
+familiarité avec les vieux systèmes argentiques et de la qualité
+exceptionnelle des rendus de l'appareil, ce qui réduit ma dépendance à
+l'ordinateur.
+
 [Comparatif actuel DPR](https://www.dpreview.com/products/compare/side-by-side?products=fujifilm_x100f&products=fujifilm_xt1&products=fujifilm_xt2&products=oly_em5ii&products=olympus_em1ii&products=sony_a7_ii&products=sony_a9&products=nikon_d750&products=nikon_d7500&products=canon_g12&sortDir=ascending)
 
 Things I need
@@ -344,6 +349,8 @@ Possible lenses
  * Telephoto: Fujifilm XF 55-200mm F3.5-4.8 R LM OIS ([dpreview](https://www.dpreview.com/reviews/fujifilm-55-200-3p5-4p8),
    [600$USD](https://www.bhphotovideo.com/c/product/966855-REG/fujifilm_55_200mm_f_3_5_4_8_xf_r.html), ~84-305mm)
 
+See also [this excellent overview of most XF lenses](https://alikgriffin.com/a-complete-list-of-fujifilm-x-mount-lenses/).
+
 X100F
 -----
 
@@ -352,7 +359,7 @@ stats:
  * 1800$ lozeau
  * similar stats to X-T2, except:
  * F2-16, 35mm
- * USB-2, chargin
+ * USB-2, charging
  * 390g
  * 8fps
 
@@ -361,6 +368,7 @@ pro:
  * really cute and small
  * good but minimal controls
  * nostalgia
+ * same battery as X-T2
 
 cons:
 

fix markup and broken link
diff --git a/blog/2017-12-13-kubecon-overview.mdwn b/blog/2017-12-13-kubecon-overview.mdwn
index 6f71c68f..e7cef9e1 100644
--- a/blog/2017-12-13-kubecon-overview.mdwn
+++ b/blog/2017-12-13-kubecon-overview.mdwn
@@ -16,11 +16,11 @@ Alibaba.
 
 In addition, KubeCon saw an impressive number of diversity scholarships,
 which "*include free admission to KubeCon and a travel stipend of up to
-\$1,500, aimed at supporting those from traditionally underrepresented
+$1,500, aimed at supporting those from traditionally underrepresented
 and/or marginalized groups in the technology and/or open source
 communities*", [according to Neil McAllister of
 CoreOS](https://coreos.com/blog/kubecon-2017-recap-day-1). The diversity
-team raised an impressive \$250,000 to bring 103 attendees to Austin
+team raised an impressive $250,000 to bring 103 attendees to Austin
 from all over the world.
 
 We have looked into Kubernetes in the past but, considering the speed at
@@ -137,7 +137,7 @@ developers would deploy containers on Amazon by using EC2 (Elastic
 Compute Cloud) VMs to run containers with Docker.
 
 This move by Amazon has been met with [skepticism in the
-community](https://medium.com/@cloud_opinion/Kubernetes-on-aws-caution-c5acae0e1790%0A).
+community](https://medium.com/@cloud_opinion/Kubernetes-on-aws-caution-c5acae0e1790).
 The concern here is that Amazon could pull the plug on Kubernetes when
 it hinders the bottom line, like [it did with the Chromecast products on
 Amazon](https://www.recode.net/2015/10/1/11619122/amazon-raises-a-walled-garden-by-booting-apple-tv-google-chromecast).
@@ -182,7 +182,7 @@ And, of course, Kubernetes can still be run on bare metal in a
 colocation facility, but those costs are getting less and less
 affordable. In an [enlightening
 talk](https://kccncna17.sched.com/event/CU8N/the-true-costs-of-running-cloud-native-infrastructure-b-dmytro-dyachuk-pax-automa),
-Dmytro Dyachuk explained that unless cloud costs hit \$100,000 per
+Dmytro Dyachuk explained that unless cloud costs hit $100,000 per
 month, users may be better off staying in the cloud. Indeed, that is
 where a lot of applications end up. During an [industry
 roundtable](https://kccncna17.sched.com/event/CU7S/panel-Kubernetes-cloud-native-and-the-public-cloud-b-moderated-by-dan-kohn-cloud-native-computing-foundation),

creating tag page tag/containers
diff --git a/tag/containers.mdwn b/tag/containers.mdwn
new file mode 100644
index 00000000..979477e9
--- /dev/null
+++ b/tag/containers.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged containers"]]
+
+[[!inline pages="tagged(containers)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/kubernetes
diff --git a/tag/kubernetes.mdwn b/tag/kubernetes.mdwn
new file mode 100644
index 00000000..fa418fe4
--- /dev/null
+++ b/tag/kubernetes.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged kubernetes"]]
+
+[[!inline pages="tagged(kubernetes)" actions="no" archive="yes"
+feedshow=10]]

merge kubecon article
diff --git a/blog/2017-12-13-kubecon-overview.mdwn b/blog/2017-12-13-kubecon-overview.mdwn
new file mode 100644
index 00000000..6f71c68f
--- /dev/null
+++ b/blog/2017-12-13-kubecon-overview.mdwn
@@ -0,0 +1,257 @@
+[[!meta title="An overview of KubeCon + CloudNativeCon"]]
+
+The [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF)
+held its conference, [KubeCon +
+CloudNativeCon](http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america),
+in December 2017. There were 4000 attendees at this gathering in Austin,
+Texas, more than all the previous KubeCons before, which shows the rapid
+growth of the community building around the tool that was announced by
+Google in 2014. Large corporations are also taking a larger part in the
+community, with major players in the industry
+[joining](https://techcrunch.com/2017/09/20/Kubernetes-gains-momentum-as-big-name-vendors-join-cloud-native-computing-foundation/)
+the CNCF, which is a project of the Linux Foundation. The CNCF now
+features three of the largest cloud hosting businesses (Amazon, Google,
+and Microsoft), but also emerging companies from Asia like Baidu and
+Alibaba.
+
+In addition, KubeCon saw an impressive number of diversity scholarships,
+which "*include free admission to KubeCon and a travel stipend of up to
+\$1,500, aimed at supporting those from traditionally underrepresented
+and/or marginalized groups in the technology and/or open source
+communities*", [according to Neil McAllister of
+CoreOS](https://coreos.com/blog/kubecon-2017-recap-day-1). The diversity
+team raised an impressive \$250,000 to bring 103 attendees to Austin
+from all over the world.
+
+We have looked into Kubernetes in the past but, considering the speed at
+which things are moving, it seems time to make an update on the projects
+surrounding this newly formed ecosystem.
+
+The CNCF and its projects
+-------------------------
+
+The CNCF was founded, in part, to manage the Kubernetes software
+project, which was donated to it by Google in 2015. From there, the
+number of projects managed under the CNCF umbrella has grown quickly. It
+first added the [Prometheus](http://prometheus.io/) monitoring and
+alerting system, and then quickly went up from four projects in the
+first year, to [14 projects](https://www.cncf.io/projects/) at the time
+of this writing, with more expected to join shortly. The CNCF's latest
+additions to its roster are
+[Notary](https://github.com/theupdateframework/notary) and [The Update
+Framework](https://github.com/theupdateframework/specification) (TUF,
+which we [previously covered](https://lwn.net/Articles/629426/)), both
+projects aimed at providing software verification. Those add to the
+already existing projects which are, bear with me,
+[OpenTracing](http://opentracing.io/) (a tracing API),
+[Fluentd](http://www.fluentd.org/) (a logging system),
+[Linkerd](https://linkerd.io/) (a "service mesh", which we [previously
+covered](https://lwn.net/Articles/719282/)), [gRPC](http://www.grpc.io/)
+(a "universal RPC framework" used to communicate between pods),
+[CoreDNS](https://coredns.io/) (DNS and service discovery),
+[rkt](https://github.com/rkt/rkt) (a container runtime),
+[containerd](http://containerd.io/) (*another* container runtime),
+[Jaeger](https://github.com/uber/jaeger) (a tracing system),
+[Envoy](https://lyft.github.io/envoy/) (*another* "service mesh"), and
+[Container Network
+Interface](https://github.com/containernetworking/cni) (CNI, a
+networking API).
+
+This is an incredible diversity, if not fragmentation, in the community.
+The CNCF made this large diagram depicting Kubernetes-related
+projects—so large that you will have a hard time finding a monitor that
+will display the whole graph without scaling it (seen below, click
+through for larger version). The diagram shows *hundreds* of projects,
+and it is hard to comprehend what all those components do and if they
+are all necessary or how they overlap. For example, Envoy and Linkerd
+are similar tools yet both are under the CNCF umbrella—and I'm ignoring
+*two* more such projects presented at KubeCon ([Istio](https://istio.io)
+and [Conduit](https://buoyant.io/2017/12/05/introducing-conduit/)). You
+could argue that all tools have different focus and functionality, but
+it still means you need to learn about all those tools to pick the right
+one, which may discourage and confuse new users.
+
+[![Cloud Native landscape](https://static.lwn.net/images/2017/cn-landscape.png)](https://raw.githubusercontent.com/cncf/landscape/master/landscape/CloudNativeLandscape_latest.jpg)
+
+You may notice that containerd and rkt are *both* projects of the CNCF,
+even though they overlap in functionality. There is also a *third*
+Kubernetes runtime called [CRI-O](http://cri-o.io/) built by Red Hat.
+This kind of fragmentation leads to significant confusion within the
+community as to which runtime they should use, or if they should even
+care. We'll run a separate article about CRI-O and the other runtimes to
+try to clarify this shortly.
+
+Regardless of this complexity, it does seem the space is maturing. In
+his keynote, Dan Kohn, executive director of the CNCF, announced "1.0"
+releases for 4 projects:
+[CoreDNS](https://coredns.io/2017/12/01/coredns-1.0.0-release/),
+[containerd](https://www.cncf.io/blog/2017/12/05/general-availability-containerd-1-0/),
+[Fluentd](https://www.cncf.io/blog/2017/12/06/fluentd-v1-0/) and
+[Jaeger](https://www.cncf.io/blog/2017/12/06/announcing-jaeger-1-0-release/).
+Prometheus also had a [major 2.0
+release](https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0/),
+which we will cover in a separate article.
+
+There were significant announcements at KubeCon for projects that are
+not directly under the CNCF umbrella. Most notable for operators
+concerned about security is the introduction of [Kata
+Containers](http://katacontainers.io/), which is basically a merge of
+[runV](https://github.com/hyperhq/runv) from
+[Hyper.sh](https://hyper.sh/) and Intel's [Clear
+Containers](https://clearlinux.org/containers) projects. Kata
+Containers, introduced during a keynote by Intel's VP of the software
+and services group, Imad Sousou, are virtual-machine-based containers,
+or, in other words, containers that run in a hypervisor instead of under
+the supervision of the Linux kernel. The rationale here is that
+containers are convenient but all run on the same kernel, so the
+compromise of a single container can leak into all containers on the
+same host. This may be unacceptable in certain environments, for example
+for multi-tenant clusters where containers cannot trust each other.
+
+Kata Containers promises the "best of both worlds" by providing the
+speed of containers and the isolation of VMs. It does this by using
+minimal custom kernel builds, to speed up boot time, and parallelizing
+container image builds and VM startup. It also uses tricks like
+[same-page memory sharing across
+VMs](https://en.wikipedia.org/wiki/Kernel_same-page_merging) to
+deduplicate memory across virtual machines. It currently works only on
+x86 and KVM, but it integrates with Kubernetes, Docker, and OpenStack.
+There was a [talk explaining the technical
+details](https://kccncna17.sched.com/event/CU81/kata-containers-hypervisor-based-container-runtime-xu-wang-hyperhq-samuel-ortiz-intel);
+that page should eventually feature video and slide links.
+
+Industry adoption
+-----------------
+
+As hinted earlier, large cloud providers like [Amazon Web
+Services](https://en.wikipedia.org/wiki/Amazon_Web_Services) (AWS) and
+[Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure) are
+adopting the Kubernetes platform, or at least its API. The keynotes
+featured AWS prominently; Adrian Cockcroft (AWS vice president of cloud
+architecture strategy) announced the
+[Fargate](https://aws.amazon.com/fargate/) service, which introduces
+containers as "first class citizens" in the Amazon infrastructure.
+Fargate should run alongside, and potentially replace, the existing
+Amazon EC2 Container Service (ECS), which is currently the way
+developers would deploy containers on Amazon by using EC2 (Elastic
+Compute Cloud) VMs to run containers with Docker.
+
+This move by Amazon has been met with [skepticism in the
+community](https://medium.com/@cloud_opinion/Kubernetes-on-aws-caution-c5acae0e1790%0A).
+The concern here is that Amazon could pull the plug on Kubernetes when
+it hinders the bottom line, like [it did with the Chromecast products on
+Amazon](https://www.recode.net/2015/10/1/11619122/amazon-raises-a-walled-garden-by-booting-apple-tv-google-chromecast).
+This seems to be part of a changing strategy by the corporate sector in
+adoption of free-software tools. While historically companies like
+Microsoft or Oracle have been hostile to free software, they are now not
+only using free software but also releasing free software. Oracle, for
+example, released what it called "[Kubernetes Tools for Serverless
+Deployment and Intelligent Multi-Cloud
+Management](https://www.oracle.com/corporate/pressrelease/oracle-Kubernetes-tools-120617.html)",
+named [Fn](http://fnproject.io/). Large cloud providers are getting
+certified by the CNCF for compliance with the Kubernetes API and other
+standards.
+
+One theory to explain this adoption is that free-software projects are
+becoming on-ramps to proprietary products. In this strategy, as
+[explained by
+InfoWorld](https://www.infoworld.com/article/3238491/open-source-tools/open-source-innovation-is-now-all-about-vendor-on-ramps.html),
+open-source tools like Kubernetes are merely used to bring consumers
+over to proprietary platforms. Sure, the client and the API are open,
+but the underlying software can be proprietary. The data and *some*
+magic interfaces, especially, remain proprietary. Key examples of this
+include the ["serverless"
+services](https://en.wikipedia.org/wiki/Serverless_computing), which are
+currently not standardized at all: each provider has its own
+incompatible framework that could be a deliberate lock-in strategy.
+Indeed, a [common
+definition](https://www.martinfowler.com/articles/serverless.html) of
+serverless, from Martin Fowler, goes as follows:
+
+> Serverless architectures refer to applications that significantly
+> depend on third-party services (known as Backend as a Service or
+> "BaaS") or on custom code that's run in ephemeral containers (Function
+> as a Service or "FaaS").
+
+By designing services that explicitly require proprietary,
+provider-specific APIs, providers ensure customer lock-in at the core of
+the software architecture. One of the upcoming battles in the community
+will be exactly how to standardize this emerging architecture.
+
+And, of course, Kubernetes can still be run on bare metal in a
+colocation facility, but those costs are getting less and less
+affordable. In an [enlightening
+talk](https://kccncna17.sched.com/event/CU8N/the-true-costs-of-running-cloud-native-infrastructure-b-dmytro-dyachuk-pax-automa),
+Dmytro Dyachuk explained that unless cloud costs hit \$100,000 per
+month, users may be better off staying in the cloud. Indeed, that is
+where a lot of applications end up. During an [industry
+roundtable](https://kccncna17.sched.com/event/CU7S/panel-Kubernetes-cloud-native-and-the-public-cloud-b-moderated-by-dan-kohn-cloud-native-computing-foundation),
+Hong Tang, chief architect at Alibaba Cloud, posited that the "majority
+of computing will be in the public cloud, just like electricity is
+produced by big power plants".
+
+The question, then, is how to split that market between the large
+providers. And, indeed, according to a [CNCF survey of 550 conference

(Diff truncated)
quote from twitter
diff --git a/sigs.fortune b/sigs.fortune
index 357fa2c1..cbd863e2 100644
--- a/sigs.fortune
+++ b/sigs.fortune
@@ -1080,3 +1080,7 @@ and suddenly write virtualization layers without security holes.
 %
 Be curious. Read widely. Try new things. I think a lot of what people
 call intelligence boils down to curiosity.  - Aaron Swartz
+%
+The class which has the power to rob upon a large scale has also the
+power to control the government and legalize their robbery.
+                        - Eugene V. Debs

add superzoom option for fuji
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 96dac792..773a3613 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -336,6 +336,8 @@ Possible lenses
  * Zoom: Fujifilm Fujinon XF 18-55mm f/2.8-4 R LM OIS, incl. dans le
    [kit lozeau](https://lozeau.com/produits/fr/photo/appareils-sans-miroir-hybrides/fujifilm/fujifilm/ensemble-fujifilm-x-t2-avec-18-55mm-f-2-8-4-ois-p29371c74c77c101/) (2250$CAD donc ~300CAD vs [900$CAD](https://lozeau.com/produits/fr/photo/objectifs/fujifilm/fujifilm/fujifilm-fujinon-xf-18-55mm-f-2-8-4-r-lm-ois-p24714c74c91c145/?sort=p.price&order=ASC) - meme prix
    ([700USD](https://www.bhphotovideo.com/c/product/883530-REG/Fujifilm_XF_18_55mm_f_2_8_4_OIS.html) chez B&H et Henry's)
+ * Superzoom: Fujifilm Fujinon XF 18-135mm f/3.5-5.6 R LM OIS WR
+   ([1050$CAD](https://lozeau.com/produits/fr/photo/objectifs/fujifilm/fujifilm/fujifilm-fujinon-xf-18-135mm-f-3-5-5-6-r-lm-ois-wr-p24746c74c91c145/?sort=p.price&order=ASC), [dpreview](https://www.dpreview.com/products/fujifilm/lenses/fujifilm_xf_18-135_3p5-5p6_wr/overview) (no review), ~27-206mm)
  * Prime 35mm: Fujifilm XF 27mm f/2.8 Lens ([400$USD](https://www.bhphotovideo.com/c/product/984430-REG/fujifilm_16389123_fujinon_xf_27mm_f_2_8.html) at B&H,
    [500$CAD](https://www.henrys.com/78759-FUJINON-XF-27MM-F2-8-LENS-BLACK.aspx) at Lozeau's and Henry's, ~41mm)
  * Prime 50mm: Fujifilm XF 35mm f/2 R WR Lens ([400$USD](https://www.bhphotovideo.com/c/product/1191420-REG/fujifilm_xf_35mm_f_2_r.html), ~53mm)

add possible lenses for fuji
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 91737510..96dac792 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -330,6 +330,18 @@ Con:
  * may not be used to their system (although it looks quite intuitive)
  * firmware not hackable?
 
+Possible lenses
+---------------
+
+ * Zoom: Fujifilm Fujinon XF 18-55mm f/2.8-4 R LM OIS, incl. dans le
+   [kit lozeau](https://lozeau.com/produits/fr/photo/appareils-sans-miroir-hybrides/fujifilm/fujifilm/ensemble-fujifilm-x-t2-avec-18-55mm-f-2-8-4-ois-p29371c74c77c101/) (2250$CAD donc ~300CAD vs [900$CAD](https://lozeau.com/produits/fr/photo/objectifs/fujifilm/fujifilm/fujifilm-fujinon-xf-18-55mm-f-2-8-4-r-lm-ois-p24714c74c91c145/?sort=p.price&order=ASC) - meme prix
+   ([700USD](https://www.bhphotovideo.com/c/product/883530-REG/Fujifilm_XF_18_55mm_f_2_8_4_OIS.html) chez B&H et Henry's)
+ * Prime 35mm: Fujifilm XF 27mm f/2.8 Lens ([400$USD](https://www.bhphotovideo.com/c/product/984430-REG/fujifilm_16389123_fujinon_xf_27mm_f_2_8.html) at B&H,
+   [500$CAD](https://www.henrys.com/78759-FUJINON-XF-27MM-F2-8-LENS-BLACK.aspx) at Lozeau's and Henry's, ~41mm)
+ * Prime 50mm: Fujifilm XF 35mm f/2 R WR Lens ([400$USD](https://www.bhphotovideo.com/c/product/1191420-REG/fujifilm_xf_35mm_f_2_r.html), ~53mm)
+ * Telephoto: Fujifilm XF 55-200mm F3.5-4.8 R LM OIS ([dpreview](https://www.dpreview.com/reviews/fujifilm-55-200-3p5-4p8),
+   [600$USD](https://www.bhphotovideo.com/c/product/966855-REG/fujifilm_55_200mm_f_3_5_4_8_xf_r.html), ~84-305mm)
+
 X100F
 -----
 

more camera details
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index b4c4658c..91737510 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -338,7 +338,7 @@ stats:
  * 1800$ lozeau
  * similar stats to X-T2, except:
  * F2-16, 35mm
- * USB-2
+ * USB-2, chargin
  * 390g
  * 8fps
 
@@ -359,6 +359,8 @@ cons:
 X-T2
 ----
 
+maybe wait for usb-c and better/smaller/cheaper lenses?
+
 stats:
 
  * APS-C
@@ -391,6 +393,7 @@ cons:
  * no builtin-flash (but small hot-shoe flash included)
  * lower battery life than G12
  * AF may not work on all lenses? issues in low light?
+ * cannot charge while turned on
 
 X-T1
 ----

got two cameras reversed, add link to comparison
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index b3dced73..b4c4658c 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -429,7 +429,7 @@ pro:
 
  * 200$ flat-fee to repair
 
-E-M1 not sealed, but cheap (<1000$)
+E-M10 not sealed, but cheap (<1000$)
 
 OM-D E-M5 II
 ------------
@@ -464,9 +464,8 @@ con:
 
 try it in store to see if this is a deal-breaker.
 
-
-OM-D E-M10 III
---------------
+OM-D E-M1 II
+------------
 
 stats:
 
@@ -490,6 +489,7 @@ cons:
  * trop cher (2300$+ chez lozeau)!
  * compliquée
  * no builtin flash (but hot-shoe flash included)
+ * [grainy when compared with X-T2](https://www.dpreview.com/reviews/image-comparison/fullscreen?attr18=lowlight&attr13_0=fujifilm_xt2&attr13_1=olympus_em1ii&attr15_0=raw&attr15_1=raw&attr16_0=25600&attr16_1=25600&attr126_0=1&attr126_1=1&normalization=full&widget=9&x=0.036022800728683066&y=0.3545271629778671)
 
 Inventaire
 ==========

add note about repair with olympus
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index a29d0809..b3dced73 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -425,8 +425,11 @@ con:
 Olympus
 =======
 
-E-M1 not sealed, but cheap (<1000$)
+pro:
 
+ * 200$ flat-fee to repair
+
+E-M1 not sealed, but cheap (<1000$)
 
 OM-D E-M5 II
 ------------

more camera reviews and stats: include the olympus M5
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 93dc6410..a29d0809 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -2,6 +2,8 @@ Ceci est la documentation sur l'achat d'un nouvel appareil, mais aussi une tenta
 
 [[!toc levels=2]]
 
+[Comparatif actuel DPR](https://www.dpreview.com/products/compare/side-by-side?products=fujifilm_x100f&products=fujifilm_xt1&products=fujifilm_xt2&products=oly_em5ii&products=olympus_em1ii&products=sony_a7_ii&products=sony_a9&products=nikon_d750&products=nikon_d7500&products=canon_g12&sortDir=ascending)
+
 Things I need
 =============
 
@@ -15,21 +17,22 @@ Absolute requirements
  * 3200+ ISO
  * case max 1000$
  * APS-C or larger
+ * sealed body
+ * SD card support
 
 Nice to have
 ------------
 
  * USB charging
- * top screen
+ * top screen?
  * cursor or pad? not sure
  * 6400+ ISO
  * 2-3 FPS+
  * articulated display
  * casing max 650$, 1000$ total
  * easy access live view
- * SD card support
  * timelapse mode (intervalometer)
- * sealed body
+ * compatible with my current remote
 
 Candy on top
 ------------
@@ -38,7 +41,6 @@ Candy on top
  * exposure bracketing
  * full frame
  * free or cheap (~200-300$)
- * compatible with my current remote
 
 Not necessary
 -------------
@@ -222,7 +224,7 @@ Con:
  * expensive, paying for the name
  * no flash on full-frame cameras
  * EF vs EF-S: entry-level lenses are not compatible with higher end bodies, that's evil
- * older lenses not supported   
+ * older lenses not supported
 
 5D Mark III
 -----------
@@ -423,12 +425,45 @@ con:
 Olympus
 =======
 
-E-M5 ignored because of poor grip and weird top layout.
-
 E-M1 not sealed, but cheap (<1000$)
 
-Olympus OM-D E-M10 III
-----------------------
+
+OM-D E-M5 II
+------------
+
+stats:
+
+ * fully articulated
+ * 469g
+ * wifi
+ * USB-2
+ * single SD
+ * 1080p video, stereo mike
+ * 10fps
+ * 4/3 sensor
+ * ISO 25600
+ * 16 MPx
+
+pro:
+
+ * sealed
+ * very customizable
+ * interesting hi-res mode, but fails on moving targets
+ * first-class stabilisation
+
+con:
+
+ * very customizable (!) means steep second learning curve
+ * poor grip?
+ * weird top layout?
+ * possibly limited battery life (CIPA worse than G12 and X-T2)
+ * no built-in flash (hot-shoe external *rotating* flash included)
+
+try it in store to see if this is a deal-breaker.
+
+
+OM-D E-M10 III
+--------------
 
 stats:
 
@@ -613,14 +648,15 @@ the place, and Olympus is aiming at a lower-range.
 Documentation
 =============
 
- * [Sensor sizes](https://en.wikipedia.org/wiki/Image_sensor_format)
-   * Full frame: 864mm²
-   * APS-C: 370mm²
-   * APS-C (Canon): 329mm²
-   * 1/1.7" (Canon Powershot G12): 43mm²
+ * [Sensor sizes](https://en.wikipedia.org/wiki/Image_sensor_format#Common_image_sensor_formats)
+   * Full frame (Sony, etc): 864mm² (100%)
+   * APS-C (Fuji, etc): 370mm² (42%)
+   * APS-C (Canon): 329mm² (38%)
+   * 4/3 (Olympus): 225mm² (26%)
+   * 1/1.7" (Canon Powershot G12): 43mm² (5%)
  * Lentilles:
    * [Lens mount guide](http://www.kehblog.com/2011/12/lens-mount-guide-part-1.html)
    * [Another](http://rick_oleson.tripod.com/index-99.html)
    * [Wikipedia](https://en.wikipedia.org/wiki/Lens_mount)
    * [Lens buying guide](https://www.dpreview.com/articles/9162056837/digital-camera-lens-buying-guide)
-
+ * [Darktable camera support](https://www.darktable.org/resources/camera-support/): pretty uniform across brands

olympus and lens price comparison
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 07ed7ce5..93dc6410 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -420,6 +420,39 @@ con:
  * may have autofocus issues
  * no cursor
 
+Olympus
+=======
+
+E-M5 ignored because of poor grip and weird top layout.
+
+E-M1 not sealed, but cheap (<1000$)
+
+Olympus OM-D E-M10 III
+----------------------
+
+stats:
+
+ * touchscreen
+ * dual SD
+ * wireless
+ * USB-3
+ * 4/3
+ * ISO 25600
+ * 20MPx
+
+pros:
+
+ * sealed
+ * good battery life
+ * excellent stabilisation according to dpreview
+ * fully articulated
+
+cons:
+
+ * trop cher (2300$+ chez lozeau)!
+ * compliquée
+ * no builtin flash (but hot-shoe flash included)
+
 Inventaire
 ==========
 
@@ -465,25 +498,129 @@ Lentilles
 
 Note: of all those lens, only the [Canon EF] and [Pentax K] are still in use.
 
-Voir aussi:
-
- * [Lens mount guide](http://www.kehblog.com/2011/12/lens-mount-guide-part-1.html)
- * [Another](http://rick_oleson.tripod.com/index-99.html)
- * [Wikipedia](https://en.wikipedia.org/wiki/Lens_mount)
-
-https://www.dpreview.com/articles/9162056837/digital-camera-lens-buying-guide
-
 Flash
 -----
 
 * Vivitar 2000, horse-shoe avec câble de synchro (pentax?)
 * Suntax 16A
 
-Autre docu
-----------
+Lens Comparison
+===============
+
+ * wide: <28mm (<18mm APS-C, or <14mm 4/3)
+ * normal: ~50mm (~30mm APS-C or 25mm 4/3)
+ * zoom: 70-210mm (40-125mm APS-C or 35-105mm 4/3)
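+
+(quick sanity check of those full-frame equivalents; rough sketch, assuming
+the usual crop factors of ~1.5x for APS-C and 2x for 4/3)
+
+    # full-frame equivalent = real focal length x crop factor (approximation)
+    CROP = {'APS-C': 1.5, '4/3': 2.0}
+    for focal in (18, 30, 50, 125):
+        for fmt, factor in CROP.items():
+            print('%dmm on %s ~ %.0fmm full-frame' % (focal, fmt, focal * factor))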
+
+Prix chez B&H, 2017-12-13.
+
+Prime, normal, 1.8
+------------------
+
+ * Canon: 125$ 50mm EF
+ * Sony:  200$ 50mm FE
+ * Nikon: 215$ 50mm AF-S
+ * Olympus: 250$ 50mm (25mm 4/3)
+ * Fuji: 450$ (?) 50mm f/2 (?)
+ * Fuji: 720$ 32mm (??)
+
+Conclusion:
+
+ * Canon cheapest
+ * Sony competitive
+ * Olympus close
+ * Nikon closer
+ * Fuji weird, but very competitive for <= f/1.4
+
+Prime, normal, 1.4
+------------------
+
+ * Sony: 300$ 50mm
+ * Canon: 330$ 50mm
+ * Fuji: 340$ 50mm f/1.2 (!)
+ * Fuji: 400$ 75mm (50mm) f1.2 (!)
+ * Nikon: 445$ 50mm AF-S
+ * Olympus: N/A seul. 1.2 (1300$!) ou 1.8
+ * Fuji: 430$ (?)
+
+Conclusion:
+
+ * Sony cheapest
+ * Canon close
+ * Fuji way ahead for 1.2
+ * Nikon weird
+ * Olympus weirder
+
+Zoom
+----
+
+SLR (Canon, Nikon) omitted for simplicity.
+
+ * Olympus: 14-42mm (28-84mm) 200$ f/3.5-5.6
+ * Sony: 16-50mm (24-75mm) 275$ f/3.5-5.6
+ * Fuji: 16-50mm (24-76mm) 400$ f/3.5-5.6
+ * Sony: 18-105mm (27-158mm!!) 550$ f/4 (?)
+ * Fuji: 18-55mm (27-84mm) 700$ f/2.8-4
+ * Olympus: 12-50mm (24-80mm) 800$ f/2.8 (!)
+
+Conclusion:
+
+ * Olympus cheaper for f/3.5, amazing continuous
+ * Sony leading all around, esp. in that continuous f/4
+ * Fuji last, but competitive
+
+Telephoto
+---------
+
+SLR (Canon, Nikon) omitted for simplicity.
+
+ * Olympus: 40-155mm (80-300) 100$ (!) f/4-5.6
+ * Sony: 55-210 (82-315mm) 350$ f/4.5-6.3
+ * Fuji: 50-230mm (76-350mm) 400$ f4.5-6.7
+ * Fuji: 55-200mm (84-305mm) 600$ f/3.5-4.8
+
+Continuous:
+
+ * Olympus: 40-155mm (80-300mm) 1300$ f/2.8 (!!)
+ * Fuji: 50-140mm (75-213mm) 1450$ f/2.8 (!!)
+
+Primes:
+
+ * Olympus: 300mm (450mm) 180$ (!) f/6.3
+ * Fuji: 300mm (450mm) 210$ f/6.3
+ * Sony: 300mm (450mm) 250$ f/6.3
+ * Fuji: 135mm (200mm) 480$ f/2 (!!)
+ * Sony: 135mm (...) 1700$ (?) f/1.8 (!!)
+
+Conclusion:
+
+ * Olympus cheapest again
+ * Sony generally close
+ * Fuji competitive, cheapest for fast primes
+
+Overall
+-------
+
+ * SLR are cheapest
+ * Olympus cheapest for zooms, Sony very close, Fuji not completely off
+ * Sony cheapest for 1.4-1.8 primes
+ * Fuji cheapest for best primes (1.2, which are affordable, including
+   telephoto!)
+ * Olympus expensive for primes.
+
+Looks like Fuji is targeting a more high-end market, Sony is all over
+the place, and Olympus is aiming at a lower-range.
+
+Documentation
+=============
 
  * [Sensor sizes](https://en.wikipedia.org/wiki/Image_sensor_format)
    * Full frame: 864mm²
    * APS-C: 370mm²
    * APS-C (Canon): 329mm²
    * 1/1.7" (Canon Powershot G12): 43mm²
+ * Lentilles:
+   * [Lens mount guide](http://www.kehblog.com/2011/12/lens-mount-guide-part-1.html)
+   * [Another](http://rick_oleson.tripod.com/index-99.html)
+   * [Wikipedia](https://en.wikipedia.org/wiki/Lens_mount)
+   * [Lens buying guide](https://www.dpreview.com/articles/9162056837/digital-camera-lens-buying-guide)
+

MOAR CAMERAS MIAM
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 6f0486db..07ed7ce5 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -107,9 +107,10 @@ D700
 
 1000$ used @ simons
 
+full frame
+
 Pros:
 
- * full frame
  * very similar to D300
 
 Cons:
@@ -161,19 +162,22 @@ d7200: 1000$ lozeau: https://lozeau.com/produits/fr/photo/appareils-reflex/nikon
 D750
 ----
 
-still recommended option by dpreview:
+stats:
 
-https://www.dpreview.com/reviews/2017-buying-guide-best-cameras-under-2000/2
+ * 24Mpx
+ * full frame
+ * still recommended option by dpreview:
+   https://www.dpreview.com/reviews/2017-buying-guide-best-cameras-under-2000/2
+ * ISO 12800
 
 pro:
 
  * articulated display
- * fullframe
  * 2 sd slots
 
 con:
 
- * expensive (1800CAD lozeau 2017-12-7)
+ * expensive (boitier 1800CAD lozeau 2017-12-7)
 
 Lentilles
 ---------
@@ -223,11 +227,12 @@ Con:
 5D Mark III
 -----------
 
-Amazing camera
+Amazing camera (performance-wise)
+
+full frame sensor
 
 pro:
 
- * full frame sensor
  * amazing ISO range (100 - 102400)
  * dual SD and CF card support
  * awesome grip and feel
@@ -243,6 +248,7 @@ con:
 
  * price: 650$ (lozeau)
  * dpreview rating: 79%
+ * APS-C
 
 pro:
 
@@ -259,13 +265,13 @@ con:
  * film record button is live view, not shutter (!?)
  * film mode is on the mode dial, not next to live view (?! no auto
    mode then??)
- * APS-C
 
 7D
 --
 
  * price: ~1000$ (lozeau)
  * dpreview rating: 84%
+ * APS-C
 
 pro:
 
@@ -277,10 +283,24 @@ con:
 
  * mode lock
  * cursor, not pad?
- * APS-C
  * no articulated display
  * CF card!!!
 
+PowerShot G12
+-------------
+
+for comparison...
+
+stats:
+
+ * ~600$?
+ * F2.8-4.5 / 28-140mm
+ * ISO 3200, grainy
+ * 10Mpx
+ * 1/1.7" sensor
+ * fully articulated
+ * 1fps
+
 Sony
 ====
 
@@ -295,6 +315,111 @@ con:
  * can be pricey
  * yet another lens lock-in
 
+Fujifilm
+========
+
+Pro:
+
+ * not Canon *and* not Nikon
+ * cheaper lenses
+
+Con:
+
+ * may not be used to their system (although it looks quite intuitive)
+ * firmware not hackable?
+
+X100F
+-----
+
+stats:
+
+ * 1800$ lozeau
+ * similar stats to X-T2, except:
+ * F2-16, 35mm
+ * USB-2
+ * 390g
+ * 8fps
+
+pro:
+
+ * really cute and small
+ * good but minimal controls
+ * nostalgia
+
+cons:
+
+ * fixed lens (but that's also awesome in a way)
+ * single card storage
+ * not sealed, but may be hacked to do so?
+ * expensive
+ * no tilting screen
+
+X-T2
+----
+
+stats:
+
+ * APS-C
+ * 1800$ lozeau, juste boitier - 2100$ avec 28-55mm F2.8-4.6 (de memoire)
+ * 24 Mpx
+ * ISO 12800
+ * 325 focus points (!)
+ * USB charging
+ * dual SD-card
+ * USB-3
+ * wifi
+ * 507g
+ * 14fps
+ * 4k video, mike socket and audio monitor
+
+pro:
+
+ * gorgeous controls on top: classic time/iso controls with F-stop
+   control on the lens
+ * awesome grip and feel
+ * sealed
+ * bulb setting!
+ * interesting two-exposure-mode can be used to do custom bracketing
+   or arbitrary collages
+ * fast startup?
+
+cons:
+
+ * expensive
+ * no builtin-flash (but small hot-shoe flash included)
+ * lower battery life than G12
+ * AF may not work on all lenses? issues in low light?
+
+X-T1
+----
+
+ * 2014
+ * 16Mpx
+ * APS-C
+ * ISO 6400
+ * tilting screen
+ * 8fps
+ * 1080p video, stereo mike
+ * USB-2
+ * wireless
+ * 440g
+ * ~1000$ - amazon.ca

(Diff truncated)
include gemini to potential hardware
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index 6073709c..365957da 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -26,6 +26,13 @@ Extras:
 Modèles
 =======
 
+Gemini
+------
+
+Tiny - closer to a phone:
+
+https://www.indiegogo.com/projects/gemini-pda-android-linux-keyboard-mobile-device-phone#/
+
 Mnt reform
 ----------
 
diff --git a/hardware/phone.mdwn b/hardware/phone.mdwn
index d1576ed5..737ad02d 100644
--- a/hardware/phone.mdwn
+++ b/hardware/phone.mdwn
@@ -11,6 +11,11 @@ phones as well:
 Potential phones
 ================
 
+Gemini
+------
+
+See [[laptop#gemini]].
+
 Galaxy S3
 ---------
 

proms notes
diff --git a/blog/prometheus.mdwn b/blog/prometheus.mdwn
new file mode 100644
index 00000000..140482b5
--- /dev/null
+++ b/blog/prometheus.mdwn
@@ -0,0 +1,364 @@
+
+features optimizations for Kubernetes using a new storage engine. It
+seems like the 1.x performance was strained with short-lived
+containers: because the metrics could change very quickly, this led
+to performance issues. The new release boasts hundred-fold I/O
+performance improvements and three-fold improvements in CPU usage.
+
+
+only part of the slides:
+
+https://schd.ws/hosted_files/kccncna17/c4/KubeCon%20P8s%20Salon%20-%20Kubernetes%20Metrics%20Deep%20Dive.pdf
+
+https://kccncna17.sched.com/event/Cs4d/prometheus-salon-hosted-by-frederic-branczyk-coreos-bob-cotton-freshtracksio-goutham-veeramanchaneni-tom-wilkie-kausal
+
+Frederic Branczyk from CoreOS
+
+borgmon is to borg as prom is to k8s
+
+k8s and prom didn't know about each other
+
+cars_total
+
+cars_total{color="white"} <= dimension
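+
+(illustration, not from the talk: a minimal sketch of an instrumented app
+with the official Python client, prometheus_client; the cars_total metric
+and its color label are the example above, the port and values are made up)
+
+    from prometheus_client import Counter, start_http_server
+    import random, time
+
+    # one metric, one dimension: each label value is its own time series
+    cars = Counter('cars_total', 'Cars seen at the observation point', ['color'])
+
+    start_http_server(8000)  # exposes http://localhost:8000/metrics for scraping
+    while True:
+        cars.labels(color=random.choice(['white', 'black'])).inc()
+        time.sleep(1)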
+
+we can observe from multiple locations
+
+over time, we can make queries to make sense of it
+
+congested traffic? sum(rate(cars_total[5m])) < 10
+flowing traffic? sum(rate(cars_total[5m])) > 10
+
+sum because of multiple dimensions
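+
+(for the record, a rough sketch of running that aggregated query against the
+HTTP API, assuming a server on localhost:9090 and the requests library)
+
+    import requests
+
+    # per-second rate of cars over the last 5 minutes, summed over all colors
+    resp = requests.get('http://localhost:9090/api/v1/query',
+                        params={'query': 'sum(rate(cars_total[5m]))'})
+    for result in resp.json()['data']['result']:
+        timestamp, value = result['value']
+        print('cars per second:', value)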
+
+collects and stores data
+
+instrumented applications, e.g. HTTP GET /metrics
+
+every 15s (configurable interval)
+
+T0 = 0
+T1 = 2
+
+
+a target is:
+
+* HTTP://example.com/metrics
+* service discovery:
+  * static targets
+  * DNS
+  * k8s
+  * and more
+
+k8s discv
+
+targets:
+ * pods = our workload. list all pods. great if you're strict about
+   organization. not always the case
+ * nodes
+ * endpoints / services - pods are grouped by service and prom can
+   find this
+ * powerful API to subscribe to events to automatically reconfigure it
+   in realtime
+
+k8s metadata can be added when feeding into prom: e.g. add service/pod
+information to have different severity
+
+alerting rules
+
+alert rules are loaded by the server. every interval they are evaluated and alerts fired
+
+there's an alert manager that receives all alerts from multiple
+servers. deduplicates multiple entries and regroups them (so you don't
+get an announcement every 15s) and sends the alerts. has a gossip protocol
+to be HA without other components
+
+name is a meta-label
+unique combination of labels uniquely identifies a time series
+
+if our app creates a new label, it will explode the cardinality of
+metrics. don't put variable data in label values. like creating a new table
+for each query or user
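+
+(concretely, a sketch of my own, not from the talk: label values should come
+from a small fixed set; anything unbounded like user IDs belongs elsewhere)
+
+    from prometheus_client import Counter
+
+    # good: bounded label values, only a handful of time series
+    hits = Counter('http_requests_total', 'HTTP requests', ['method', 'code'])
+    hits.labels(method='GET', code='200').inc()
+
+    # bad: one time series per user, cardinality explodes
+    # hits = Counter('http_requests_total', 'HTTP requests', ['user_id'])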
+
+metric types
+
+* counter = requests
+* gauge = mem usage or temperature
+* histogram = latency distribution... ..?
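+
+(sketch of the three types with the Python client to make the histogram case
+concrete; metric names are invented, buckets are the client's defaults)
+
+    from prometheus_client import Counter, Gauge, Histogram
+
+    served = Counter('requests_total', 'Requests served')           # only goes up
+    temp = Gauge('room_temperature_celsius', 'Current temperature') # up and down
+    latency = Histogram('request_latency_seconds', 'Request latency')
+
+    served.inc()
+    temp.set(21.5)
+    with latency.time():   # observations land in buckets, quantiles are
+        pass               # computed later at query time (histogram_quantile)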
+
+prom knows about counter resets (at collect time) and will keep a
+straight line instead of resetting. well thought through
+
+photo with the guy's name.
+
+? alerting? rules are coming from first to last?
+
+alerting rules were just done sequentially
+
+in 2.0 there are rule groups where the *groups* get executed
+sequentially. keep configuration simple... to dynamically change
+rules you change the config files and reload the daemon, which can
+also be done through an HTTP endpoint.
+
+reloading is close to free (SIGHUP or HTTP)
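+
+(e.g., something like this; sketch only, and in 2.0 the lifecycle endpoint
+has to be enabled with --web.enable-lifecycle)
+
+    import requests
+
+    # ask the server to re-read its configuration and rule files
+    resp = requests.post('http://localhost:9090/-/reload')
+    print(resp.status_code)  # 200 if the reload worked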
+
+? more robust way to deal with resets. counter at 5, reset, counter at
+6, you don't notice. we know when the target goes away, so we can
+handle that.
+
+prometheus is not designed to be clustered, the idea is that you have
+two instances that are redundant.
+
+? at what scale should i worry. a single instance can handle thousands
+of targets. 120kHz is fine. beyond that, the problem is often
+network. prom can also be sharded.
+
+bob cotton
+
+cofounder of freshtracks.io
+
+sources of metrics
+
+we need a shim for system-level metrics. node_exporter is one such
+shim. can send standard host metrics (load, cpu, mem, disk, net, etc)
+
+~1000 metrics
+
+containers
+
+kubelet also exports metrics through cAdvisor, a project from google that
+allows core-level metrics to be exposed to prom; it is embedded in the kubelet
+
+cpu user/system, amount of time that CPU was throttled so you know when
+you hit resource limits.
+
+FS read/writes + limits
+mem usage + limits
+packets in/out/drop
+
+
+then k8s API server itself
+
+standard:
+
+ * gRPC request rates and latency
+ * etcd helper cache hit rate
+ * golang GC/Memory/Threads, from the standard golang process stats
+ * general process stats (e.g. file descriptor limits)
+ 
+etcd metrics, master of all truth
+
+3-node cluster. one is a master, if not you have a bad time. leader
+existence and change rate
+
+proposals committed/applied/pending/failed - the actual work coming
+in, will show possible slow disk issues, which means backlog because
+it's not "truth" yet
+
+prom written in go
+
+derived metrics
+
+API server can tell you lots of things. but it doesn't give you counts
+of pods, nodes, etc; you need to use the API for that. so
+there's kube-state-metrics for that:
+cronjob, job, node, pod, service, etc.
+
+you can track resource limits as well...
+
+you can detect container flapping
+
+brings in the labels from the k8s cluster...
+
+sources:
+
+ * node via `node_exporter` - installed as a `daemon_set` in k8s
+ * container metrics by kubelet
+ * k8s API server
+ * etcd stats
+ * kube-state-metrics, third party you may want to install
+
+most prom packaging handles installing all of this for you
+
+new metrics server replaces heapster
+
+standard API, versioned and authenticated like other APIs... 
+
+k8s API aggregation to extend the API with new commands. the metrics
+server is one of the first things to use this. turned on by default in
+1.8, in beta.
+
+used by the scheduler
+
+the prom server doesn't talk to the metrics server, the metrics server
+is just internal

(Diff truncated)
more camera notes
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index b4001bae..6f0486db 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -10,30 +10,32 @@ Absolute requirements
 
  * interchangeable lenses
  * builtin flash
- * top screen
  * fast startup/shutdown
  * RAW
  * 3200+ ISO
- * casing max 1000$
+ * case max 1000$
+ * APS-C or larger
 
 Nice to have
 ------------
 
+ * USB charging
+ * top screen
  * cursor or pad? not sure
  * 6400+ ISO
- * 2-3 FPS
+ * 2-3 FPS+
  * articulated display
  * casing max 650$, 1000$ total
  * easy access live view
  * SD card support
  * timelapse mode (intervalometer)
+ * sealed body
 
 Candy on top
 ------------
 
  * reasonable video features (e.g. be able to adjust settings while filming)
  * exposure bracketing
- * sealed body
  * full frame
  * free or cheap (~200-300$)
  * compatible with my current remote
@@ -153,6 +155,26 @@ Cons:
 
  * 1000$ for the box at lozeau
 
+d7500: 1500 lozeau
+d7200: 1000$ lozeau: https://lozeau.com/produits/fr/photo/appareils-reflex/nikon/nikon/boitier-nikon-d7200-p24089c74c75c76/
+
+D750
+----
+
+still recommended option by dpreview:
+
+https://www.dpreview.com/reviews/2017-buying-guide-best-cameras-under-2000/2
+
+pro:
+
+ * articulated display
+ * fullframe
+ * 2 sd slots
+
+con:
+
+ * expensive (1800CAD lozeau 2017-12-7)
+
 Lentilles
 ---------
 
@@ -259,6 +281,20 @@ con:
  * no articulated display
  * CF card!!!
 
+Sony
+====
+
+Pros:
+
+ * really interesting line up of mirrorless cameras that start to
+   rival with traditional SLRs
+
+con:
+
+ * controls seem ackward until the a7r
+ * can be pricey
+ * yet another lens lock-in
+
 Inventaire
 ==========
 
@@ -310,6 +346,8 @@ Voir aussi:
  * [Another](http://rick_oleson.tripod.com/index-99.html)
  * [Wikipedia](https://en.wikipedia.org/wiki/Lens_mount)
 
+https://www.dpreview.com/articles/9162056837/digital-camera-lens-buying-guide
+
 Flash
 -----
 

Added a comment: D8
diff --git a/blog/2017-11-30-free-software-activities-november-2017/comment_2_b01a581820a083d4d2318310a0560360._comment b/blog/2017-11-30-free-software-activities-november-2017/comment_2_b01a581820a083d4d2318310a0560360._comment
new file mode 100644
index 00000000..338dd2e9
--- /dev/null
+++ b/blog/2017-11-30-free-software-activities-november-2017/comment_2_b01a581820a083d4d2318310a0560360._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ ip="173.246.7.196"
+ claimedauthor="LeLutin"
+ url="https://lelutin.ca"
+ subject="D8"
+ date="2017-12-04T02:22:31Z"
+ content="""
+Drupal 8 is clearly not oriented towards giving a publishing tool to the community anymore. All development since D8 is meant to make big projects possible.
+
+Also D8 breaks its own API on minor releases.
+
+So they've only exacerbated what was already problematic: maintaining a website that uses D8 costs way more money than previous versions of Drupal already did.
+
+So there's no surprise seeing that most of its community is not following anymore. Most ppl using Drupal used to be from backgrounds different than big corporations.
+"""]]

Added a comment: OpenPGP keys
diff --git a/blog/2017-10-16-strategies-offline-pgp-key-storage/comment_1_087cf0343053b416eb71f547fcbecd94._comment b/blog/2017-10-16-strategies-offline-pgp-key-storage/comment_1_087cf0343053b416eb71f547fcbecd94._comment
new file mode 100644
index 00000000..6361cd35
--- /dev/null
+++ b/blog/2017-10-16-strategies-offline-pgp-key-storage/comment_1_087cf0343053b416eb71f547fcbecd94._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="189.48.156.245"
+ claimedauthor="John Bras"
+ url="braselectron.com"
+ subject="OpenPGP keys"
+ date="2017-12-03T19:47:30Z"
+ content="""
+Thanks for sharing. OpenPGP keys are a difficult subject for GNU users in general.  I personally am working on this subject as much as I can to better understand all the uses and needs on a daily basis.
+
+For instance, what happens when your HDD crashes or you need to reinstall your system or share your protected docs with different devices (i.e. desktop, laptop, tablet, etc.)?
+
+How can you relate OpenPGP keys to digital certificates which you need to pay an agent to validate; why can't we have an FSF certificate accepted worldwide?
+"""]]

Added a comment: f3
diff --git a/blog/2017-11-30-free-software-activities-november-2017/comment_1_2d5b7284bc8c89140fed92f387c773a8._comment b/blog/2017-11-30-free-software-activities-november-2017/comment_1_2d5b7284bc8c89140fed92f387c773a8._comment
new file mode 100644
index 00000000..963230a2
--- /dev/null
+++ b/blog/2017-11-30-free-software-activities-november-2017/comment_1_2d5b7284bc8c89140fed92f387c773a8._comment
@@ -0,0 +1,14 @@
+[[!comment format=rst
+ ip="158.181.87.43"
+ subject="f3"
+ date="2017-12-03T15:35:35Z"
+ content="""
+Maybe add a note to: 
+
+    make experimental
+
+for:
+
+* f3probe
+* f3fix
+"""]]

add disaster recovery plan
diff --git a/services/backup.mdwn b/services/backup.mdwn
index a1f4332a..86141ba8 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -198,3 +198,40 @@ and is aimed at technical users familiar with the commandline.
     and can be found with the `blkid` command as well."""]]
 
  12. reboot and pray
+
+Disaster recovery
+-----------------
+
+backup plan if all else fails
+
+ 1. GTFO with the backup drives, and at least the password manager
+    (laptop/workstation rip out)
+
+ 2. confirm Gandi, park domains on a "Gandi Site" (free, one page)
+ 
+ 3. setup one VPS to restore DNS service, secondary at Gandi
+ 
+ 4. setup second VPS to restore tier-1 services
+ 
+ 5. restore other services as necessary
+
+### Tier 1
+
+DNS: setup 3 primary zones and glue records.
+
+Email: install dovecot + postfix, setup aliases and delivery. Restore
+mailboxes.
+
+Web: install apache2 + restore wiki.
+
+### VPS providers
+
+ * Koumbit: 20$/mth, friends
+
+ * OVH: 4.50$/mth, "local" 100mbps unlimited,
+   [KVM 2.4GHz, 2GB RAM 10GB SSD](https://www.ovh.com/us/vps/vps-ssd.xml)
+
+ * [Prgmr](https://prgmr.com/aup.html): 5$/mth, Xen, no bullshit, ssh
+   console [1.25 GiB RAM, 15 GiB Disk](https://billing.prgmr.com/index.php/order/main/packages/xen/?group_id=10)
+
+ * Gandi: 4$/mth 256MiB RAM, 3GB disk

removed
diff --git a/blog/2017-11-30-free-software-activities-november-2017/comment_1_29ae6ef8baaee2ec861de1885f28a14d._comment b/blog/2017-11-30-free-software-activities-november-2017/comment_1_29ae6ef8baaee2ec861de1885f28a14d._comment
deleted file mode 100644
index cd0fd603..00000000
--- a/blog/2017-11-30-free-software-activities-november-2017/comment_1_29ae6ef8baaee2ec861de1885f28a14d._comment
+++ /dev/null
@@ -1,13 +0,0 @@
-[[!comment format=mdwn
- ip="194.44.209.147"
- claimedauthor="Tina"
- subject="LMS"
- date="2017-12-01T13:38:20Z"
- content="""
-In my business, I actively use cloud software https://voiptimecloud.com/power-dialer.
-This solution helps to make the business even more successful.
-My contact center is the best thanks to the fact that it is possible to handle online leads. LMS (lead-management system) is actively used as a crm system
-It is possible to create projects.
-Cloud solution allows your business to fly in the clouds)
-
-"""]]

Added a comment: LMS
diff --git a/blog/2017-11-30-free-software-activities-november-2017/comment_1_29ae6ef8baaee2ec861de1885f28a14d._comment b/blog/2017-11-30-free-software-activities-november-2017/comment_1_29ae6ef8baaee2ec861de1885f28a14d._comment
new file mode 100644
index 00000000..cd0fd603
--- /dev/null
+++ b/blog/2017-11-30-free-software-activities-november-2017/comment_1_29ae6ef8baaee2ec861de1885f28a14d._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ ip="194.44.209.147"
+ claimedauthor="Tina"
+ subject="LMS"
+ date="2017-12-01T13:38:20Z"
+ content="""
+In my business, I actively use cloud software https://voiptimecloud.com/power-dialer.
+This solution helps to make the business even more successful.
+My contact center is the best thanks to the fact that it is possible to handle online leads. LMS (lead-management system) is actively used as a crm system
+It is possible to create projects.
+Cloud solution allows your business to fly in the clouds)
+
+"""]]

publish monthly report
diff --git a/blog/2017-11-30-free-software-activities-november-2017.mdwn b/blog/2017-11-30-free-software-activities-november-2017.mdwn
new file mode 100644
index 00000000..cd6b016a
--- /dev/null
+++ b/blog/2017-11-30-free-software-activities-november-2017.mdwn
@@ -0,0 +1,369 @@
+[[!meta title="November 2017 report: LTS, standard disclosure, Monkeysphere in
+Python, flash fraud and Goodbye Drupal"]]
+
+[[!toc levels=2]]
+
+Debian Long Term Support (LTS)
+==============================
+
+This is my monthly [Debian LTS][] report. I didn't do as much as I
+wanted this month so a bunch of hours are carried over to next month. I
+got frustrated by two difficult packages: exiv2 and libreoffice.
+
+Exiv
+----
+
+For Exiv2 I first [reported the issues upstream](https://github.com/Exiv2/exiv2/issues/174) as requested in
+the [original CVE assignment](http://www.openwall.com/lists/oss-security/2017/06/30/1). Then I went to see if I could
+reproduce the issue. Valgrind didn't find anything, so I went on to
+test the [new ASAN instructions](https://wiki.debian.org/LTS/Development/Asan) that tell us how to build for ASAN
+in LTS. Turns out that
+I [couldn't make that work either](https://lists.debian.org/87shd4u61v.fsf@curie.anarc.at). Fortunately, Roberto was able
+to [build the package properly](https://lists.debian.org/20171128013820.4dnnjypazyeeganx@connexer.com) and confirmed the wheezy version as
+non-vulnerable, so I marked the three CVEs as not-affected and moved
+on.
+
+Libreoffice
+-----------
+
+Next up was LibreOffice. I started backporting the patches to wheezy
+which was a little difficult because any error in the backport takes
+*hours* to find because LibreOffice is so big. The monster takes about
+4 hours to build on my i3-6100U processor - I can't imagine how long
+that would take on a slower machine. Still, I managed to
+get [patches](https://lists.debian.org/87bmjrsjel.fsf@curie.anarc.at) that *mostly* builds. I say *mostly* because while
+most of the code builds, the tests fail to build. Not only do they
+fail to build, but they even *segfault* the linker. At that point, I
+had already spent too many hours working on this frustrating loop of
+"work/build-wait/crash" that I gave up.
+
+I also worked on reproducing a supposed regression associated with the
+last security update. Somehow, I [couldn't reproduce](https://lists.debian.org/8760a1uhbl.fsf@curie.anarc.at) either - the
+description of the regression was very limited and all suggested
+approaches failed to trigger the problems described.
+
+[Debian LTS]: https://www.freexian.com/services/debian-lts.html
+
+OptiPNG
+-------
+
+Finally, a little candy: an easy backport of a simple 2-line patch for
+a simple program, OptiPNG that, ironically, had a vulnerability
+([[!debcve CVE-2017-16938]]) in GIF parsing. I could do hundreds of
+those breezy updates, they are fun and simple, and easy to test. This
+resulted in the trivial [DLA-1196-1](https://lists.debian.org/20171130191701.dbm3rhj3fys7wcim@curie.anarc.at).
+
+Miscellaneous
+-------------
+
+LibreOffice stretched the limits of my development environment. I had
+to figure out how to deal with out of space conditions in the build
+tree (`/build`), something that is really not obvious in [sbuild](https://wiki.debian.org/sbuild). I
+ended up documenting that in a new [troubleshooting section](https://wiki.debian.org/sbuild#Missing_space_in_.2Fbuild) in the
+wiki.
+
+Other free software work
+========================
+
+feed2exec
+---------
+
+I pushed forward with the development of my programmable feed
+reader, [feed2exec](https://feed2exec.readthedocs.io). Last month I mentioned I released the 0.6.0
+beta: since then 4 more releases were published, and we are now at the
+[0.10.0 beta](https://gitlab.com/anarcat/feed2exec/tags/0.10.0). This added a bunch of new features:
+
+ * `wayback` plugin to save feed items to
+   the [Wayback Machine on archive.org](http://web.archive.org/)
+ * `archive` plugin to save feed items to the local filesystem
+ * `transmission` plugin to save RSS Torrent feeds to
+   the [Transmission](https://transmissionbt.com/) torrent client
+ * vast expansion of the documentation, now hosted
+   on [ReadTheDocs](https://readthedocs.org/). The design was detailed with a tour of the
+   source code and detailed plugin writing instructions were added to
+   the documentation, also shipped as a [feed2exec-plugins](https://manpages.debian.org/feed2exec-plugins)
+   manpage.
+ * major cleanup and refactoring of the codebase, including standard
+   locations for the configuration files, which moved
+
+The documentation deserves special mention. If you compare
+between [version 0.6](https://feed2exec.readthedocs.io/en/0.6.0/) and the [latest version](https://feed2exec.readthedocs.io/en/latest/) you can see 4 new
+sections:
+
+ * [Plugins](https://feed2exec.readthedocs.io/en/latest/plugins.html) - extensive documentation on plugins use, the design
+   of the plugin system and a full tutorial on how to write new
+   plugins. the tutorial was written while writing the `archive`
+   plugin, which was written as an example plugin just for that
+   purpose and should be readable by novice programmers
+ * [Support](https://feed2exec.readthedocs.io/en/latest/support.html) - a handy guide on how to get technical support for
+   the project, copied over from the [Monkeysign](https://monkeysign.readthedocs.io/en/2.x/support.html) project.
+ * [Code of conduct](https://feed2exec.readthedocs.io/en/latest/code.html) - was originally part of the contribution
+   guide. the idea then was to force people to read the Code when they
+   wanted to contribute, but it wasn't a good idea. The contribution
+   page was overloaded and critical parts were hidden down in the
+   page, after what is essentially boilerplate text. Inversely, the
+   Code was itself *hidden* in the contribution guide. Now it is
+   clearly visible from the top and trolls will see this is an ethical
+   community.
+ * [Contact](https://feed2exec.readthedocs.io/en/latest/contact.html) - another idea from the Monkeysign project. became
+   essential when the security contact was added (see below).
+
+All those changes were backported in the [ecdysis](https://ecdysis.readthedocs.io/en/latest/) template
+documentation and I hope to backport them back into my other projects
+eventually. As part of my documentation work, I also drifted into the
+Sphinx project itself and submitted a [patch to make manpage
+references clickable](https://github.com/sphinx-doc/sphinx/pull/4235) as well.
+
+I now use feed2exec to archive new posts on my website to the Internet
+Archive, which means I have an ad hoc offsite backup of all content I
+post online. I think that's pretty neat. I also leverage
+the [Linkchecker](https://github.com/linkcheck/linkchecker/) program to look for dead links in new articles
+published on the site. This is possible thanks to an Ikiwiki-specific
+filter that extracts links to changed pages from the Recent Changes RSS
+feed.
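+
+The filter itself is specific to my setup, but the general idea can be
+sketched in a few lines of Python: parse the Recent Changes feed, pull
+out the links of the changed pages, and hand them to `linkchecker`. The
+feed URL below is a placeholder and this is not the actual script:
+
+    import subprocess
+    import feedparser
+
+    # placeholder URL for the wiki's Recent Changes RSS feed
+    feed = feedparser.parse('https://example.com/recentchanges/index.rss')
+    changed = [entry.link for entry in feed.entries]
+    if changed:
+        # check the changed pages, including their external links
+        subprocess.run(['linkchecker', '--check-extern'] + changed)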
+
+I'm considering making the `parse` step of the program pluggable. This
+is an idea I have had in mind for a while, but it didn't make sense
+until recently. When I described the project, someone said "oh, that's
+like [IFTTT](https://en.wikipedia.org/wiki/IFTTT)", a tool I wasn't really aware of, which connects
+various "feeds" (Twitter, Facebook, RSS) to each other using
+triggers. The key concept here is that feed2exec could be made to *read*
+from Twitter or other feeds, like IFTTT does, and not just *write* to
+them. This could allow users to bridge social networks by writing to
+a single one and broadcasting to the others.
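+
+To make the idea more concrete, here is a purely speculative sketch of
+what a pluggable `parse` step could look like; nothing below exists in
+feed2exec today, and the names are invented for illustration:
+
+    class Source:
+        """hypothetical interface for a pluggable parse step"""
+        def parse(self, body):
+            """turn a raw payload into a list of feed-item dicts"""
+            raise NotImplementedError
+
+    class TwitterSource(Source):
+        def parse(self, body):
+            # assume body is already a list of {'text': ..., 'url': ...}
+            return [{'title': t['text'], 'link': t['url']} for t in body]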
+
+Unfortunately, this means adding a lot of interface code and I do not
+have a strong use case for this just yet. Furthermore, it may mean
+switching from a "cron" design to a more interactive, interrupt-driven
+design that would run continuously and wake up on certain triggers.
+
+Maybe that could come in a 2.0 release. For now, I'll see to it that
+the current codebase is solid. I'll release a 0.11 release candidate
+shortly, which has seen a major refactoring since 0.10. I again
+welcome beta testers and users to report their issues. I am happy to
+report I got and fixed my [first bug report](https://gitlab.com/anarcat/feed2exec/issues/1) on this project this
+month.
+
+Towards standard security disclosure guidelines
+-----------------------------------------------
+
+Reading the excellent [State of Open Source Security report](https://snyk.io/stateofossecurity/),
+I was struck by a few metrics:
+
+ * 75% of vulnerabilities are not discovered by the maintainer
+
+ * 79.5% of maintainers said that they had no public-facing disclosure
+   policy in place
+
+ * 21% of maintainers who do not have a public disclosure policy have
+   been notified privately about a vulnerability
+
+ * 73% of maintainers who do have a public disclosure policy have been
+   notified privately about a vulnerability
+
+In other words, having a public disclosure policy more than triples
+your chances of being notified of a security vulnerability. I was also
+surprised to find that 4 out of 5 projects do not have such a
+page. Then I realized that *none* of my projects had one either, so I
+decided to fix that and update my [documentation templates](https://ecdysis.readthedocs.io/en/latest/) (the
+infamously named [ecdysis](https://gitlab.com/anarcat/ecdysis) project) to specifically include
+a [section on security issues](https://ecdysis.readthedocs.io/en/latest/contribute.html#security-issues).
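+
+Such a section does not need to be elaborate. A minimal sketch, with a
+placeholder contact address and not the actual ecdysis wording, could
+look like this:
+
+    Security issues
+    ---------------
+
+    To report a security issue privately, please email
+    security@example.com instead of using the public issue tracker,
+    and allow a reasonable delay for a fix before public disclosure.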
+
+I found the [HackerOne disclosure guidelines](https://www.hackerone.com/disclosure-guidelines) to be pretty
+good, except that they require offering a bounty for disclosures. I
+understand it's part of their business model, but I have no such money
+to give out - in fact, I don't even pay *myself* for the work of
+developing the project, so I don't quite see why I would pay for
+disclosures.
+
+I also found that many projects include OpenPGP key fingerprints in
+their contact information. I find that a little silly: project
+documentation is no place to offer OpenPGP key discovery. If security
+researchers cannot find and verify OpenPGP key fingerprints on their
+own, I would be worried about their capabilities. Adding a fingerprint
+or key material is just bound to create outdated documentation when
+maintainers rotate. Instead, I encourage people to use proper key
+discovery mechanisms like the [Web of trust](https://en.wikipedia.org/wiki/Web_of_trust), [WKD](https://wiki.gnupg.org/WKD) or,
+obviously, [TOFU](https://en.wikipedia.org/wiki/Trust_on_first_use), which is basically what publishing a fingerprint
+does anyway.
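+
+For example, with a recent GnuPG, a researcher can fetch a maintainer's
+key over WKD with a single command (the address is a placeholder):
+
+    gpg --auto-key-locate clear,wkd --locate-keys maintainer@example.com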
+
+Git-Mediawiki
+-------------
+
+After being granted access to the [Git-Mediawiki](https://github.com/Git-Mediawiki/Git-Mediawiki/) project last
+month, I got to work. I fought hard with Travis, Git, Perl and
+MediaWiki to [add continuous integration](https://github.com/Git-Mediawiki/Git-Mediawiki/pull/50) in the

(diff truncated)
fix broken links
diff --git a/blog/2017-11-14-ROCA-return-of-the-coppersmith-attack.mdwn b/blog/2017-11-14-ROCA-return-of-the-coppersmith-attack.mdwn
index df96d5f6..fdef87f0 100644
--- a/blog/2017-11-14-ROCA-return-of-the-coppersmith-attack.mdwn
+++ b/blog/2017-11-14-ROCA-return-of-the-coppersmith-attack.mdwn
@@ -70,7 +70,7 @@ media](https://en.wikipedia.org/wiki/Eesti_Rahvusringh%C3%A4%C3%A4ling)
 (ERR). Indeed, estimates show that cracking a
 single key would cost €80,000 in cloud computing costs. Since then,
 however, the vulnerability was also
-[reviewed](https://blog.cr.yp.to/20171105-infineon.html%20) by
+[reviewed](https://blog.cr.yp.to/20171105-infineon.html) by
 cryptographers Daniel J. Bernstein and Tanja Lange, who found that it
 was possible to improve the performance of the attack: they stopped
 after a  25% improvement, but suspect even further
diff --git a/services/backup.mdwn b/services/backup.mdwn
index 162376a9..a1f4332a 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -11,7 +11,7 @@ hand, monthly.
 Workstation and laptop backups are more irregular, on a separate
 drive.
 
-Most backups are performed with [borg](borgbackup.rtfd.org/) but some offsite backups are
+Most backups are performed with [borg](http://borgbackup.rtfd.org/) but some offsite backups are
 still done with [bup](https://bup.github.io/) for historical reasons but may be migrated to
 another storage system.
 

Archival link:

The above link creates a machine-readable RSS feed that can be used to easily archive new changes to the site. It is used by internal scripts to do sanity checks on new entries in the wiki.
