Recent changes to this wiki. Not to be confused with my history.

Complete source to the wiki is available on gitweb or by cloning this site.

removed
diff --git a/blog/2015-09-09-bootstrap/comment_11_f6cba97c3e11cb732fc66381853a897c._comment b/blog/2015-09-09-bootstrap/comment_11_f6cba97c3e11cb732fc66381853a897c._comment
deleted file mode 100644
index 58f617ba..00000000
--- a/blog/2015-09-09-bootstrap/comment_11_f6cba97c3e11cb732fc66381853a897c._comment
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!comment format=mdwn
- ip="36.250.175.98"
- claimedauthor="moncler jackets green quarters for sale"
- url="http://www.jlalumieretransport.com/monclersale_en/moncler-jackets-green-quarters-for-sale"
- subject="moncler jackets green quarters for sale"
- date="2018-10-12T18:07:52Z"
- content="""
-<a href=\"http://www.subseacontrolfluid.com/coachonline_en/vintage-coach-drawstring-bag-for-sale\">vintage coach drawstring bag for sale</a><a href=\"http://www.subseacontrolfluid.com/coachoutlet_en/coach-signature-city-tote-bag-dimensions\">coach signature city tote bag dimensions</a><a href=\"http://www.subseacontrolfluid.com/coachsale_en/coach-satchels-bags-va-vanilla\">coach satchels bags va vanilla</a><a href=\"http://www.subseacontrolfluid.com/coachwholesale_en/coach-waverly-bags-under-eyes-instantly\">coach waverly bags under eyes instantly</a>
- <a href=\"http://www.jlalumieretransport.com/monclersale_en/moncler-jackets-green-quarters-for-sale\" >moncler jackets green quarters for sale</a> [url=http://www.jlalumieretransport.com/monclersale_en/moncler-jackets-green-quarters-for-sale]moncler jackets green quarters for sale[/url]
-"""]]

Added a comment: moncler jackets green quarters for sale
diff --git a/blog/2015-09-09-bootstrap/comment_11_f6cba97c3e11cb732fc66381853a897c._comment b/blog/2015-09-09-bootstrap/comment_11_f6cba97c3e11cb732fc66381853a897c._comment
new file mode 100644
index 00000000..58f617ba
--- /dev/null
+++ b/blog/2015-09-09-bootstrap/comment_11_f6cba97c3e11cb732fc66381853a897c._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="36.250.175.98"
+ claimedauthor="moncler jackets green quarters for sale"
+ url="http://www.jlalumieretransport.com/monclersale_en/moncler-jackets-green-quarters-for-sale"
+ subject="moncler jackets green quarters for sale"
+ date="2018-10-12T18:07:52Z"
+ content="""
+<a href=\"http://www.subseacontrolfluid.com/coachonline_en/vintage-coach-drawstring-bag-for-sale\">vintage coach drawstring bag for sale</a><a href=\"http://www.subseacontrolfluid.com/coachoutlet_en/coach-signature-city-tote-bag-dimensions\">coach signature city tote bag dimensions</a><a href=\"http://www.subseacontrolfluid.com/coachsale_en/coach-satchels-bags-va-vanilla\">coach satchels bags va vanilla</a><a href=\"http://www.subseacontrolfluid.com/coachwholesale_en/coach-waverly-bags-under-eyes-instantly\">coach waverly bags under eyes instantly</a>
+ <a href=\"http://www.jlalumieretransport.com/monclersale_en/moncler-jackets-green-quarters-for-sale\" >moncler jackets green quarters for sale</a> [url=http://www.jlalumieretransport.com/monclersale_en/moncler-jackets-green-quarters-for-sale]moncler jackets green quarters for sale[/url]
+"""]]

update vero specs, issues with NuC of course
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index 875ed2de..6999fe74 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -83,16 +83,22 @@ expensive. A replacement drive for Marcos could cost around 200$CAD ([2.5"
 190$CAD. So a replacement would be at least 800$CAD, probably around
 1000$ with memory.
 
+Possible issues:
+
+ * limited disk space
+ * embedded SoC from Intel, with *hardcore* proprietary onboard
+   software (Intel ME and crazy BIOS)
+
 ## Vero
 
 Another target would be a home-cinema adapter like the [Vero](https://osmc.tv/vero/), which I
 have heard good things about:
 
- * CPU: Quad Core 1.6GHz 64-bit (ARM?)
+ * SoC: AMLogic S905D Quad Core 1.6GHz 64-bit (ARMv8 `aarch64`)
  * Memory: 2GB DDR3 ram
  * Network:
    * Gigabit ethernet
-   * Wifi a/g
+   * Wifi a/g (realtek 8111)
    * Bluetooth 4
    * IR/RF receiver
  * Storage:
@@ -108,7 +114,10 @@ Possible issues:
  * [stays warm to touch](https://discourse.osmc.tv/t/heat-issues/74897) (60-70C)
  * [possible shipping delays](https://discourse.osmc.tv/t/vero-4k-shipment-details-08-08-18/74112)
  * [only USB 2.0](https://discourse.osmc.tv/t/vero-4k-still-usb-2-0/73800/68)
- * specs are unclear, "openness" debatable
+ * [specs are unclear](https://discourse.osmc.tv/t/full-specifications/75617)
+ * probably not an open design
+ * does not run Linux mainline, needs Debian derivative (OSMC),
+   unsupported kernel (3.14) with proprietary blobs
  * the vero could not possibly take all the load, another server would
    be necessary, if only for storage
 

marcos phase out research
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index 2bff855d..875ed2de 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -65,3 +65,77 @@ upgrading the `xserver-xorg-video-intel`, we'll see how it goes.
 
 Update: still deadlocks. december 2017, tried upgrading the kernel to
 backports.
+
+# Possible phase out
+
+marcos came online in early 2011, so it's heading towards its 8th
+year of service at the time of writing, which stretches the usual
+5-year depreciation cycle for computer hardware. So I started looking
+for replacements.
+
+## NUC
+
+An easy first target is to buy another NUC like the [[curie
+workstation|hardware/curie]] since it's cheap and works well. The
+downside is that it requires laptop (2.5") hard disks which are more
+expensive. A replacement drive for Marcos could cost around 200$CAD ([2.5"
+4TB Seagate at newegg.ca](https://www.newegg.ca/Product/Product.aspx?Item=N82E16822179105)). The 500GB M.2 drives are still around
+190$CAD. So a replacement would be at least 800$CAD, probably around
+1000$ with memory.
+
+## Vero
+
+Another target would be a home-cinema adapter like the [Vero](https://osmc.tv/vero/), which I
+have heard good things about:
+
+ * CPU: Quad Core 1.6GHz 64-bit (ARM?)
+ * Memory: 2GB DDR3 ram
+ * Network:
+   * Gigabit ethernet
+   * Wifi a/g
+   * Bluetooth 4
+   * IR/RF receiver
+ * Storage:
+   * 16GB microSD (UHS-1 SDR at 104MHz, possibly up to 128GB, [source](https://discourse.osmc.tv/t/which-sd-cards-to-buy/39905/))
+ * Connectivity:
+   * 2xUSB 2.0
+   * HDMI-2
+ * Power: 5V, 2A (10W? [source](https://discourse.osmc.tv/t/vero-4k-power-supply-problems/37071/5))
+ * 9 x 9 x 2cm / 3.5 x 3.5 x 0.75 inches, 140 grams / 5 oz
+
+Possible issues:
+
+ * [stays warm to touch](https://discourse.osmc.tv/t/heat-issues/74897) (60-70C)
+ * [possible shipping delays](https://discourse.osmc.tv/t/vero-4k-shipment-details-08-08-18/74112)
+ * [only USB 2.0](https://discourse.osmc.tv/t/vero-4k-still-usb-2-0/73800/68)
+ * specs are unclear, "openness" debatable
+ * the vero could not possibly take all the load, another server would
+   be necessary, if only for storage
+
+## GnuBee
+
+A possible solution is to shift storage to a NAS like the
+[GnuBee](http://gnubee.org/). This would relieve each device of the burden of providing
+large amounts of storage and, considering gigabit is wired everywhere
+now, we can use (abuse?) the local network and store files on the
+NAS. Unfortunately, the protocols still suck: we might be stuck on SSH
+or some similarly nasty interface for security.
+
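+As a sketch of what that SSH-based storage could look like (assuming a
+host named `gnubee` exporting `/srv/files`, and `sshfs` installed on
+the client):
+
+    # mount the remote directory over SSH, then use it like any filesystem
+    mkdir -p /mnt/files
+    sshfs gnubee:/srv/files /mnt/files
+    # unmount when done
+    fusermount -u /mnt/files
+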
+Specs:
+
+ * SoC: MediaTek MT7621A, dual core dual thread, 880MHz
+ * Memory: 512MB DDR3, soldered
+ * Network:
+   * 2 x Gigabit ethernet
+ * Storage:
+   * microSD <= 64GB
+   * 6x2.5" SATA (3.5" design also available)
+ * Connectivity:
+   * 1 x USB 3.0
+   * 2 x USB 2.0
+   * 3-pin J1 serial jack or audio port
+ * 11.64W board, power adapter: 12VDC @ 3A
+ * all firmware and code FLOSS
+ * 223$USD
+
+[LWN.net review](https://lwn.net/Articles/743609/)

create spec page for curie
diff --git a/hardware/curie.mdwn b/hardware/curie.mdwn
new file mode 100644
index 00000000..ed9855ed
--- /dev/null
+++ b/hardware/curie.mdwn
@@ -0,0 +1,31 @@
+Curie is my workstation, named after [Marie Curie](https://en.wikipedia.org/wiki/Marie_Curie). I bought it
+after a failed search for a [[laptop|hardware/laptop]].
+
+# Specification
+
+* SoC: Intel NUC BOXNUC6I3SYH (i3-6100U, 2x DDR4-2133 SODIMM slots,
+  2.5" bay & M.2 (PCIe x4) slot, Mini-DP, HDMI, 6x USB, SDXC): $380
+* Memory: Crucial 16GB DDR4 2133 SODIMM PC4-17000 CL15 Dual Ranked
+  1.2V Unbuffered 260PIN Memory: $136
+* Network:
+  * Gigabit ethernet I219-V
+  * Wifi
+  * Bluetooth
+* Storage, internal:
+  * M.2 500GB SSD WDC WDS500G1B0B: 190$
+  * SATA 2.5" 1TB WDC WD10JPLX-00M: 76$ (2018-01-28)
+* Storage, external:
+  * 2TB iOmega HD
+* Mouse: Kensington Expert (?)
+* Keyboard: WASD-V2-87-MXN, 87-keys, Cherry brown switches, custom
+  design: 134$USD
+* Total: 749$ (initial price, tx. inc., 2016-12-28)
+* Grand total: ~1000$
+
+I wrote an [installation report](https://wiki.debian.org/InstallingDebianOn/Intel/NUC6i3SYH#preview) for Debian when I set up the
+machine. The machine was originally installed with Debian stretch but
+has been following Debian buster since September 2018.
+
+It works very well and is generally silent unless I manage to max out
+all CPUs for an extended period of time, in which case a small fan
+noise can be heard.
diff --git a/hardware/laptop.mdwn b/hardware/laptop.mdwn
index d7ce530c..a24bb809 100644
--- a/hardware/laptop.mdwn
+++ b/hardware/laptop.mdwn
@@ -1,6 +1,7 @@
 [[!toc levels=2]]
 
-Update: i didn't buy a laptop, but a NUC. See [installation report](https://wiki.debian.org/InstallingDebianOn/Intel/NUC6i3SYH#preview).
+Update: i didn't buy a laptop, but a NUC. See [[hardware/curie]] for
+details.
 
 Besoins
 =======

add marcos specs
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index b5f27250..2bff855d 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
 particular [[services/mail]] and [[services/backup]].
 
 [[!toc levels=3]]
 
+# Specification
+
+(copied from [[hardware/server/marcos/configuration]])
+
+ * motherboard: [ASUS P5G41-M LE/CSM LGA 775 Intel G41 Micro ATX Intel
+   Motherboard](http://www.newegg.com/Product/Product.aspx?Item=N82E16813131399) 65$ newegg ([supported processors](http://commercial.asus.com/product/detail/18))
+ * case: [Antec Black Aluminum / Steel Fusion Remote Black Micro ATX
+   Media Center / HTPC Case](http://www.newegg.com/Product/Product.aspx?Item=N82E16811129054) 150$ newegg, includes "GD01 MX LCD
+   Display/IR Receiver"
+ * CPU: [Intel Pentium Dual-Core E6500 Wolfdale 2.93GHz 2MB L2 Cache
+   LGA 775 65W Dual-Core Processor](http://www.newegg.com/Product/Product.aspx?Item=N82E16819116093) 80$ newegg ([good explanation of the different Intel core models](http://en.wikipedia.org/wiki/Intel_Core))
+ * Memory: 8GB ram (2x4GB DDR2 667MHz, 1.5ns)
+ * Network: AR8114 Gigabit ethernet
+ * Storage, internal:
+   * 500GB Samsung SSD 850
+   * 4TB Seagate HDD ST4000DM000-1F21 5900RPM 3.5"
+   * DVD reader/writer (A DH16A1P, broken)
+ * Storage, external:
+   * 3TB Western Digital "My Book" 1230 USB-3
+ * USB Bluetooth receiver
+ * cost: 350$CAD on 2011-02-26, not counting storage, BT and memory
+
 # Hardware maintenance
 
 See [[hardware/server/marcos/configuration]] for the initial setup notes.

tweak
diff --git a/blog/2018-10-11-cd-archive.mdwn b/blog/2018-10-11-cd-archive.mdwn
index 9e3771d9..e818062b 100644
--- a/blog/2018-10-11-cd-archive.mdwn
+++ b/blog/2018-10-11-cd-archive.mdwn
@@ -43,9 +43,9 @@ special **needs attention** stack was the "to do" pile, and would get
 sorted into the other piles. Each pile was labeled with a sticky
 note and taped together summarily.
 
-This page was printed and attached included in the box, along with a
-post-it linking to the [[blog post|blog/2018-10-11-cd-archive]]
-announcing the work for posterity.
+A post-it pointing to the [[blog post|blog/2018-10-11-cd-archive]] was
+included in the box, along with a printed version of the blog post
+summarizing a snapshot of this inventory.
 
 Here is a summary of what's in the box.
 

copy a part of the report in the blog post, as a snapshot
diff --git a/blog/2018-10-11-cd-archive.mdwn b/blog/2018-10-11-cd-archive.mdwn
index 5e06c208..9e3771d9 100644
--- a/blog/2018-10-11-cd-archive.mdwn
+++ b/blog/2018-10-11-cd-archive.mdwn
@@ -11,4 +11,52 @@ hope to turn this into a more agreeable LWN article eventually.
 I post this here so I can put a note in the box with a permanent URL
 for future reference as well.
 
+Remaining work
+--------------
+
+All the archives created were dumped in the `~/archive` or `~/mp3`
+directories on [[hardware/curie]]. Data needs to be deduplicated,
+replicated, and archived *somewhere* more logical.
+
+Inventory
+---------
+
+I have a bunch of piles:
+
+ * a spindle of disks that consists mostly of TV episodes, movies,
+   distro and Windows images/ghosts. not imported.
+ * a pile of tapes and Zip drives. not imported.
+ * about forty backup disks. not imported.
+ * about five "books" disks of various sorts. ISOs generated. partly
+   integrated in my collection, others failed to import or were in
+   formats that were considered non-recoverable
+ * a bunch of orange seeds piles
+   * Burn Your TV masters and copies
+   * apparently live and unique samples - mostly imported in `mp3`
+   * really old stuff with tons of dupes - partly sorted through, in
+     `jams4`, the rest still in the pile
+ * a pile of unidentified disks
+
+All disks were eventually identified as **trash**, **blanks**,
+**perfect**, **finished**, **defective**, or **not processed**. A
+special **needs attention** stack was the "to do" pile, and would get
+sorted into the other piles. Each pile was labeled with a sticky
+note and taped together summarily.
+
+This page was printed and attached included in the box, along with a
+post-it linking to the [[blog post|blog/2018-10-11-cd-archive]]
+announcing the work for posterity.
+
+Here is a summary of what's in the box.
+
+| Type | Count | Note |
+| ---- | ----- | ---- |
+| **trash** | 13 | non-recoverable. not detected by the Linux kernel at all and no further attempt has been made to recover them. |
+| **blanks** | 3 | never written to, still usable |
+| **perfect** | 28 | successfully archived, without errors |
+| **finished** | 4 | almost perfect, but mixed-mode or multi-session |
+| **defective** | 21 | found to have errors but not considered important enough to re-process |
+| **total** | 69 | |
+| **not processed** | ~100 | visual estimate |
+
 [[!tag debian-planet archive short debian data rescue hardware geek documentation meta]]

note I still need to go through the actual data
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index 33dfea63..3869181b 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -504,6 +504,13 @@ the information available in the "Burn Your TV" multimedia disk:
     Joliet with UCS level 3 found
     Rock Ridge signatures version 1 found
 
+Remaining work
+--------------
+
+All the archives created were dumped in the `~/archive` or `~/mp3`
+directories on [[hardware/curie]]. Data needs to be deduplicated,
+replicated, and archived *somewhere* more logical.
+
 Inventory
 ---------
 

more details on archival process
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index d22b2204..33dfea63 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -524,7 +524,16 @@ I have a bunch of piles:
  * a pile of unidentified disks
 
 all disks were eventually identified as **trash**, **blanks**,
-**finished**, or **needs attention**
+**perfect**, **finished**, **defective**, or **not processed**. A
+special **needs attention** stack was the "to do" pile, and would get
+sorted into the other piles. each pile was labeled with a sticky
+note and taped together summarily.
+
+this page was printed and included in the box, along with a
+post-it linking to the [[blog post|blog/2018-10-11-cd-archive]]
+announcing the work for posterity.
+
+here is a summary of what's in the box.
 
 | Type | Count | Note |
 | ---- | ----- | ---- |
@@ -536,9 +545,6 @@ all disks were eventually identified as **trash**, **blanks**,
 | **total** | 69 | |
 | **not processed** | ~100 | visual estimate |
 
-Whatever is in the box has either been imported successfully or will
-not be imported.
-
 References
 ==========
 

creating tag page tag/data
diff --git a/tag/data.mdwn b/tag/data.mdwn
new file mode 100644
index 00000000..e0b1077a
--- /dev/null
+++ b/tag/data.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged data"]]
+
+[[!inline pages="tagged(data)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/rescue
diff --git a/tag/rescue.mdwn b/tag/rescue.mdwn
new file mode 100644
index 00000000..451d7798
--- /dev/null
+++ b/tag/rescue.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged rescue"]]
+
+[[!inline pages="tagged(rescue)" actions="no" archive="yes"
+feedshow=10]]

archival complete
diff --git a/blog/2018-10-11-cd-archive.mdwn b/blog/2018-10-11-cd-archive.mdwn
new file mode 100644
index 00000000..5e06c208
--- /dev/null
+++ b/blog/2018-10-11-cd-archive.mdwn
@@ -0,0 +1,14 @@
+[[!meta title="Archived a part of my CD collection"]]
+
+After about three days of work, I've finished archiving a part of my
+old CD collection. There were about 200 CDs in a cardboard box that
+were gathering dust. After reading [Jonathan Dowland's post](https://jmtd.net/log/imaging_discs/) about
+CD archival, I got (rightly) worried it would be damaged beyond rescue
+so I sat down and did some [[research|services/archive/rescue]] on the
+rescue mechanisms. My notes are in [[services/archive/rescue]] and I
+hope to turn this into a more agreeable LWN article eventually.
+
+I post this here so I can put a note in the box with a permanent URL
+for future reference as well.
+
+[[!tag debian-planet archive short debian data rescue hardware geek documentation meta]]
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index a42d2aef..d22b2204 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -526,15 +526,15 @@ I have a bunch of piles:
 all disks were eventually identified as **trash**, **blanks**,
 **finished**, or **needs attention**
 
- * **trash**: those disks are non-recoverable. they are not detected
-   by the Linux kernel at all and no further attempt has been made to
-   recover them. They have been thrown away.
- * **blanks**: those disks were actually never written to and are
-   still usable.
- * **finished**: those disks were successfully archived, without
-   errors.
- * **needs attention**: thoses were processed but had errors. they
-   might end up in the **trash** or **finished** pile.
+| Type | Count | Note |
+| ---- | ----- | ---- |
+| **trash** | 13 | non-recoverable. not detected by the Linux kernel at all and no further attempt has been made to recover them. |
+| **blanks** | 3 | never written to, still usable |
+| **perfect** | 28 | successfully archived, without errors |
+| **finished** | 4 | almost perfect, but mixed-mode or multi-session |
+| **defective** | 21 | found to have errors but not considered important enough to re-process |
+| **total** | 69 | |
+| **not processed** | ~100 | visual estimate |
 
 Whatever is in the box has either been imported successfully or will
 not be imported.

add sections
diff --git a/services/hosting.mdwn b/services/hosting.mdwn
index a3d6865b..5756311d 100644
--- a/services/hosting.mdwn
+++ b/services/hosting.mdwn
@@ -66,6 +66,9 @@ completely. This can be done by adding this to
 
 This was discovered in the [libvirt wiki](https://wiki.libvirt.org/page/Networking#Creating_network_initscripts).
 
+Base image build
+----------------
+
 Then we can build an image using [[!man virt-builder]]:
 
     virt-builder debian-9 --size=10G --format qcow2 \
@@ -103,13 +106,19 @@ If the build fails with this error:
     status 1.
 
 It might be that you ran out of space in `/var/tmp`. You can use
-`TMPDIR` to switch to a larger directory. Then the image can be
-created with:
+`TMPDIR` to switch to a larger directory.
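+
+For example (`/srv/tmp` is an arbitrary choice, any roomy directory
+works):
+
+    TMPDIR=/srv/tmp virt-builder debian-9 --size=10G --format qcow2 \
+      -o /var/lib/libvirt/images/stretch-amd64.qcow2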
+
+Virtual machine creation
+------------------------
+
+Then the virtual machine can be created and started with:
 
     virt-install --virt-type kvm --name stretch-amd64 --memory 512 \
       --import --disk path=/var/lib/libvirt/images/stretch-amd64.qcow2 \
       --os-variant=debian9 --network bridge=br0 --noautoconsole
 
+### IP address discovery
+
 The VM will be created with an IP address allocated by the DHCP
 server. The latter logs (or `tcpdump -n -i any -s 1500 '(port 67 or
 port 68)'`) will show the IP address, otherwise the root password will

fix links
diff --git a/services/hosting.mdwn b/services/hosting.mdwn
index c83a41a3..a3d6865b 100644
--- a/services/hosting.mdwn
+++ b/services/hosting.mdwn
@@ -66,7 +66,7 @@ completely. This can be done by adding this to
 
 This was discovered in the [libvirt wiki](https://wiki.libvirt.org/page/Networking#Creating_network_initscripts).
 
-Then we can build an image using [[!debman virt-builder]]:
+Then we can build an image using [[!man virt-builder]]:
 
     virt-builder debian-9 --size=10G --format qcow2 \
       -o /var/lib/libvirt/images/stretch-amd64.qcow2 \
@@ -206,7 +206,7 @@ References
    disks, CPU, I/O
  * [nixCraft guide](https://www.cyberciti.biz/faq/install-kvm-server-debian-linux-9-headless-server/) - which gave me the `virt-builder` shortcut
    (instead of installing Debian from scratch using an ISO!)
- * the [[!debman virsh]] manual page is excellent
+ * the [[!man virsh]] manual page is excellent
 
 Container notes
 ===============

remove archives *AGAIN*
diff --git a/archives/2018.mdwn b/archives/2018.mdwn
deleted file mode 100644
index 38d897b8..00000000
--- a/archives/2018.mdwn
+++ /dev/null
@@ -1 +0,0 @@
-[[!calendar type=year year=2018 pages="*"]]
diff --git a/archives/2018/01.mdwn b/archives/2018/01.mdwn
deleted file mode 100644
index e6113229..00000000
--- a/archives/2018/01.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=01 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(01) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/02.mdwn b/archives/2018/02.mdwn
deleted file mode 100644
index 36ec3e1e..00000000
--- a/archives/2018/02.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=02 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(02) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/03.mdwn b/archives/2018/03.mdwn
deleted file mode 100644
index 150ddf34..00000000
--- a/archives/2018/03.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=03 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(03) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/04.mdwn b/archives/2018/04.mdwn
deleted file mode 100644
index 8c047584..00000000
--- a/archives/2018/04.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=04 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(04) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/05.mdwn b/archives/2018/05.mdwn
deleted file mode 100644
index fc3b77de..00000000
--- a/archives/2018/05.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=05 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(05) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/06.mdwn b/archives/2018/06.mdwn
deleted file mode 100644
index 19c3e9e2..00000000
--- a/archives/2018/06.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=06 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(06) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/07.mdwn b/archives/2018/07.mdwn
deleted file mode 100644
index 3213f220..00000000
--- a/archives/2018/07.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=07 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(07) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/08.mdwn b/archives/2018/08.mdwn
deleted file mode 100644
index 201b2bcf..00000000
--- a/archives/2018/08.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=08 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(08) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/09.mdwn b/archives/2018/09.mdwn
deleted file mode 100644
index 08ddb5d3..00000000
--- a/archives/2018/09.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=09 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(09) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/10.mdwn b/archives/2018/10.mdwn
deleted file mode 100644
index 9efd2c1f..00000000
--- a/archives/2018/10.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=10 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(10) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/11.mdwn b/archives/2018/11.mdwn
deleted file mode 100644
index 1933e3c2..00000000
--- a/archives/2018/11.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=11 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(11) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/12.mdwn b/archives/2018/12.mdwn
deleted file mode 100644
index ff50f841..00000000
--- a/archives/2018/12.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=12 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(12) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]

move container notes outside of software
i didn't write the containers software
diff --git a/services/hosting.mdwn b/services/hosting.mdwn
index 3625ca64..c83a41a3 100644
--- a/services/hosting.mdwn
+++ b/services/hosting.mdwn
@@ -207,3 +207,57 @@ References
  * [nixCraft guide](https://www.cyberciti.biz/faq/install-kvm-server-debian-linux-9-headless-server/) - which gave me the `virt-builder` shortcut
    (instead of installing Debian from scratch using an ISO!)
  * the [[!debman virsh]] manual page is excellent
+
+Container notes
+===============
+
+Those are notes and reminders of how to do "things" with containers,
+regardless of technology. The are not a replacement for the official
+documentation and may only be useful for myself.
+
+Docker
+------
+
+To build an image:
+
+    docker build --tag foo .
+
+That will create an image named "foo" (even if it says `--tag`, that's
+actually the image name, whatever).
+
+To enter a container:
+
+    docker run --tty --interactive foo /bin/bash
+
+To map volumes to containers, which images pre-define certain
+`VOLUME`, first create a volume:
+
+    docker volume create foo
+
+Then use it in the container:
+
+    docker run --volume foo:/srv/foo foo /bin/bash
+
+Volumes are basically directories stored in
+`/var/lib/docker/volumes` which can be copied around normally.
+
+To restart a container on reboot, use `--restart=unless-stopped` or
+`--restart=always`, as [documented](https://docs.docker.com/engine/admin/start-containers-automatically/).
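+
+Putting the volume and the restart policy together, a hedged example
+(reusing the `foo` image and volume from above):
+
+    # run detached, with the named volume, restarting across reboots
+    docker run --detach --restart=unless-stopped \
+      --volume foo:/srv/foo foo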
+
+Rocket
+------
+
+Running docker containers:
+
+    $ sudo rkt run --insecure-options=image --interactive docker://busybox -- /bin/sh
+
+Those get resolved using the [rkt image resolution](https://coreos.com/rkt/docs/latest/devel/distribution-point.html).
+
+Re-running:
+
+    $ sudo rkt run registry-1.docker.io/library/debian:latest --interactive --exec /bin/bash --net=host
+
+Building images requires using the separate [acbuild](https://github.com/containers/build) command which
+builds "standard" ACI images and not docker images. Other tools are
+available like [Packer](https://www.packer.io/), [umoci](https://github.com/openSUSE/umoci) or [Buildah](https://github.com/projectatomic/buildah), although only
+Buildah can use Dockerfiles to build images.
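+
+For the record, the documented `acbuild` workflow looks roughly like
+this (a sketch from memory, names hypothetical, untested here):
+
+    acbuild begin
+    acbuild set-name example.com/hello
+    acbuild copy hello /bin/hello
+    acbuild set-exec /bin/hello
+    acbuild write hello.aci
+    acbuild end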
diff --git a/software/containers.mdwn b/software/containers.mdwn
index dc9b9053..2f1259e6 100644
--- a/software/containers.mdwn
+++ b/software/containers.mdwn
@@ -1,52 +1,2 @@
-[[!meta title="Container notes"]]
-
-Those are notes and reminders of how to do "things" with containers,
-regardless of technology. The are not a replacement for the official
-documentation and may only be useful for myself.
-
-Docker
-------
-
-To build an image:
-
-    docker build --tag foo
-
-That will create an image named "foo" (even if it says `--tag`, that's
-actually the image name, whatever).
-
-To enter a container:
-
-    docker run --tty --interactive foo /bin/bash
-
-To map volumes to containers, which images pre-define certain
-`VOLUME`, first create a volume:
-
-    docker volume create foo
-
-Then use it in the container:
-
-    docker run --volume foo:/srv/foo /bin/bash
-
-Containers are basically a directory stored in
-`/var/lib/docker/volumes` which can be copied around normally.
-
-To restart a container on reboot, use `--restart=unless-stopped` or
-`--restart=always`, as [documented](https://docs.docker.com/engine/admin/start-containers-automatically/).
-
-Rocket
-------
-
-Running docker containers:
-
-    $ sudo rkt run --insecure-options=image --interactive docker://busybox -- /bin/sh
-
-Those get resolved using the [rkt image resolution](https://coreos.com/rkt/docs/latest/devel/distribution-point.html).
-
-Re-running:
-
-    $ sudo rkt run registry-1.docker.io/library/debian:latest --interactive --exec /bin/bash --net=host
-
-Building images requires using the separate [acbuild](https://github.com/containers/build) command which
-builds "standard" ACI images and not docker images. Other tools are
-available like [Packer](https://www.packer.io/), [umoci](https://github.com/openSUSE/umoci) or [Buildah](https://github.com/projectatomic/buildah), although only
-Buildah can use Dockerfiles to build images.
+[[!meta redir="services/hosting"]]
+[[!tag redirection]]

KVM notes
diff --git a/services/hosting.mdwn b/services/hosting.mdwn
new file mode 100644
index 00000000..3625ca64
--- /dev/null
+++ b/services/hosting.mdwn
@@ -0,0 +1,209 @@
+Notes about virtual machine and container hosting.
+
+[[!toc levels=3]]
+
+KVM bootstrap with libvirt
+==========================
+
+I got tired of dealing with [VirtualBox](https://www.virtualbox.org/) and [Vagrant](http://vagrantup.com/): those
+tools work well, but they are too far from datacenter-level hosting
+primitives, which right now converge towards KVM (or maybe Xen, but
+that didn't seem to recover from the Meltdown attacks). VirtualBox was
+also [not shipped in stretch](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=794466) because "upstream doesn't play in a
+really fair mode wrt CVEs" and simply ships updates in bulk.
+
+So I started looking into [KVM](https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine). It seems a common way to get
+started with this without setting up a whole cluster management system
+(e.g. [Ganeti](https://en.wikipedia.org/wiki/Ganeti)) is to use [libvirt](https://libvirt.org/). The instructions here also
+include bridge setup information for Debian stretch since that makes
+it easier to host services inside the virtual machines than a clunky
+NAT setup.
+
+Bridge configuration
+--------------------
+
+Assuming the local Ethernet interface is called `eno1`, the following
+configuration, in `/etc/network/interfaces.d/br0`, enables a bridge on
+the host:
+
+    iface eno1 inet manual
+
+    auto br0
+    iface br0 inet static
+        # really necessary?
+        #hwaddress ether f4:4d:30:66:14:9a
+        address 192.168.0.7
+        netmask 255.255.255.0
+        gateway 192.168.0.1
+        dns-nameservers 8.8.8.8
+
+        bridge_ports eno1
+
+    iface br0 inet6 auto
+
+Then disable other networking interfaces and enable the bridge:
+
+    ifdown eno1
+    service NetworkManager restart
+    ifup br0
+
+Finally, by default Linux bridges disable forwarding through the
+firewall. This works independently of the
+`net.ipv[46].conf.all.forwarding` setting, which should stay turned
+off unless we actually want to route packets for the *network* (as
+opposed to the *guests*). This can be tweaked by talking with
+`iptables` directly:
+
+    iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
+
+Or, preferably, by disabling the firewall on the bridge
+completely. This can be done by adding this to
+`/etc/sysctl.d/br0-nf-disable.conf`:
+
+    net.bridge.bridge-nf-call-ip6tables = 0
+    net.bridge.bridge-nf-call-iptables = 0
+    net.bridge.bridge-nf-call-arptables = 0
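+
+To apply those settings without a reboot (assuming the `br_netfilter`
+module is loaded, since the keys only exist then):
+
+    sysctl -p /etc/sysctl.d/br0-nf-disable.conf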
+
+This was discovered in the [libvirt wiki](https://wiki.libvirt.org/page/Networking#Creating_network_initscripts).
+
+Then we can build an image using [[!debman virt-builder]]:
+
+    virt-builder debian-9 --size=10G --format qcow2 \
+      -o /var/lib/libvirt/images/stretch-amd64.qcow2 \
+      --update \
+      --firstboot-command "dpkg-reconfigure openssh-server" \
+      --network --edit /etc/network/interfaces:s/ens2/ens3/ \
+      --ssh-inject root:string:'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7CY6+aTLlk6epl1+TK6wIaHg1fageEfmKFgn+Yov+2lKFIhNRkcWznQVcyViVmC7iaZkEIei1gP9+0lrsdhewtTBjvkDNxR18aIORJsiH95FFjFIuJ0HQjrM1jOxiXhQZ0xLlnhFkxxa8j9l52HTutpYUU63e3lvY0CBuqh7QtkH3un7iT6EaqMR34yFa2ym35ag8ugMbczBwnTDJYn3qpL8gKuw3JnIp+qdSQb1sGdLcC4JN02E2/IY7iw8lzM9xVab1IgvemCJwS0C/Bt9LsmhCy9AMpaVFaAYjepgdBpSqIMa/8VcoVOrhdJWfIc7fLtt+njN1qojsPmuhsr1n' \
+      --hostname stretch-amd64 --timezone UTC
+
+This is not ideal, as it fetches the base image from `libguestfs.org`,
+in the clear (as opposed to `debian.org` infrastructure):
+ 
+    [   1.9] Downloading: http://libguestfs.org/download/builder/debian-9.xz
+
+There is, fortunately, an OpenPGP signature on those images but it
+might be better to bootstrap using `debootstrap` (although
+bootstrapping using the above might be much faster).
+
+Also notice how we edit the `interfaces` file to fix the interface
+name. For some reason, the interface detected by `virt-builder` isn't
+the same one that shows up when running with `virt-install`, below. The
+[symlink trick](https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/) does not work: adding `--link
+/dev/null:/etc/systemd/network/99-default.link` to the `virt-builder`
+incantation does *not* disable those funky interface names. So we
+simply rewrite the file.
+
+Finally, we inject our SSH key in the root account. The build process
+will show a root password but we won't need it thanks to that.
+
+If the build fails with this error:
+
+    [ 156.9] Resizing (using virt-resize) to expand the disk to 10.0G
+    virt-resize: error: libguestfs error: /usr/bin/supermin exited with error 
+    status 1.
+
+It might be that you ran out of space in `/var/tmp`. You can use
+`TMPDIR` to switch to a larger directory. Then the image can be
+created with:
+
+    virt-install --virt-type kvm --name stretch-amd64 --memory 512 \
+      --import --disk path=/var/lib/libvirt/images/stretch-amd64.qcow2 \
+      --os-variant=debian9 --network bridge=br0 --noautoconsole
+
+The VM will be created with an IP address allocated by the DHCP
+server. The latter logs (or `tcpdump -n -i any -s 1500 '(port 67 or
+port 68)'`) will show the IP address, otherwise the root password will
+be necessary to discover it.
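+
+A sketch for fishing the lease out of the logs, assuming the DHCP
+server is isc-dhcp-server running under systemd:
+
+    journalctl -u isc-dhcp-server | grep DHCPACK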
+
+Alternatively, the IPv6 address of the guest can be deduced from the
+IP address of the host's `vnet0` interface. For example, here's the
+interface as viewed from the host:
+
+    45: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN group default qlen 1000
+        link/ether fe:54:00:1e:c2:48 brd ff:ff:ff:ff:ff:ff
+        inet6 fe80::fc54:ff:fe1e:c248/64 scope link 
+           valid_lft forever preferred_lft forever
+
+And from the guest:
+
+    2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
+        link/ether 52:54:00:1e:c2:48 brd ff:ff:ff:ff:ff:ff
+        inet 192.168.0.216/24 brd 192.168.0.255 scope global ens3
+           valid_lft forever preferred_lft forever
+        inet6 fd05:5f2d:569f:0:5054:ff:fe1e:c248/64 scope global mngtmpaddr dynamic 
+           valid_lft 7054sec preferred_lft 1654sec
+        inet6 2607:f2c0:f00f:8f00:5054:ff:fe1e:c248/64 scope global mngtmpaddr dynamic 
+           valid_lft 7054sec preferred_lft 1654sec
+        inet6 fe80::5054:ff:fe1e:c248/64 scope link 
+           valid_lft forever preferred_lft forever
+
+Notice how the MAC addresses are almost identical? Only the prefix
+differs: `fe` on the host and `52` on the guest. This might be used to
+guess the IPv6 IP of the guest to administer the machine. The [local
+segment IPv6 multicast address](https://en.wikipedia.org/wiki/Multicast_address) (`ff02::1`) can be used to confirm
+the IP address:
+
+    # ping6 -I br0 ff02::1
+    ping6: Warning: source address might be selected on device other than br0.
+    PING ff02::1(ff02::1) from :: br0: 56 data bytes
+    [...]
+    64 bytes from fe80::5054:ff:fe1e:c248%br0: icmp_seq=1 ttl=64 time=0.281 ms (DUP!)
+    [...]
+    ^C
+    --- ff02::1 ping statistics ---
+    1 packets transmitted, 1 received, +4 duplicates, 0% packet loss, time 0ms
+    rtt min/avg/max/mdev = 0.049/0.339/0.515/0.166 ms
+
+That latter MAC address is also known by `libvirt` so this command
+will show the right MAC:
+
+    # virsh domiflist stretch-amd64
+    Interface  Type       Source     Model       MAC
+    -------------------------------------------------------
+    vnet0      bridge     br0        virtio      52:54:00:55:44:73
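+
+To cross-check that guess, the MAC libvirt knows about can be matched
+against the host's IPv6 neighbour table (a sketch: the guest must have
+been seen on the network first, e.g. after the multicast ping above):
+
+    ip -6 neigh show dev br0 | \
+      grep -i "$(virsh domiflist stretch-amd64 | awk '/bridge/ {print $5}')"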
+
+Maintenance
+-----------
+
+List running VMs:
+
+    virsh list
+
+To start a VM:
+
+    virsh start stretch-amd64
+
+To stop a VM:
+
+    virsh shutdown stretch-amd64
+
+To kill a VM that's hung:
+
+    virsh destroy stretch-amd64
+
+To reinstall a VM, the machine needs to be stopped (above) and the
+namespace reclaimed ([source](https://serverfault.com/a/299661/153231)):
+
+    virsh undefine stretch-amd64
+
+Remaining tasks
+---------------
+
+ * `/etc/default/libvirt-guests` defines how guests are started -
+   `virsh autostart` can enable automatic restarts, remains to be
+   tested (one-liners below)
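+
+For reference, the toggle itself is a one-liner each way:
+
+    virsh autostart stretch-amd64
+    virsh autostart --disable stretch-amd64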

(diff truncated)
update rescue status
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index 7f8f7d50..a42d2aef 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -510,19 +510,31 @@ Inventory
 I have a bunch of piles:
 
  * a spindle of disks that consists mostly of TV episodes, movies,
-   distro and Windows images/ghosts. not be imported.
- * a pile of tapes and Zip drives. not be imported.
+   distro and Windows images/ghosts. not imported.
+ * a pile of tapes and Zip drives. not imported.
  * about forty backup disks. not imported.
  * about five "books" disks of various sorts. ISOs generated. partly
-   integrated in my collection, others TBD.
- * a pile of unidentified disks. to be investigated.
- * a pile of "needs attention" disks: thoses were imported but had
-   errors.
+   integrated in my collection, others failed to import or were in
+   formats that were considered non-recoverable
  * a bunch of orange seeds piles
    * Burn Your TV masters and copies
    * apparently live and unique samples - mostly imported in `mp3`
    * really old stuff with tons of dupes - partly sorted through, in
      `jams4`, the rest still in the pile
+ * a pile of unidentified disks
+
+all disks were eventually identified as **trash**, **blanks**,
+**finished**, or **needs attention**
+
+ * **trash**: those disks are non-recoverable. they are not detected
+   by the Linux kernel at all and no further attempt has been made to
+   recover them. They have been thrown away.
+ * **blanks**: those disks were actually never written to and are
+   still usable.
+ * **finished**: those disks were successfully archived, without
+   errors.
+ * **needs attention**: thoses were processed but had errors. they
+   might end up in the **trash** or **finished** pile.
 
 Whatever is in the box has either been imported successfully or will
 not be imported.

more notes: multi-session disks, mixed mode
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index e3fc04cf..7f8f7d50 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -136,6 +136,10 @@ Replace `cdimage` with the label on the disk. If there's no label,
 write one! If there's already a filename with the same label,
 increment.
 
+Note that ddrescue does not support multi-session CD-ROMs. Those will
+have to be ripped with `cdrdao` with the `--session` argument, see the
+mixed-mode section below for examples.
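+
+For example, to rip only the second session (a sketch, untested here):
+
+    cdrdao read-cd --session 2 --read-raw --datafile session2.bin session2.toc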
+
 Audio disks
 -----------
 
@@ -149,6 +153,88 @@ faithful copy to FLAC files, using this command:
 The flags are optional: `--unknown` allows for disks not present on
 MusicBrainz and `--cdr` allows for copied CDs.
 
+Mixed-mode disks
+----------------
+
+Mixed-mode disks are CD-ROMs that contain *both* audio and data
+tracks. Those are particularly challenging to archive.
+
+Whipper will [fail on mixed-mode discs](https://github.com/JoeLametta/whipper/issues/170), especially if the data
+track is at the beginning, which was the case in all the disks I have
+found, including the original Quake CD-ROM.
+
+`ddrescue` will extract the ISO part of the disk but the kernel will
+return errors for the audio part. The resulting file will be usable,
+but only for the ISO part of things.
+
+According to [this article](http://linuxreviews.org/howtos/cdrecording/), a good way to rip those is using
+`cdrdao` directly, for example:
+
+    cdrdao read-cd --read-raw --datafile data.bin data.toc
+
+The problem there is that this creates only a `data.bin` file covering
+the entire disk, and does no error correction like `ddrescue` does.
+
+The files created by `cdrdao` then need some post-processing to be
+readable as audio or ISO. The first step is to convert the `.toc` file
+to a `.cue` file:
+
+    toc2cue data.toc data.cue
+
+If `toc2cue` shows this warning:
+
+    ERROR: Cannot convert: toc-file references multiple data files.
+
+This can be corrected by forcing the same datafile to be used in all
+tracks of the toc file:
+
+    sed -i.orig 's/FILE "\([^"]*\)"/FILE "data.bin"/' data.toc
+
+Then the actual data needs to be rewritten. This is done with the
+[[!debpkg bchunk]] package which can convert between cdrdao data files
+and ISO/WAV files. As explained in [this blog post](http://blog.kbresearch.nl/2015/11/13/preserving-optical-media-from-the-command-line/), the processing
+needs to be done separately between the audio and ISO parts. In the
+example, the data tracks were ripped in a different session than the
+audio tracks, which made it possible to use the `--session` argument
+to extract each separately. Unfortunately, that is generally not the
+case. What we're interested in, anyway, is probably the audio
+files, as the ISO file can be extracted by ddrescue. So to extract the
+audio, you'll need:
+
+    bchunk data.bin data.cue data
+
+This will convert all audio tracks to WAV files. Normally, it should
+also convert ISO files, but in my experience those show up as unusable
+`.ugh` files and the `ddrescue` version needs to be used there. Then
+the WAV files can be compressed to FLAC files using the `flac`
+command:
+
+    flac --delete-input-file data-*.wav
+
+This usually reduces disk usage by about 30-50% at no loss in
+quality. You should end up with the following files:
+
+    data-01.iso
+    data-02.flac
+    data-03.flac
+    data-04.flac
+    data-05.flac
+    data-06.flac
+    data-07.flac
+    data-08.flac
+    data-09.flac
+    data-10.flac
+    data-11.flac
+    data.bin
+    data.cue
+    data.map
+    data.toc
+
+The `.bin` file is a duplicate but can be used to regenerate the
+others (except the `.iso` file of course).
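+
+As a sanity check, the resulting FLAC files can be integrity-tested in
+place:
+
+    flac --test data-*.flac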
+
 Identifying disks
 -----------------
 
@@ -178,7 +264,22 @@ information about the disk but waits for the CD to be ready:
     Last Track           : 27
     Appendable           : no
 
-Then the `discid` command will try to analyze the disk to compute a
+The `cdir` command, from the [[!debpkg cdtool]] package, can give a
+summary of the medium if one is present ([source](https://gist.github.com/bitsgalore/1bea8f015eca21a706e7#automatic-disc-type-detection)):
+
+    $ cdir -d /dev/cdrom
+    unknown cd - 40:39 in 9 tracks
+     16:46.13  1 [DATA] 
+      3:46.73  2 
+      5:34.12  3 
+      3:05.41  4 
+      3:06.36  5 
+      2:02.72  6 
+      2:13.67  7 
+      0:34.67  8 
+      3:26.03  9
+
+Then the `cdrdao discid` command will try to analyze the disk to compute a
 CDDB disk identifier from [FreeDB][]:
 
     $ cdrdao discid
@@ -433,8 +534,20 @@ I'm following the path blazed by [jmtd](https://jmtd.net/) [here](https://jmtd.n
 [here](https://jmtd.net/log/imaging_discs/2). The [forensics wiki](https://www.forensicswiki.org/) also has [docs on ddrescue](https://www.forensicswiki.org/wiki/Ddrescue)
 which were useful.
 
+Tools used:
+
  * [ddrescue][]
  * [whipper][]
 
+Other tools:
+
+ * wodim's readom is supposedly better at ripping "optical media" (hence
+   OM) but [this post](http://blog.kbresearch.nl/2015/11/13/preserving-optical-media-from-the-command-line/) says it's not as good as ddrescue at
+   dealing with damaged media
+ * isovfy: to check ISO images, TBD. the [source](https://gist.github.com/bitsgalore/1bea8f015eca21a706e7#verify-iso-image-with-isovfy) seems to say it
+   does not really check anything, so the author wrote a different tool...
+ * ... called [isolyzer](https://github.com/KBNLresearch/isolyzer): check if the recorded size of an ISO file
+   matches the actual size (example below)
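+
+Usage is, if I read its README correctly, a single argument (an
+assumption, not tested here):
+
+    isolyzer data-01.iso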
+
 [ddrescue]: https://www.gnu.org/software/ddrescue/
 [whipper]: https://github.com/JoeLametta/whipper/

calendar update
diff --git a/archives/2018.mdwn b/archives/2018.mdwn
new file mode 100644
index 00000000..38d897b8
--- /dev/null
+++ b/archives/2018.mdwn
@@ -0,0 +1 @@
+[[!calendar type=year year=2018 pages="*"]]
diff --git a/archives/2018/01.mdwn b/archives/2018/01.mdwn
new file mode 100644
index 00000000..e6113229
--- /dev/null
+++ b/archives/2018/01.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=01 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(01) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/02.mdwn b/archives/2018/02.mdwn
new file mode 100644
index 00000000..36ec3e1e
--- /dev/null
+++ b/archives/2018/02.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=02 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(02) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/03.mdwn b/archives/2018/03.mdwn
new file mode 100644
index 00000000..150ddf34
--- /dev/null
+++ b/archives/2018/03.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=03 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(03) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/04.mdwn b/archives/2018/04.mdwn
new file mode 100644
index 00000000..8c047584
--- /dev/null
+++ b/archives/2018/04.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=04 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(04) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/05.mdwn b/archives/2018/05.mdwn
new file mode 100644
index 00000000..fc3b77de
--- /dev/null
+++ b/archives/2018/05.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=05 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(05) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/06.mdwn b/archives/2018/06.mdwn
new file mode 100644
index 00000000..19c3e9e2
--- /dev/null
+++ b/archives/2018/06.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=06 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(06) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/07.mdwn b/archives/2018/07.mdwn
new file mode 100644
index 00000000..3213f220
--- /dev/null
+++ b/archives/2018/07.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=07 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(07) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/08.mdwn b/archives/2018/08.mdwn
new file mode 100644
index 00000000..201b2bcf
--- /dev/null
+++ b/archives/2018/08.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=08 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(08) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/09.mdwn b/archives/2018/09.mdwn
new file mode 100644
index 00000000..08ddb5d3
--- /dev/null
+++ b/archives/2018/09.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=09 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(09) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/10.mdwn b/archives/2018/10.mdwn
new file mode 100644
index 00000000..9efd2c1f
--- /dev/null
+++ b/archives/2018/10.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=10 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(10) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/11.mdwn b/archives/2018/11.mdwn
new file mode 100644
index 00000000..1933e3c2
--- /dev/null
+++ b/archives/2018/11.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=11 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(11) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/12.mdwn b/archives/2018/12.mdwn
new file mode 100644
index 00000000..ff50f841
--- /dev/null
+++ b/archives/2018/12.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=12 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(12) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]

Added a comment: cat quote
diff --git a/blog/2018-10-01-report/comment_3_d77653aeca31a840d9f03684f77b8bd6._comment b/blog/2018-10-01-report/comment_3_d77653aeca31a840d9f03684f77b8bd6._comment
new file mode 100644
index 00000000..bb58bd93
--- /dev/null
+++ b/blog/2018-10-01-report/comment_3_d77653aeca31a840d9f03684f77b8bd6._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="anarcat"
+ avatar="https://seccdn.libravatar.org/avatar/741655483dd8a0b4df28fb3dedfa7e4c"
+ subject="cat quote"
+ date="2018-10-10T00:53:09Z"
+ content="""
+indeed, i do know about that quote! I've actually had that in my [[fortune database|fortunes.txt]] for about 6 years and i've always wondered if it was an accurate attribution. i've now marked it as apocryphal, thanks! :)
+"""]]

forgot to follow rename
diff --git a/blog/2016-04-21-free-software-activities-april-2016.mdwn b/blog/2016-04-21-free-software-activities-april-2016.mdwn
index 7f4685cb..30c35319 100644
--- a/blog/2016-04-21-free-software-activities-april-2016.mdwn
+++ b/blog/2016-04-21-free-software-activities-april-2016.mdwn
@@ -107,7 +107,7 @@ and engineers like to think of themselves in elitist terms, that they
 are changing the world [every other year][]. But the truth is that
 things have not changed much in the last 4 decades where computing has
 existed, both in terms of [security][] or [functionality][]. Two
-quotes from my little [[quotes collection|signatures.txt]] come to mind
+quotes from my little [[quotes collection|fortunes.txt]] come to mind
 here:
 
 > Software gets slower faster than hardware gets faster.
diff --git a/blog/2016-05-12-email-setup.mdwn b/blog/2016-05-12-email-setup.mdwn
index 69586c52..8eccc342 100644
--- a/blog/2016-05-12-email-setup.mdwn
+++ b/blog/2016-05-12-email-setup.mdwn
@@ -529,8 +529,8 @@ Oh, and there's also customization for Notmuch:
 
     ;; -*- mode: emacs-lisp; auto-recompile: t; -*-
     (custom-set-variables
-     ;; from https://anarc.at/signatures.txt
-     '(fortune-file "/home/anarcat/.mutt/signatures.txt")
+     ;; from https://anarc.at/fortunes.txt
+     '(fortune-file "/home/anarcat/.mutt/fortunes.txt")
      '(message-send-hook (quote (notmuch-message-mark-replied)))
      '(notmuch-address-command "notmuch-address")
      '(notmuch-always-prompt-for-sender t)
diff --git a/communication.mdwn b/communication.mdwn
index 1d37a886..8f876a1e 100644
--- a/communication.mdwn
+++ b/communication.mdwn
@@ -132,4 +132,4 @@ Autres projets
 
  * I had the idea of making a [[documentary on cycling|documentaire vélo]]; still at the planning stage.
  * I keep a list of quotes that I automatically insert at the
-   end of my emails, see [[signatures.txt]]
+   end of my emails, see [[fortunes.txt]]

more natural: it is what i came up with first
diff --git a/signatures.txt b/fortunes.txt
similarity index 100%
rename from signatures.txt
rename to fortunes.txt

remove archives, *again*, this time after disabling the calendar plugin
diff --git a/archives/2018.mdwn b/archives/2018.mdwn
deleted file mode 100644
index 38d897b8..00000000
--- a/archives/2018.mdwn
+++ /dev/null
@@ -1 +0,0 @@
-[[!calendar type=year year=2018 pages="*"]]
diff --git a/archives/2018/01.mdwn b/archives/2018/01.mdwn
deleted file mode 100644
index e6113229..00000000
--- a/archives/2018/01.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=01 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(01) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/02.mdwn b/archives/2018/02.mdwn
deleted file mode 100644
index 36ec3e1e..00000000
--- a/archives/2018/02.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=02 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(02) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/03.mdwn b/archives/2018/03.mdwn
deleted file mode 100644
index 150ddf34..00000000
--- a/archives/2018/03.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=03 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(03) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/04.mdwn b/archives/2018/04.mdwn
deleted file mode 100644
index 8c047584..00000000
--- a/archives/2018/04.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=04 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(04) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/05.mdwn b/archives/2018/05.mdwn
deleted file mode 100644
index fc3b77de..00000000
--- a/archives/2018/05.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=05 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(05) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/06.mdwn b/archives/2018/06.mdwn
deleted file mode 100644
index 19c3e9e2..00000000
--- a/archives/2018/06.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=06 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(06) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/07.mdwn b/archives/2018/07.mdwn
deleted file mode 100644
index 3213f220..00000000
--- a/archives/2018/07.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=07 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(07) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/08.mdwn b/archives/2018/08.mdwn
deleted file mode 100644
index 201b2bcf..00000000
--- a/archives/2018/08.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=08 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(08) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/09.mdwn b/archives/2018/09.mdwn
deleted file mode 100644
index 08ddb5d3..00000000
--- a/archives/2018/09.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=09 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(09) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/10.mdwn b/archives/2018/10.mdwn
deleted file mode 100644
index 9efd2c1f..00000000
--- a/archives/2018/10.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=10 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(10) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/11.mdwn b/archives/2018/11.mdwn
deleted file mode 100644
index 1933e3c2..00000000
--- a/archives/2018/11.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=11 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(11) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/12.mdwn b/archives/2018/12.mdwn
deleted file mode 100644
index ff50f841..00000000
--- a/archives/2018/12.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=12 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(12) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]

rename sigs file to make it readable by a web browser
.fortune files get treated as application/octet-stream
diff --git a/blog/2016-04-21-free-software-activities-april-2016.mdwn b/blog/2016-04-21-free-software-activities-april-2016.mdwn
index 82b8b8f1..7f4685cb 100644
--- a/blog/2016-04-21-free-software-activities-april-2016.mdwn
+++ b/blog/2016-04-21-free-software-activities-april-2016.mdwn
@@ -107,7 +107,7 @@ and engineers like to think of themselves in elitist terms, that they
 are changing the world [every other year][]. But the truth is that
 things have not changed much in the last 4 decades where computing has
 existed, both in terms of [security][] or [functionality][]. Two
-quotes from my little [[quotes collection|sigs.fortune]] come to mind
+quotes from my little [[quotes collection|signatures.txt]] come to mind
 here:
 
 > Software gets slower faster than hardware gets faster.
diff --git a/blog/2016-05-12-email-setup.mdwn b/blog/2016-05-12-email-setup.mdwn
index 8b630f24..69586c52 100644
--- a/blog/2016-05-12-email-setup.mdwn
+++ b/blog/2016-05-12-email-setup.mdwn
@@ -529,8 +529,8 @@ Oh, and there's also customization for Notmuch:
 
     ;; -*- mode: emacs-lisp; auto-recompile: t; -*-
     (custom-set-variables
-     ;; from https://anarc.at/sigs.fortune
-     '(fortune-file "/home/anarcat/.mutt/sigs.fortune")
+     ;; from https://anarc.at/signatures.txt
+     '(fortune-file "/home/anarcat/.mutt/signatures.txt")
      '(message-send-hook (quote (notmuch-message-mark-replied)))
      '(notmuch-address-command "notmuch-address")
      '(notmuch-always-prompt-for-sender t)
diff --git a/communication.mdwn b/communication.mdwn
index 74ff6e20..1d37a886 100644
--- a/communication.mdwn
+++ b/communication.mdwn
@@ -132,4 +132,4 @@ Autres projets
 
 * I had the idea of making a [[documentary about cycling|documentaire vélo]]. Still at the planning stage.
 * I keep a list of quotes that I automatically append to the
-   end of my emails, see [[sigs.fortune]]
+   end of my emails, see [[signatures.txt]]
diff --git a/sigs.fortune b/signatures.txt
similarity index 100%
rename from sigs.fortune
rename to signatures.txt

talk more about read-toc
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index e0b6b41b..e3fc04cf 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -262,7 +262,21 @@ that is with `read-toc`:
 This is the command called by `whipper` to read the disk metadata. It
 then computes a discid and a [MusicBrainz][] hash on its own. But at this
 point, all this information is shown when running whipper, so the
-`disk-info` command is probably all we need to run here.
+`disk-info` command is probably all we need to run here. I still run
+the `read-toc` command to extract a TOC, as sometimes that's the only
+way to fetch the CD-TEXT on the disk. It's also useful for archival
+purposes. It will also tell us if the disk is blank, like so:
+
+    $ cdrdao read-toc --fast-toc tocfile
+    Cdrdao version 1.2.4 - (C) Andreas Mueller <andreas@daneb.de>
+    /dev/sr0: TSSTcorp CDDVDW TS-L633A	Rev: TO01
+    Using driver: Generic SCSI-3/MMC - Version 2.0 (options 0x0000)
+
+    WARNING: Unit not ready, still trying...
+    WARNING: Unit not ready, still trying...
+    WARNING: Unit not ready, still trying...
+    WARNING: Unit not ready, still trying...
+    ERROR: Inserted disk is empty.
 
 To extract disk identifiers however, cdrdao is rather slow. The
 [[!debpkg cd-discid]] command is much faster:

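A minimal sketch of the TOC archival workflow hinted at above, following the same labeling convention as the ISO images (the `cdimage` file names are hypothetical):

    # keep the TOC next to the rescued image, under the same label
    cdrdao read-toc --fast-toc cdimage.toc
    # CD-TEXT, when present, appears as CD_TEXT blocks in the toc file
    grep -A2 CD_TEXT cdimage.toc || echo "no CD-TEXT found"
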
the cat quote is probably apocryphal, but still keep Einstein
I'm totally biased there. Here's the source, thanks to Robin for
pointing it out:
https://quoteinvestigator.com/2012/02/24/telegraph-cat/
diff --git a/sigs.fortune b/sigs.fortune
index 6b78b5af..44eb7c85 100644
--- a/sigs.fortune
+++ b/sigs.fortune
@@ -346,7 +346,7 @@ Wire telegraph is a kind of a very, very long cat. You pull his tail
 in New York and his head is meowing in Los Angeles. Radio operates
 exactly the same way: you send signals here, they receive them
 there. The only difference is that there is no cat.
-                         - Albert Einstein
+                         - Albert Einstein [apocryphal]
 %
 Celui qui sait jouir du peu qu'il a est toujours assez riche.
                          - Démocrite

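The quotes file above is in fortune format (entries separated by `%`), so it can be used with the standard fortune tools; a quick sketch, assuming the fortune-mod package is installed:

    strfile sigs.fortune sigs.fortune.dat   # index the %-delimited entries
    fortune sigs.fortune                    # print a random signature
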
Added a comment: There is no cat
diff --git a/blog/2018-10-01-report/comment_2_2cd3a85f5bbbc5c5b7903c1bfba8ea0b._comment b/blog/2018-10-01-report/comment_2_2cd3a85f5bbbc5c5b7903c1bfba8ea0b._comment
new file mode 100644
index 00000000..8a064f0e
--- /dev/null
+++ b/blog/2018-10-01-report/comment_2_2cd3a85f5bbbc5c5b7903c1bfba8ea0b._comment
@@ -0,0 +1,16 @@
+[[!comment format=mdwn
+ ip="96.127.232.203"
+ claimedauthor="Robin"
+ url="http://robin.millette.info/"
+ subject="There is no cat"
+ date="2018-10-09T16:24:54Z"
+ content="""
+You might have heard this quote attributed to Einstein:
+
+> You see, wire telegraph is a kind of a very, very long cat.  You pull his tail in New York and his head is meowing in Los Angeles.  Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there.  The only difference is that there is no cat.
+
+I first heard it when wifidog was born, after nocat. Well, turns out, it might have originated in a \"fortune\": https://quoteinvestigator.com/2012/02/24/telegraph-cat/
+
+Thought you might find this interesting.
+
+"""]]

calendar update
diff --git a/archives/2018.mdwn b/archives/2018.mdwn
new file mode 100644
index 00000000..38d897b8
--- /dev/null
+++ b/archives/2018.mdwn
@@ -0,0 +1 @@
+[[!calendar type=year year=2018 pages="*"]]
diff --git a/archives/2018/01.mdwn b/archives/2018/01.mdwn
new file mode 100644
index 00000000..e6113229
--- /dev/null
+++ b/archives/2018/01.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=01 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(01) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/02.mdwn b/archives/2018/02.mdwn
new file mode 100644
index 00000000..36ec3e1e
--- /dev/null
+++ b/archives/2018/02.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=02 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(02) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/03.mdwn b/archives/2018/03.mdwn
new file mode 100644
index 00000000..150ddf34
--- /dev/null
+++ b/archives/2018/03.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=03 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(03) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/04.mdwn b/archives/2018/04.mdwn
new file mode 100644
index 00000000..8c047584
--- /dev/null
+++ b/archives/2018/04.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=04 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(04) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/05.mdwn b/archives/2018/05.mdwn
new file mode 100644
index 00000000..fc3b77de
--- /dev/null
+++ b/archives/2018/05.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=05 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(05) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/06.mdwn b/archives/2018/06.mdwn
new file mode 100644
index 00000000..19c3e9e2
--- /dev/null
+++ b/archives/2018/06.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=06 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(06) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/07.mdwn b/archives/2018/07.mdwn
new file mode 100644
index 00000000..3213f220
--- /dev/null
+++ b/archives/2018/07.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=07 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(07) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/08.mdwn b/archives/2018/08.mdwn
new file mode 100644
index 00000000..201b2bcf
--- /dev/null
+++ b/archives/2018/08.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=08 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(08) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/09.mdwn b/archives/2018/09.mdwn
new file mode 100644
index 00000000..08ddb5d3
--- /dev/null
+++ b/archives/2018/09.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=09 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(09) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/10.mdwn b/archives/2018/10.mdwn
new file mode 100644
index 00000000..9efd2c1f
--- /dev/null
+++ b/archives/2018/10.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=10 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(10) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/11.mdwn b/archives/2018/11.mdwn
new file mode 100644
index 00000000..1933e3c2
--- /dev/null
+++ b/archives/2018/11.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=11 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(11) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/12.mdwn b/archives/2018/12.mdwn
new file mode 100644
index 00000000..ff50f841
--- /dev/null
+++ b/archives/2018/12.mdwn
@@ -0,0 +1,5 @@
+[[!sidebar content="""
+[[!calendar type=month month=12 year=2018 pages="*"]]
+"""]]
+
+[[!inline pages="creation_month(12) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]

make a new section on archive management
diff --git a/services/archive.mdwn b/services/archive.mdwn
index 6c7cb70e..ed63798d 100644
--- a/services/archive.mdwn
+++ b/services/archive.mdwn
@@ -19,3 +19,64 @@ Data rescue
 I have *some* experience in data recovery, mostly built as I dealt
 with various broken hardware: fake flash cards, old CD-ROMs, dead hard
 drives... My notes on this are in [[rescue]].
+
+Archive management
+==================
+
+Mirroring and restoring data is only part of the problem. Once
+(re)created, the data needs to be properly indexed otherwise it's an
+undecipherable pile of garbage where nothing can be found. Metadata
+need to be created for the content and properly indexed. This can
+include, for each piece of content:
+
+ * who created it
+ * when
+ * *what* type of media is it (a book, official documents, newspaper
+   clippings, music, interview, video, etc)
+ * what is *in* the media (a show? where? a picture of what? etc)
+ * etc
+
+Determining that data is only one part; you also need a way to store
+the information in a meaningful way. Unfortunately, I don't have good
+advice for this other than to make sure you name the created folders
+and files correctly. Various storage media have support for metadata
+(MP3 tags, Exif tags for photos, etc.): use them. Otherwise, filenames
+or auxiliary text files can be used.
+
+I mostly use git-annex to manage my archives and make sure I have
+redundant copies. git-annex also supports "scrubbing" copies by
+verifying checksums on the content.
+
+I also use the following software to import, index and browse
+contents:
+
+ * Bookmarks: Wallabag
+ * Books: Zotero
+ * E-books: Calibre
+ * Music: MPD/GMPC, Airsonic, Kodi
+ * Photos: RPD, Darktable, Sigal, Nextcloud (previously: shotwell,
+   f-spot...)
+ * Software: GitLab.com, GitHub.com, Debian.org
+ * Video: RPD, Kodi
+ * Web archives: crawl, pywb, webrecorder.io, archive.org. should
+   evaluate wpull next.
+
+All of those are stored in multiple locations with git-annex, except
+software which is managed through git only and web archives which are
+not replicated and usually stored directly on archive.org.
+
+I do not have good mechanisms for the following:
+
+ * audiobooks
+ * contacts: all over the place. old mutt alias files, VCF exports from phone, phone numbers in agenda. considering [monica](https://github.com/monicahq/monica)
+ * games (ROMs)
+ * podcasts (not archived, browsed with AntennaPod on Android)
+ * scans - considering [paperless](https://github.com/danielquinn/paperless)
+
+I need to evaluate the following tools for archive management:
+
+ * [Access to memory](https://www.accesstomemory.org/)
+ * [Archive space](https://archivesspace.org/)
+ * [Archivematica](https://www.archivematica.org/)
+
+Those come from the [awesome self-hosted list](https://github.com/Kickball/awesome-selfhosted#archiving-and-digital-preservation-dp).
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index e9bebe26..e0b6b41b 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -16,19 +16,9 @@ A few general data recovery principles:
 Also keep in mind that recovering data is only the first step: think
 about how you will archive what you restore. If it's live data, it's
 easier as it replaces what is already there. But if it's old data, you
-need to manage metadata on the medium you import:
-
- * who created it
- * when
- * *what* type of media is it (a book, official documents, newspaper
-   clippings, music, interview, video, etc)
- * what is *in* the media (a show? where? a picture of what? etc)
-
-Determining that data is only one part; you also need a way to store
-it. Unfortunately, I don't have good advice for this other than to
-make sure you name your files correctly. Various storage media have
-support for metadata (MP3 tags, Exif tags for photos, etc.): use
-them. Otherwise, filenames or auxiliary text files can be used.
+need to manage metadata on the medium you import. See the parent
+[[archive]] page for a wider discussion on the topic of archive
+management.
 
 ddrescue primer
 ===============

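For the record, a minimal sketch of the git-annex workflow alluded to in the new section: adding content, requiring redundant copies, and "scrubbing" them with checksum verification. The paths and remote name are hypothetical:

    # inside an existing git-annex repository
    git annex add photos/2018/
    git commit -m 'import 2018 photos'
    git annex numcopies 2                   # require two copies of everything
    git annex copy --to backupdrive photos/2018/
    git annex fsck photos/2018/             # "scrub": verify local checksums
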
explain a bit more how to manage archive metadata
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index 77a774d3..e9bebe26 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -1,19 +1,35 @@
 [[!meta title="Data rescue operations"]]
 
-General data recovery principles
-================================
+[[!toc]]
 
-A few bits of advice:
+A few general data recovery principles:
 
  * Keep things ordered in neat little stacks: unprocessed, processing,
    completed, failed, incomplete, etc.
  * Label everything as soon as it's identified. Use unambiguous names
-   with incrementing unique numbers, with dates if possible
+   with incrementing unique numbers, with dates if possible.
  * Do not write to the media as much as possible.
  * This means you shouldn't even *mount* a drive that you suspect is
    faulty. 
  * Make a copy of the drive, then try to repair a copy of that copy.
 
+Also keep in mind that recovering data is only the first step: think
+about how you will archive what you restore. If it's live data, it's
+easier as it replaces what is already there. But if it's old data, you
+need to manage metadata on the medium you import:
+
+ * who created it
+ * when
+ * *what* type of media is it (a book, official documents, newspaper
+   clippings, music, interview, video, etc)
+ * what is *in* the media (a show? where? a picture of what? etc)
+
+Determining that data is only one part; you also need a way to store
+it. Unfortunately, I don't have good advice for this other than to
+make sure you name your files correctly. Various storage media have
+support for metadata (MP3 tags, Exif tags for photos, etc.): use
+them. Otherwise, filenames or auxiliary text files can be used.
+
 ddrescue primer
 ===============
 

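A sketch of what "use them" can look like in practice, with hypothetical file names, exiftool for embedded tags, and a plain auxiliary text file as fallback:

    # embed metadata where the format supports it...
    exiftool -Artist='unknown (basement pile)' \
             -ImageDescription='family gathering, ca. 1998' scan-001.jpg
    # ...and fall back to an auxiliary text file otherwise
    cat > scan-001.jpg.txt <<EOF
    who: unknown (basement pile)
    when: ca. 1998
    what: photo, family gathering
    EOF
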
mention --retry
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index c915f86b..77a774d3 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -33,8 +33,10 @@ the same arguments:
 
     ddrescue -d /dev/sdb2 /srv/backup/sda2-media-20181005T135440.iso /srv/backup/sda2-media-20181005T135440.map
 
-The [examples section][] has more details on those procedures. Special
-procedures should be followed for CD-ROMs, detailed below.
+The `--retry-passes` option (`-r`) can be used to specify how many
+times to force ddrescue to retry that process. The [examples
+section][] has more details on those procedures. Special procedures
+should be followed for CD-ROMs, detailed below.
 
 [examples section]: https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html#Examples
 

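To make the retry step concrete, a sketch reusing the image and map file from the first pass (same hypothetical file names as above):

    # second pass: direct I/O, three retry passes over the bad areas,
    # resuming from the existing map file
    ddrescue -d -r3 /dev/sdb2 /srv/backup/sda2-media-20181005T135440.iso /srv/backup/sda2-media-20181005T135440.map
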
add screenshots and ddrescue(view) primer
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
index e7a3d26b..c915f86b 100644
--- a/services/archive/rescue.mdwn
+++ b/services/archive/rescue.mdwn
@@ -1,11 +1,111 @@
+[[!meta title="Data rescue operations"]]
+
+General data recovery principles
+================================
+
+A few bits of advice:
+
+ * Keep things ordered in neat little stacks: unprocessed, processing,
+   completed, failed, incomplete, etc.
+ * Label everything as soon as it's identified. Use unambiguous names
+   with incrementing unique numbers, with dates if possible
+ * Do not write to the media as much as possible.
+ * This means you shouldn't even *mount* a drive that you suspect is
+   faulty. 
+ * Make a copy of the drive, then try to repair a copy of that copy.
+
+ddrescue primer
+===============
+
+Most recovery attempts should be performed with [ddrescue][]: it's
+fast for quick restores but can also go very deep with multiple
+retries and checks to ensure a faithful copy.
+
+The [ddrescue manual](https://www.gnu.org/software/ddrescue/manual/) has a nice [examples section][] detailing
+general principles, but the TL;DR for disk drives is:
+
+    ddrescue -n /dev/sdb2 /srv/backup/sda2-media-20181005T135440.iso /srv/backup/sda2-media-20181005T135440.map
+
+That does a first pass on the drive using a fast algorithm (skipping
+areas that have errors without retrying). If there are errors, you can
+do a more thorough pass in "direct I/O" mode, without `-n` but
+otherwise with the same arguments:
+
+    ddrescue -d /dev/sdb2 /srv/backup/sda2-media-20181005T135440.iso /srv/backup/sda2-media-20181005T135440.map
+
+The [examples section][] has more details on those procedures. Special
+procedures should be followed for CD-ROMs, detailed below.
+
+[examples section]: https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html#Examples
+
+ddrescueview
+============
+
+The [ddrescueview](https://sourceforge.net/projects/ddrescueview/) utility can be used to display ddrescue log
+files, which may give clues as to what is going on with a drive. With
+automatic refresh, it might show better progress information than the
+commandline output.
+
+For example, this shows ddrescue running with the `--no-scrape`
+argument:
+
+<figure><img
+src="https://paste.anarc.at/snaps/snap-2018.10.05-12.47.19.png" alt="A
+grid of mostly green blocks with, in the middle, stripes of blue
+blocks delimited by red blocks and stripes of yellow blocks."/>
+<figcaption>Screenshot of ddrescueview showing ddrescue in its
+trimming phase.</figcaption> </figure>
+
+Here you see it skipped areas (in blue) that had read errors (in
+red). Those areas were "trimmed", that is: ddrescue tried to get as
+close to the error as possible to see where the faulty sectors are. In
+contrast, the "non-trimmed" areas (in yellow) indicate that a bulk
+read of that area failed but ddrescue does not know which part failed
+exactly.
+
+When we rerun ddrescue without the `-n` argument, ddrescue will retry
+the "non-scraped" area and try to restore what's inside of those
+trimmed blocks as well:
+
+<figure><img
+src="https://paste.anarc.at/snaps/snap-2018.10.05-13.38.11.png" alt="A
+grid of mostly green blocks with, in the middle, scattered red blocks
+mostly aligned in columns."/>
+<figcaption>Screenshot of ddrescueview showing ddrescue after its
+scraping phase.</figcaption> </figure>
+
+Here we see ddrescue was able to restore a lot of content, save a few
+sectors that were completely unreadable. Retrying again might
+eventually save those sectors.
+
+Notice how both images show the "moiré" pattern typical of rotating
+media: a scratch will leave such a pattern on the data. Those results
+were obtained on a 16-year-old CD-R disk.
+
+Flash memory
+============
+
+Flash memory is especially tricky to recover because SSD drives and SD
+cards are "smart": they have an embedded controller that hides the
+actual storage layer. That is also why it's hard to reliably destroy
+data on those devices...
+
+I have so far used ddrescue to restore data from hard drives, and flash
+memory is no exception.
+
+When problems occur with flash memory, it's worth testing the card
+with the [Fight Fake Flash](http://oss.digirati.com.br/f3/) (f3) program (debian package: [[!debpkg
+f3]]). I have written documentation on those operations in the
+[stressant manual](https://stressant.readthedocs.io/en/latest/usage.html#testing-memory-cards-with-f3).
+
+CD-ROMs
+=======
+
 Found a pile of CDs in the basement. Was looking for my [old band](http://orangeseeds.org)
 but found much more: photos, samizdat, old backups, old games
 (Quake!), distro images (OpenBSD) and old windows "ghosts". Most of
 this is junk of course, but a few key parts of that are interesting.
 
-Procedure
-=========
-
 Data disks
 ----------
 
@@ -28,11 +128,6 @@ Replace `cdimage` with the label on the disk. If there's no label,
 write one! If there's already a filename with the same label,
 increment.
 
-The [ddrescueview](https://sourceforge.net/projects/ddrescueview/) utility can be used to display ddrescue log
-files, which may give clues as to what is going on with a drive. With
-automatic refresh, it might show better progress information than the
-commandline output.
-
 Audio disks
 -----------
 
@@ -287,7 +382,7 @@ the information available in the "Burn Your TV" multimedia disk:
     Rock Ridge signatures version 1 found
 
 Inventory
-=========
+---------
 
 I have a bunch of piles:
 

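A sketch of how I watch a rescue live with ddrescueview, as described above (device and file names are hypothetical):

    # run the rescue in one terminal...
    ddrescue -n -b 2048 /dev/cdrom cdimage.iso cdimage.log
    # ...and point ddrescueview at the map file in another,
    # with automatic refresh enabled in the UI
    ddrescueview cdimage.log
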
fix display of directives (other than sidebar)
diff --git a/ikiwiki/directive/sidebar.mdwn b/ikiwiki/directive/sidebar.mdwn
new file mode 100644
index 00000000..3af455f5
--- /dev/null
+++ b/ikiwiki/directive/sidebar.mdwn
@@ -0,0 +1 @@
+[[!inline pages="sidebar" raw=yes]]

remove calendar archives that slow down build
those were not linked anywhere and were broken on display anyway...
checked the visits on any /archives/ pages in the logs in the last
seven days: nothing but bots, no relevant referrers (Google only).
diff --git a/archives/2018.mdwn b/archives/2018.mdwn
deleted file mode 100644
index 38d897b8..00000000
--- a/archives/2018.mdwn
+++ /dev/null
@@ -1 +0,0 @@
-[[!calendar type=year year=2018 pages="*"]]
diff --git a/archives/2018/01.mdwn b/archives/2018/01.mdwn
deleted file mode 100644
index e6113229..00000000
--- a/archives/2018/01.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=01 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(01) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/02.mdwn b/archives/2018/02.mdwn
deleted file mode 100644
index 36ec3e1e..00000000
--- a/archives/2018/02.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=02 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(02) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/03.mdwn b/archives/2018/03.mdwn
deleted file mode 100644
index 150ddf34..00000000
--- a/archives/2018/03.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=03 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(03) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/04.mdwn b/archives/2018/04.mdwn
deleted file mode 100644
index 8c047584..00000000
--- a/archives/2018/04.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=04 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(04) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/05.mdwn b/archives/2018/05.mdwn
deleted file mode 100644
index fc3b77de..00000000
--- a/archives/2018/05.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=05 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(05) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/06.mdwn b/archives/2018/06.mdwn
deleted file mode 100644
index 19c3e9e2..00000000
--- a/archives/2018/06.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=06 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(06) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/07.mdwn b/archives/2018/07.mdwn
deleted file mode 100644
index 3213f220..00000000
--- a/archives/2018/07.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=07 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(07) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/08.mdwn b/archives/2018/08.mdwn
deleted file mode 100644
index 201b2bcf..00000000
--- a/archives/2018/08.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=08 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(08) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/09.mdwn b/archives/2018/09.mdwn
deleted file mode 100644
index 08ddb5d3..00000000
--- a/archives/2018/09.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=09 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(09) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/10.mdwn b/archives/2018/10.mdwn
deleted file mode 100644
index 9efd2c1f..00000000
--- a/archives/2018/10.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=10 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(10) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/11.mdwn b/archives/2018/11.mdwn
deleted file mode 100644
index 1933e3c2..00000000
--- a/archives/2018/11.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=11 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(11) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]
diff --git a/archives/2018/12.mdwn b/archives/2018/12.mdwn
deleted file mode 100644
index ff50f841..00000000
--- a/archives/2018/12.mdwn
+++ /dev/null
@@ -1,5 +0,0 @@
-[[!sidebar content="""
-[[!calendar type=month month=12 year=2018 pages="*"]]
-"""]]
-
-[[!inline pages="creation_month(12) and creation_year(2018) and *" show=0 feeds=no reverse=yes]]

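For the record, a sketch of the kind of log check described above, assuming an Apache access log in the default combined format:

    # hits on /archives/ pages, minus obvious bots, grouped by client and referrer
    grep ' /archives/' /var/log/apache2/access.log \
        | grep -viE 'bot|crawl|spider' \
        | awk '{print $1, $7, $11}' | sort | uniq -c
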
start working on CD rescue stuff
diff --git a/services/archive/rescue.mdwn b/services/archive/rescue.mdwn
new file mode 100644
index 00000000..e7a3d26b
--- /dev/null
+++ b/services/archive/rescue.mdwn
@@ -0,0 +1,323 @@
+Found a pile of CDs in the basement. Was looking for my [old band](http://orangeseeds.org)
+but found much more: photos, samizdat, old backups, old games
+(Quake!), distro images (OpenBSD) and old windows "ghosts". Most of
+this is junk of course, but a few key parts of that are interesting.
+
+Procedure
+=========
+
+Data disks
+----------
+
+CDROMs are ripped with [ddrescue][]:
+
+    ddrescue -n -b 2048 /dev/cdrom cdimage.iso cdimage.log
+
+ddrescue does not retry by default, so if we're desperate and think
+there's a chance to recover the rest, we enable scraping (remove the
+`--no-scrape`, `-n` flag) and retries (`--retry-passes`, `-r`) in
+direct I/O mode (`--idirect`, `-d`):
+
+    ddrescue -d -r 3 -b 2048 /dev/cdrom cdimage.iso cdimage.log
+
+If you are lucky and have two identical copies of the same data, you
+can also use the `-r` flag to retry an existing iso file. This is best
+explained in the [official manual](https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html#Optical-media).
+
+Replace `cdimage` with the label on the disk. If there's no label,
+write one! If there's already a filename with the same label,
+increment.
+
+The [ddrescueview](https://sourceforge.net/projects/ddrescueview/) utility can be used to display ddrescue log
+files, which may give clues as to what is going on with a drive. With
+automatic refresh, it might show better progress information than the
+commandline output.
+
+Audio disks
+-----------
+
+It's unclear if (or how well) ddrescue works with audio disks. In my
+tests, it yields empty ISO images on audio CDs. Besides, there are
+other advanced techniques for those. I'm using [whipper][] to do a
+faithful copy to FLAC files, using this command:
+
+    whipper cd rip --unknown --cdr
+
+The flags are optional: `--unknown` allows for disks not present on
+MusicBrainz and `--cdr` allows for copied CDs.
+
+Identifying disks
+-----------------
+
+[[!debpkg cdrdao]] can be used to detect when the CD drive is
+ready. A good first command is `disk-info`, which gives general
+information about the disk but waits for the CD to be ready:
+
+    $ cdrdao disk-info
+    Cdrdao version 1.2.4 - (C) Andreas Mueller <andreas@daneb.de>
+    /dev/sr0: TSSTcorp CDDVDW TS-L633A	Rev: TO01
+    Using driver: Generic SCSI-3/MMC - Version 2.0 (options 0x0000)
+
+    WARNING: Unit not ready, still trying...
+    WARNING: Unit not ready, still trying...
+    WARNING: Unit not ready, still trying...
+    That data below may not reflect the real status of the inserted medium
+    if a simulation run was performed before. Reload the medium in this case.
+
+    CD-RW                : no
+    Total Capacity       : n/a
+    CD-R medium          : Prodisc Technology Inc.
+                           Short Strategy Type, e.g. Phthalocyanine
+    Recording Speed      : n/a
+    CD-R empty           : no
+    Toc Type             : CD-DA or CD-ROM
+    Sessions             : 1
+    Last Track           : 27
+    Appendable           : no
+
+Then the `discid` command will try to analyze the disk to compute a
+CDDB disk identifier from [FreeDB][]:
+
+    $ cdrdao discid
+    Cdrdao version 1.2.4 - (C) Andreas Mueller <andreas@daneb.de>
+    /dev/sr0: TSSTcorp CDDVDW TS-L633A	Rev: TO01
+    Using driver: Generic SCSI-3/MMC - Version 2.0 (options 0x0000)
+
+
+    Track   Mode    Flags  Start                Length
+    ------------------------------------------------------------
+     1      AUDIO   0      00:00:00(     0)     02:49:71( 12746)
+     2      AUDIO   0      02:49:71( 12746)     04:20:43( 19543)
+     3      AUDIO   0      07:10:39( 32289)     01:32:23(  6923)
+     4      AUDIO   0      08:42:62( 39212)     00:54:16(  4066)
+     5      AUDIO   0      09:37:03( 43278)     05:33:64( 25039)
+     6      AUDIO   0      15:10:67( 68317)     06:08:05( 27605)
+     7      AUDIO   0      21:18:72( 95922)     01:59:06(  8931)
+     8      AUDIO   0      23:18:03(104853)     05:07:13( 23038)
+     9      AUDIO   0      28:25:16(127891)     05:15:16( 23641)
+    10      AUDIO   0      33:40:32(151532)     04:00:38( 18038)
+    11      AUDIO   0      37:40:70(169570)     00:19:28(  1453)
+    12      AUDIO   0      38:00:23(171023)     00:06:02(   452)
+    13      AUDIO   0      38:06:25(171475)     00:06:02(   452)
+    14      AUDIO   0      38:12:27(171927)     00:06:02(   452)
+    15      AUDIO   0      38:18:29(172379)     00:06:02(   452)
+    16      AUDIO   0      38:24:31(172831)     00:06:02(   452)
+    17      AUDIO   0      38:30:33(173283)     00:53:52(  4027)
+    18      AUDIO   0      39:24:10(177310)     00:38:08(  2858)
+    19      AUDIO   0      40:02:18(180168)     00:46:41(  3491)
+    20      AUDIO   0      40:48:59(183659)     00:06:02(   452)
+    21      AUDIO   0      40:54:61(184111)     00:06:02(   452)
+    22      AUDIO   0      41:00:63(184563)     00:06:02(   452)
+    23      AUDIO   0      41:06:65(185015)     00:06:02(   452)
+    24      AUDIO   0      41:12:67(185467)     00:06:02(   452)
+    25      AUDIO   0      41:18:69(185919)     00:44:61(  3361)
+    26      AUDIO   0      42:03:55(189280)     00:38:51(  2901)
+    27      AUDIO   0      42:42:31(192181)     00:51:51(  3876)
+    Leadout AUDIO   0      43:34:07(196057)
+
+    PQ sub-channel reading (audio track) is supported, data format is BCD.
+    Raw P-W sub-channel reading (audio track) is supported.
+    Cooked R-W sub-channel reading (audio track) is supported.
+    Analyzing track 01 (AUDIO): start 00:00:00, length 02:49:71...
+    Analyzing track 02 (AUDIO): start 02:49:71, length 04:20:43...
+    Analyzing track 03 (AUDIO): start 07:10:39, length 01:32:23...
+    Analyzing track 04 (AUDIO): start 08:42:62, length 00:54:16...
+    Analyzing track 05 (AUDIO): start 09:37:03, length 05:33:64...
+    Analyzing track 06 (AUDIO): start 15:10:67, length 06:08:05...
+    Analyzing track 07 (AUDIO): start 21:18:72, length 01:59:06...
+    Analyzing track 08 (AUDIO): start 23:18:03, length 05:07:13...
+    Analyzing track 09 (AUDIO): start 28:25:16, length 05:15:16...
+    Analyzing track 10 (AUDIO): start 33:40:32, length 04:00:38...
+    Analyzing track 11 (AUDIO): start 37:40:70, length 00:19:28...
+    Analyzing track 12 (AUDIO): start 38:00:23, length 00:06:02...
+    Analyzing track 13 (AUDIO): start 38:06:25, length 00:06:02...
+    Analyzing track 14 (AUDIO): start 38:12:27, length 00:06:02...
+    Analyzing track 15 (AUDIO): start 38:18:29, length 00:06:02...
+    Analyzing track 16 (AUDIO): start 38:24:31, length 00:06:02...
+    Analyzing track 17 (AUDIO): start 38:30:33, length 00:53:52...
+    Analyzing track 18 (AUDIO): start 39:24:10, length 00:38:08...
+    Analyzing track 19 (AUDIO): start 40:02:18, length 00:46:41...
+    Analyzing track 20 (AUDIO): start 40:48:59, length 00:06:02...
+    Analyzing track 21 (AUDIO): start 40:54:61, length 00:06:02...
+    Analyzing track 22 (AUDIO): start 41:00:63, length 00:06:02...
+    Analyzing track 23 (AUDIO): start 41:06:65, length 00:06:02...
+    Analyzing track 24 (AUDIO): start 41:12:67, length 00:06:02...
+    Analyzing track 25 (AUDIO): start 41:18:69, length 00:44:61...
+    Analyzing track 26 (AUDIO): start 42:03:55, length 00:38:51...
+    Analyzing track 27 (AUDIO): start 42:42:31, length 00:51:51...
+            	
+    CDDB: Connecting to cddbp://freedb.freedb.org:888 ...
+    CDDB: Ok.
+    No CDDB record found for this toc-file.
+
+The `read-toc` command will also write that data to a file. Note that
+the above does *not* show CD-TEXT information; the only way to extract
+that is with `read-toc`:
+
+    cdrdao read-toc --fast-toc tocfile
+
+This is the command called by `whipper` to read the disk metadata. It
+then computes a discid and a [MusicBrainz][] hash on its own. But at this
+point, all this information is shown when running whipper, so the
+`disk-info` command is probably all we need to run here.
+
+To extract disk identifiers, however, cdrdao is rather slow. The
+[[!debpkg cd-discid]] command is much faster:
+
+    $ cd-discid /dev/sr0
+    9e0af30c 12 150 76757 87524 95692 118024 130633 141869 165637 174714 182592 184870 189598 2805
+
+This returns the old [FreeDB][]-style CDDB disc identifier. A more
+modern version is the [MusicBrainz][]-style checksum, which can be
+read with [[!debpkg flactag]]'s `discid` command, but it's slower than
+`cd-discid`:
+
+[FreeDB]: http://freedb.org
+[MusicBrainz]: https://musicbrainz.org
+
+    $ discid /dev/cdrom
+    dL5EmwESIWTPowb192SkUw5S7p4-
+
+The above is from an audio CD; this will not work for data disks.
+Unfortunately, just using `disk-info` does not suffice to identify
+data CDs. For this, you need the full `discid` run. Here's an example
+of a home-made data CD:
+
+    $ cdrdao discid
+    Cdrdao version 1.2.4 - (C) Andreas Mueller <andreas@daneb.de>
+    /dev/sr0: TSSTcorp CDDVDW TS-L633A	Rev: TO01
+    Using driver: Generic SCSI-3/MMC - Version 2.0 (options 0x0000)
+
+
+    Track   Mode    Flags  Start                Length
+    ------------------------------------------------------------
+     1      DATA    4      00:00:00(     0)     42:53:34(193009)

(diff truncated)
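
The "if there's already a filename with the same label, increment" rule above can be scripted; a small sketch, reusing the `cdimage` label from the examples:

    # rip into cdimage-1.iso, cdimage-2.iso, ... without clobbering
    label=cdimage; n=1
    while [ -e "${label}-${n}.iso" ]; do n=$((n+1)); done
    ddrescue -n -b 2048 /dev/cdrom "${label}-${n}.iso" "${label}-${n}.log"
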
move web archival to a subdirectory to make room for more archive work
diff --git a/services/archive.mdwn b/services/archive.mdwn
index cd0aefa2..6c7cb70e 100644
--- a/services/archive.mdwn
+++ b/services/archive.mdwn
@@ -1,303 +1,21 @@
-[[!meta title="Website mirroring and archival"]]
+[[!meta title="Archival services"]]
 
-For various reasons, I've played with website mirroring and
-archival. Particularly at [Koumbit](https://koumbit.org), when a project is over or
-abandoned, we have tried to keep a static copy of active websites. The
-[koumbit procedure][fossilisation] covers mostly Drupal websites, but might
-still be relevant.
+I am an amateur archivist. I keep an archive of audio (music and
+audiobooks), books (physical and electronic), video (films and TV
+episodes), and websites as a hobby but also as a librarian: some
+things should be carefully preserved for future generations (and
+enjoyed by ours as well, of course).
 
-This page aims at documenting my experience with some of those
-workflows.
+Web archives
+============
 
-TL;DR: `wget` works for many sites, but not all. Some sites can't be
-mirrored as just static copies of files, as HTTP headers matter. WARC
-files come to the rescue. My last attempt at mirroring a complex site
-was with [crawl][] and it was very effective. Next tests of a
-Javascript-heavy site should be done with [wpull][] and its PhantomJS
-support.
+I specifically worked on [[archiving web sites|web]] and wrote an [LWN
+article](https://lwn.net/Articles/766374/) ([[local copy|blog/2018-10-04-archiving-web-sites]]) on
+the topic. Detailed documentation is in [[web]].
 
-[[!toc levels=2]]
+Data rescue
+===========
 
-crawl
-=====
-
-Autistici's [crawl][] is "a very simple crawler" that *only* outputs a
-WARC file. Here is how it works:
-
-    crawl https://example.com/
-
-It does say "very simple" in the README. There are some options but
-most defaults are sane: it will fetch page requirements from other
-domains (unless the `-exclude-related` flag is used), but not recurse
-out of the domain. By default, it fires up 10 parallel connections to
-the remote site, so you might want to tweak that down to avoid
-hammering servers too hard, with the `-c` flag. Also, use the `-keep`
-flag to keep a copy of the database to crawl the same site repeatedly.
-
-The resulting WARC file must be loaded in some viewer, as explained
-below. [pywb][] worked well in my tests.
-
-wget
-====
-
-The short version is:
-
-    nice wget --mirror --execute robots=off --no-verbose --convert-links --backup-converted  --page-requisites --adjust-extension --base=./ --directory-prefix=./ --span-hosts --domains=www.example.com,example.com http://www.example.com/
-
-The explanation of each option is best found in the [wget
-manpage][], although some require extra clarification:
-
- * `--mirror` means `-r -N -l inf --no-remove-listing` which means:
-   * `-r` or `--recursive`: recurse into links found in the pages
-   * `-N` or `--timestamping`: do not fetch content if older than
-     local timestamps
-   * `-l inf` or `--level=inf`: infinite recursion
-   * `--no-remove-listing`: do not remove `.listing` files created
-     when listing directories over FTP
- * `--execute robots=off`: turn off `robots.txt` detection
- * `--no-verbose`: only show one line per link. use `--quiet` to turn
-   off all output
- * `--convert-links`: fix links in saved pages to refer to the local mirror
- * `--backup-converted`: keep a backup of the original file so that
-   `--timestamping` (`-N`, implied by `--mirror`) works correctly with
-   `--convert-links`
- * `--page-requisites`: download all files necessary to load the page,
-   including images, stylesheets, etc.
- * `--adjust-extension`: add (for example) `.html` to save filenames,
-   if missing
- * `--base=./` and `--directory-prefix=./` are magic to make sure the
-   links modified by `--convert-links` work correctly
- * `--span-hosts` say it's okay to jump to other hostnames provided
-   they are in the list of `--domains`
-
-The following options might also be useful:
-
- * `--warc=<name>`: will *also* record a WARC
-   file for the crawling of the site in `<name>.warc.gz`. `--warc-cdx`
-   is also useful as it keeps a list of the visited sites, although
-   that file can be recreated from the WARC file later on (see below)
- * `--wait 1 --random-wait` and `--limit-rate=20k` will limit the
-   download speed and artificially wait between requests to avoid
-   overloading the server (and possibly detection)
- * `--reject-regex "(.*)\?(.*)" `: do not crawl URLs with a query
-   string. Those might be infinite loops like calendars or extra
-   parameters that generate the same page.
-
-The query strings problem
--------------------------
-
-A key problem with crawling dynamic websites is that some CMS like to
-add strange query parameters in various places. For example, Wordpress
-might load jQuery like this:
-
-    http://example.com/wp-includes/js/jquery/jquery.js?ver=1.12.4
-
-When that file gets saved locally, its filename ends up being:
-
-    ./example.com/wp-includes/js/jquery/jquery.js?ver=1.12.4
-
-This will break content-type detection in webservers, which rely on
-the file extension to send the right `Content-Type`. Because the
-actual extension is really `.4` in the above, no `Content-Type` is
-sent at all, which confuses web browsers. For example, Chromium will
-complain with:
-
-    Refused to execute script from '<URL>' because its MIME type ('') is not executable, and strict MIME type checking is enabled
-
-Normally, `--adjust-extension` should do the right thing here, but it
-did not work in my last experiment. The `--reject-regex` proposed
-above is ineffective, as it will completely skip those links which
-means components will be missing. A pattern replacement on the URL
-would be necessary to work around this problem, but that is not
-supported by `wget` (or [wget2][], for that matter) at the time of
-writing. The solution for this is to use WARC files instead, but the
-[pywb][] viewer has trouble rendering those generated by wget (see
-[bug #294][]).
-
-See also the [koumbit wiki][fossilisation] for wget-related instructions.
-
-httrack
-=======
-
-The [httrack][] program is explicitly designed to create offline
-copies of websites, so its use is slightly more intuitive than
-wget. For example, here's a sample interactive session:
-
-    $ httrack 
-
-    Welcome to HTTrack Website Copier (Offline Browser) 3.49-2
-    Copyright (C) 1998-2017 Xavier Roche and other contributors
-    To see the option list, enter a blank line or try httrack --help
-
-    Enter project name :Example website
-
-    Base path (return=/home/anarcat/websites/) :/home/anarcat/mirror/example/
-
-    Enter URLs (separated by commas or blank spaces) :https://example.com/
-
-    Action:
-    (enter)	1	Mirror Web Site(s)
-    	2	Mirror Web Site(s) with Wizard
-    	3	Just Get Files Indicated
-    	4	Mirror ALL links in URLs (Multiple Mirror)
-    	5	Test Links In URLs (Bookmark Test)
-    	0	Quit
-    : 2     
-
-    Proxy (return=none) :
-
-    You can define wildcards, like: -*.gif +www.*.com/*.zip -*img_*.zip
-    Wildcards (return=none) :
-
-    You can define additional options, such as recurse level (-r<number>), separated by blank spaces
-    To see the option list, type help
-    Additional options (return=none) :
-
-    ---> Wizard command line: httrack https://example.com/ -W -O "/home/anarcat/mirror/example/Example website"  -%v  
-
-    Ready to launch the mirror? (Y/n) :
-
-    Mirror launched on Wed, 29 Aug 2018 14:49:16 by HTTrack Website Copier/3.49-2 [XR&CO'2014]
-    mirroring https://example.com/ with the wizard help..
-
-Other than the dialog, httrack is then silent as it logs into
-`~/mirror/example/Example website/hts-log.txt`, and even there, only
-errors are logged.
-
-Some options that might be important:
-
- * `--update`: resume an interrupted run
- * `--verbose`: start an interactive session which will show transfers
-   in progress and ask questions for URLs it's unsure what to do for
- * `-s0`: never follow `robots.txt` and related tags. This is
-   important if the website explicitly blocks crawlers.
-
-HTTrack has a nicer user interface than wget, but lacks WARC support
-which makes archiving more dynamic sites more difficult as it requires
-post-processing. See [the query strings problem](#the-query-strings-problem) above for details.
-
-WARC files
-==========
-
-The Web ARChive ([WARC](https://en.wikipedia.org/wiki/Web_ARChive)) format "*specifies a method for combining

(diff truncated)
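
Since the details now live in [[web]], here is just a minimal sketch of replaying a WARC produced by crawl with pywb (the collection name is hypothetical):

    pip install pywb
    wb-manager init example
    wb-manager add example example.com.warc.gz
    wayback    # then browse http://localhost:8080/example/
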
removed
diff --git a/blog/2015-09-09-bootstrap/comment_11_10a8fdeb8c554044b5b9d61286db8fa4._comment b/blog/2015-09-09-bootstrap/comment_11_10a8fdeb8c554044b5b9d61286db8fa4._comment
deleted file mode 100644
index fce179a6..00000000
--- a/blog/2015-09-09-bootstrap/comment_11_10a8fdeb8c554044b5b9d61286db8fa4._comment
+++ /dev/null
@@ -1,9 +0,0 @@
-[[!comment format=mdwn
- ip="36.250.182.38"
- claimedauthor="coach purse saddle bag ebay"
- url="http://www.mistymornllamas.com/coachsale_en/coach-purse-saddle-bag-ebay"
- subject="coach purse saddle bag ebay"
- date="2018-10-05T10:04:45Z"
- content="""
-<a href=\"http://www.purchasepropertyinmexico.com/canadagooseoutlet_en/canada-goose-whistler-bordeaux-nc\">canada goose whistler bordeaux nc</a><a href=\"http://www.purchasepropertyinmexico.com/canadagoosesale_en/canada-goose-snow-mantra-outlet-wiring\">canada goose snow mantra outlet wiring</a><a href=\"http://www.purchasepropertyinmexico.com/canadagoosewholesale_en/canada-goose-trillium-dam-yarn\">canada goose trillium dam yarn</a><a href=\"http://www.purchasepropertyinmexico.com/coachclearance_en/coach-messenger-bags-sale-reviews\">coach messenger bags sale reviews</a>
-"""]]

Added a comment: coach purse saddle bag ebay
diff --git a/blog/2015-09-09-bootstrap/comment_11_10a8fdeb8c554044b5b9d61286db8fa4._comment b/blog/2015-09-09-bootstrap/comment_11_10a8fdeb8c554044b5b9d61286db8fa4._comment
new file mode 100644
index 00000000..fce179a6
--- /dev/null
+++ b/blog/2015-09-09-bootstrap/comment_11_10a8fdeb8c554044b5b9d61286db8fa4._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ ip="36.250.182.38"
+ claimedauthor="coach purse saddle bag ebay"
+ url="http://www.mistymornllamas.com/coachsale_en/coach-purse-saddle-bag-ebay"
+ subject="coach purse saddle bag ebay"
+ date="2018-10-05T10:04:45Z"
+ content="""
+<a href=\"http://www.purchasepropertyinmexico.com/canadagooseoutlet_en/canada-goose-whistler-bordeaux-nc\">canada goose whistler bordeaux nc</a><a href=\"http://www.purchasepropertyinmexico.com/canadagoosesale_en/canada-goose-snow-mantra-outlet-wiring\">canada goose snow mantra outlet wiring</a><a href=\"http://www.purchasepropertyinmexico.com/canadagoosewholesale_en/canada-goose-trillium-dam-yarn\">canada goose trillium dam yarn</a><a href=\"http://www.purchasepropertyinmexico.com/coachclearance_en/coach-messenger-bags-sale-reviews\">coach messenger bags sale reviews</a>
+"""]]

removed
diff --git a/blog/2005-11-23-comment-la-tunisie-censure-linternet/comment_18_3a4f155ecc3a420ff67ea734fb64562f._comment b/blog/2005-11-23-comment-la-tunisie-censure-linternet/comment_18_3a4f155ecc3a420ff67ea734fb64562f._comment
deleted file mode 100644
index 8c57da63..00000000
--- a/blog/2005-11-23-comment-la-tunisie-censure-linternet/comment_18_3a4f155ecc3a420ff67ea734fb64562f._comment
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!comment format=mdwn
- ip="36.248.165.140"
- claimedauthor="wholesale new pandora charms nz"
- url="http://www.jacquelinelane.net/pandora/wholesale-new-pandora-charms-nz"
- subject="wholesale new pandora charms nz"
- date="2018-10-04T21:08:45Z"
- content="""
-<a href=\"http://www.drmshah.net/coachoutlet/coach-kristin-handbags-uk-kansas\">coach kristin handbags uk kansas</a><a href=\"http://www.drmshah.net/coachoutletstore/coach-shoulder-bag-pink-jeans\">coach shoulder bag pink jeans</a><a href=\"http://www.drmshah.net/coachpurse/coach-crossbody-bag-singapore-outlet\">coach crossbody bag singapore outlet</a><a href=\"http://www.drmshah.net/coupon/adidas-springblade-kids-boy\">adidas springblade kids boy</a>
- <a href=\"http://www.jacquelinelane.net/pandora/wholesale-new-pandora-charms-nz\" >wholesale new pandora charms nz</a> [url=http://www.jacquelinelane.net/pandora/wholesale-new-pandora-charms-nz]wholesale new pandora charms nz[/url]
-"""]]

Added a comment: wholesale new pandora charms nz
diff --git a/blog/2005-11-23-comment-la-tunisie-censure-linternet/comment_18_3a4f155ecc3a420ff67ea734fb64562f._comment b/blog/2005-11-23-comment-la-tunisie-censure-linternet/comment_18_3a4f155ecc3a420ff67ea734fb64562f._comment
new file mode 100644
index 00000000..8c57da63
--- /dev/null
+++ b/blog/2005-11-23-comment-la-tunisie-censure-linternet/comment_18_3a4f155ecc3a420ff67ea734fb64562f._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="36.248.165.140"
+ claimedauthor="wholesale new pandora charms nz"
+ url="http://www.jacquelinelane.net/pandora/wholesale-new-pandora-charms-nz"
+ subject="wholesale new pandora charms nz"
+ date="2018-10-04T21:08:45Z"
+ content="""
+<a href=\"http://www.drmshah.net/coachoutlet/coach-kristin-handbags-uk-kansas\">coach kristin handbags uk kansas</a><a href=\"http://www.drmshah.net/coachoutletstore/coach-shoulder-bag-pink-jeans\">coach shoulder bag pink jeans</a><a href=\"http://www.drmshah.net/coachpurse/coach-crossbody-bag-singapore-outlet\">coach crossbody bag singapore outlet</a><a href=\"http://www.drmshah.net/coupon/adidas-springblade-kids-boy\">adidas springblade kids boy</a>
+ <a href=\"http://www.jacquelinelane.net/pandora/wholesale-new-pandora-charms-nz\" >wholesale new pandora charms nz</a> [url=http://www.jacquelinelane.net/pandora/wholesale-new-pandora-charms-nz]wholesale new pandora charms nz[/url]
+"""]]

typos
diff --git a/blog/2018-10-04-archiving-web-sites.mdwn b/blog/2018-10-04-archiving-web-sites.mdwn
index 1546e79c..d9b4e40c 100644
--- a/blog/2018-10-04-archiving-web-sites.mdwn
+++ b/blog/2018-10-04-archiving-web-sites.mdwn
@@ -267,11 +267,11 @@ itself][].
 > Archive](https://archive.org/details/pamplemousse-ca), it might end up in the wayback machine at some point if
 > the Archive curators think it is worth it.
 >
-> Another example of a crawl is [this archive of two Bloomberg articles][]
-> which the "save page now" feature of the Internet archive wasn't able
-> to save correctly (but [webrecorder.io][]) could! Those pages can be
-> seen in the [web recorder player][] to get a better feel of how faithful
-> a WARC file really is.
+> Another example of a crawl is [this archive of two Bloomberg
+> articles][] which the "save page now" feature of the Internet
+> archive wasn't able to save correctly. But [webrecorder.io][] could!
+> Those pages can be seen in the [web recorder player][] to get a
+> better feel of how faithful a WARC file really is.
 >
 > Finally, this article was originally written as a set of notes and
 > documentation in the [[services/archive]] page which may also be of

moar links
diff --git a/blog/2018-10-04-archiving-web-sites.mdwn b/blog/2018-10-04-archiving-web-sites.mdwn
index c7428d8e..1546e79c 100644
--- a/blog/2018-10-04-archiving-web-sites.mdwn
+++ b/blog/2018-10-04-archiving-web-sites.mdwn
@@ -263,8 +263,23 @@ itself][].
 > I also want to personally thank the folks in the #archivebot channel
 > for their assistance and letting me play with their toys.
 >
-> Finally, the Pamplemousse crawl is now [available on the Internet
+> The Pamplemousse crawl is now [available on the Internet
 > Archive](https://archive.org/details/pamplemousse-ca), it might end up in the wayback machine at some point if
 > the Archive curators think it is worth it.
+>
+> Another example of a crawl is [this archive of two Bloomberg articles][]
+> which the "save page now" feature of the Internet archive wasn't able
+> to save correctly (but [webrecorder.io][]) could! Those pages can be
+> seen in the [web recorder player][] to get a better feel of how faithful
+> a WARC file really is.
+>
+> Finally, this article was originally written as a set of notes and
+> documentation in the [[services/archive]] page which may also be of
+> interest to my readers.
+
+  [this archive of two Bloomberg articles]: https://archive.org/details/anarcat-bloomberg
+  [webrecorder.io]: https://webrecorder.io/
+  [web recorder player]: https://webrecorder.io/anarcat/bloomberg/index
+
 
 [[!tag debian-planet lwn archive web warc python-planet]]

creating tag page tag/web
diff --git a/tag/web.mdwn b/tag/web.mdwn
new file mode 100644
index 00000000..22374a1a
--- /dev/null
+++ b/tag/web.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged web"]]
+
+[[!inline pages="tagged(web)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/archive
diff --git a/tag/archive.mdwn b/tag/archive.mdwn
new file mode 100644
index 00000000..661d7475
--- /dev/null
+++ b/tag/archive.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged archive"]]
+
+[[!inline pages="tagged(archive)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/warc
diff --git a/tag/warc.mdwn b/tag/warc.mdwn
new file mode 100644
index 00000000..0d7a5e03
--- /dev/null
+++ b/tag/warc.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged warc"]]
+
+[[!inline pages="tagged(warc)" actions="no" archive="yes"
+feedshow=10]]

add related issues
diff --git a/blog/2018-10-04-archiving-web-sites.mdwn b/blog/2018-10-04-archiving-web-sites.mdwn
index 4d234f8d..c7428d8e 100644
--- a/blog/2018-10-04-archiving-web-sites.mdwn
+++ b/blog/2018-10-04-archiving-web-sites.mdwn
@@ -250,4 +250,21 @@ itself][].
 [first appeared]: https://lwn.net/Articles/766374/
 [Linux Weekly News]: http://lwn.net/
 
+>  As usual, here's the list of issues and patches generated while researching this article:
+>
+> * [fix broken link to WARC specification](https://github.com/iipc/warc-specifications/pull/45)
+> * [sample Apache configuration](https://github.com/webrecorder/pywb/pull/374) for pywb
+> * [make job status less chatty](https://github.com/ArchiveTeam/ArchiveBot/pull/326) in ArchiveBot
+> * [Debian packaging](https://github.com/jjjake/internetarchive/issues/270) of the `ia` commandline tool
+> * [document the --large flag](https://github.com/ArchiveTeam/ArchiveBot/pull/330) in ArchiveBot
+> * [mention collections](https://github.com/jjjake/internetarchive/pull/272) in the `ia` documentation
+> * [fix warnings in docs builds](https://github.com/jjjake/internetarchive/pull/273) of `ia`
+>
+> I also want to personally thank the folks in the #archivebot channel
+> for their assistance and letting me play with their toys.
+>
+> Finally, the Pamplemousse crawl is now [available on the Internet
+> Archive](https://archive.org/details/pamplemousse-ca), it might end up in the wayback machine at some point if
+> the Archive curators think it is worth it.
+
 [[!tag debian-planet lwn archive web warc python-planet]]

rename article
diff --git a/blog/archive.mdwn b/blog/2018-10-04-archiving-web-sites.mdwn
similarity index 100%
rename from blog/archive.mdwn
rename to blog/2018-10-04-archiving-web-sites.mdwn

add tags
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index 9b6d7061..4d234f8d 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -250,4 +250,4 @@ itself][].
 [first appeared]: https://lwn.net/Articles/766374/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn archive web warc python-planet]]

article online
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index 739a938b..9b6d7061 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -1,11 +1,8 @@
 [[!meta title="Archiving web sites"]]
 
-\[LWN subscriber-only content\]
--------------------------------
+[[!meta date="2018-09-25T00:00:00+0000"]]
 
-[[!meta date="2018-09-23T00:00:00+0000"]]
-
-[[!meta updated="2018-09-25T09:17:23-0400"]]
+[[!meta updated="2018-10-04T13:46:25-0400"]]
 
 [[!toc levels=2]]
 
@@ -227,6 +224,8 @@ through that trouble, the Internet Archive seems to be here to stay and
 Archive Team is obviously [working on a backup of the Internet Archive
 itself][].
 
+------------------------------------------------------------------------
+
   [resources]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem
   [Wpull]: https://github.com/chfoo/wpull
   [PhantomJS]: http://phantomjs.org/

test two more extensions
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index fba5d52a..3f491850 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -52,11 +52,14 @@ I am testing those and they might make it to the top list once I'm happy:
    package"]], [source](https://github.com/browserpass/browserpass)) - super fast access to my passwords. use
   some magic mumbo-jumbo message passing thing which feels a bit
    creepy.
+ * [display anchors](https://addons.mozilla.org/en-US/firefox/addon/display-_anchors/) (no deb, [source](https://github.com/Rob--W/display-anchors))
  * [Multi-account containers][] (no deb, [source](https://github.com/mozilla/multi-account-containers/)) - kind of
    useful, but also a bit strange: impossible to assign an existing
    tab to a container, UI is very clikety (can't open a
    container-specific tab from the keyboard), etc. need to click-hold
    on the "+" tab button to choose container.
+ * [Open in Browser](https://addons.mozilla.org/en-US/firefox/addon/open-in-browser/) (no deb, [source](https://github.com/Rob--W/open-in-browser)) - reopen the file in the
+   browser instead of downloading
  * [URL to QR Code](https://addons.mozilla.org/en-US/firefox/addon/url-to-qrcode/?src=search) - (no debian package, [source](https://github.com/smoqadam/url-to-qrcode-firefox-addon)) after
    kicking out that proprietary spyware (!! see below), I found about
    6 different alternatives (this one and [1](https://addons.mozilla.org/en-US/firefox/addon/qr-code-util/), [2](https://addons.mozilla.org/en-US/firefox/addon/fxqrl/), [3](https://addons.mozilla.org/en-US/firefox/addon/ffqrcoder/),

don't trim URLs
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index aec02ca8..fba5d52a 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -253,6 +253,7 @@ I have set the following configuration options:
      with the Yubikey and other 2FA tokens
    * `security.webauth.webauthn` - enable [WebAuthN](https://www.w3.org/TR/webauthn/) support, not
      sure what that's for but it sounds promising
+ * `browser.urlbar.trimURLs`: false. show protocol regardless of URL
 
 I also set privacy parameters following this [user.js](https://gitlab.com/anarcat/scripts/blob/master/firefox-tmp#L7) config
 which, incidentally, is injected in temporary profiles started with

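That kind of preference can also be carried in the profile's user.js file, like the privacy parameters mentioned above, which makes it easier to track; a minimal sketch, assuming a default profile directory (the XXXXXXXX part varies per profile):

    # append the setting to the profile's user.js, read at startup
    cat >> ~/.mozilla/firefox/XXXXXXXX.default/user.js <<'EOF'
    user_pref("browser.urlbar.trimURLs", false);
    EOF
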
two new WNPPs
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index fdcbd1f0..aec02ca8 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -24,10 +24,10 @@ or have used in the past.
 
 I have those extensions installed and use them very frequently:
 
- * [GhostText][] (no debian package, [source](https://github.com/GhostText/GhostText))- "It's all text" replacement
+ * [GhostText][] (no debian package, [#910289](https://bugs.debian.org/910289), [source](https://github.com/GhostText/GhostText))- "It's all text" replacement
  * [uBlock Origin][] ([[!debpkg webext-ublock-origin desc="debian
    package"]], [source](https://github.com/gorhill/uBlock)) - making the web sane again
- * [uMatrix][] (no debian package, [source](https://github.com/gorhill/uMatrix)) - making the web
+ * [uMatrix][] (no debian package, [#891859](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=891859), [source](https://github.com/gorhill/uMatrix)) - making the web
    somewhat safe again
  * [Wallabager][] (no debian package, [source](https://github.com/wallabag/wallabagger)) - to YOLO a bunch
    of links in a pile outside my web browser that I can read offline

extend toc
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index a36dd89c..fdcbd1f0 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -17,7 +17,12 @@ buster or later, the quantum version is available as a Debian package
 Extensions
 ----------
 
-I have those extensions installed:
+This section documents the [Firefox add-ons](https://addons.mozilla.org/) I am using, testing,
+or have used in the past.
+
+### Installed
+
+I have those extensions installed and use them very frequently:
 
  * [GhostText][] (no debian package, [source](https://github.com/GhostText/GhostText))- "It's all text" replacement
  * [uBlock Origin][] ([[!debpkg webext-ublock-origin desc="debian
@@ -37,7 +42,11 @@ I have those extensions installed:
 [site engagement]: https://www.chromium.org/developers/design-documents/site-engagement
 [infamous privacy intrusions]: https://lwn.net/Articles/648392/
 
-I am testing those:
+Ideally, all of those should be packaged for Debian.
+
+### In testing
+
+I am testing those and they might make it to the top list once I'm happy:
 
  * [browserpass-ce](https://addons.mozilla.org/en-US/firefox/addon/browserpass-ce/) ([[!debpkg webext-browserpass desc="debian
    package"]], [source](https://github.com/browserpass/browserpass)) - super fast access to my passwords. use
@@ -72,7 +81,14 @@ I am testing those:
 
 [Multi-account containers]: https://github.com/mozilla/multi-account-containers/
 
-I removed those:
+Those should probably not be packaged in Debian until they make it to
+the top list.
+
+### Previously used
+
+I once used those but eventually removed them for various
+reasons. Some are unsupported or non-free software; others are
+inconvenient, too hard to use, or simply irrelevant.
 
  * [Cookie autodelete](https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/) - even though uMatrix stops most cookies
    from being sent, it actually stores them locally. it would be great
@@ -135,6 +151,7 @@ I removed those:
    ecosystem|services/archive]].
 
 [it's all text!]: https://addons.mozilla.org/en-US/firefox/addon/its-all-text/
+
 Surviving the XULocalypse
 -------------------------
 

notes about possible business cards
diff --git a/communication/cards.mdwn b/communication/cards.mdwn
new file mode 100644
index 00000000..cf40d165
--- /dev/null
+++ b/communication/cards.mdwn
@@ -0,0 +1,19 @@
+# ideas for business cards
+
+* [Debian template](https://people.debian.org/~hmh/debian-business-card/)
+* [another source](https://www.debian.org/devel/misc/card.tex)
+
+Another idea: use a simple circle A + cAt like the badge, with a nice
+font.
+
+Some font ideas:
+
+ * https://fonts.google.com/specimen/Sanchez
+ * https://fonts.google.com/specimen/Playfair+Display (in texlive)
+ * https://fonts.google.com/specimen/Libre+Baskerville
+
+There's also a GSF baskerville font in texlive which may be different
+from the above but still interesting.
+
+Then make an embosser with [this](http://www.thestampmaker.com/Products/Pocket-Embosser-with-Your-Text__EMBOSSER_WITH_TEXT_POCKET.aspx), maybe with "PROPERTY IS THEFT /
+LA PROPRIÉTÉ C'EST DU VOL" around with the circle A in the middle.

replace wayback machine plugin with a simpler more versatile plugin
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 17baccd0..a36dd89c 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -66,11 +66,12 @@ I am testing those:
  * [Switch container](https://addons.mozilla.org/en-US/firefox/addon/switch-container/) (no deb, [source](https://gitlab.com/mjanetmars/switch-container)) - fixes *one* of the
    issues with multi-account containers (ie. moving tab to another
    container)
- * [Wayback machine](https://addons.mozilla.org/en-US/firefox/addon/wayback-machine_new/) (no deb, [source](https://github.com/internetarchive/wayback-machine-chrome)?) - i also have
-   bookmarklets, but this could work better! Unfortunately, it doesn't
-   work with other archival sites like archive.is or Google's cache.
+ * [View Page Archive & Cache](https://addons.mozilla.org/en-US/firefox/addon/view-page-archive/) (no deb, [source](https://github.com/dessant/view-page-archive/)) - load a page in
+   one or many page archives. No "save" button unfortunately, but it is
+   good enough for my purposes.
 
 [Multi-account containers]: https://github.com/mozilla/multi-account-containers/
+
 I removed those:
 
  * [Cookie autodelete](https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/) - even though uMatrix stops most cookies
@@ -115,6 +116,14 @@ I removed those:
  * [U2F Support](https://addons.mozilla.org/en-US/firefox/addon/u2f-support-add-on/), is now unnecessary as it is builtin, starting
    with FF 57 (see [issue #59](https://github.com/prefiks/u2f4moz/issues/59#issuecomment-325768286)). the upstream issue
    was [#1065729](https://bugzilla.mozilla.org/show_bug.cgi?id=1065729)
+ * [Wayback machine](https://addons.mozilla.org/en-US/firefox/addon/wayback-machine_new/) (no deb, [source](https://github.com/internetarchive/wayback-machine-chrome)?) - i also have
+   bookmarklets, but this could work better! Unfortunately, it doesn't
+   work with other archival sites like archive.is or Google's
+   cache. It also tries to be too smart about broken sites: it will
+   try to show the archive.org version when it "thinks" the website is
+   down, but it often fails to notice when a site is down or thinks
+   it's down when it isn't. Replaced with [View Page Archive &
+   Cache][].
  * [zotero](https://www.zotero.org/) is in a bad shape in Debian. The "XUL" extension is
    gone from Zotero 5.0, and the 4.0 extension will stop working
    because upstream will drop support in 2018. Debian is scrambling to

remove broken link refering to private email
diff --git a/blog/2018-10-01-report.mdwn b/blog/2018-10-01-report.mdwn
index 00f68ad9..992e1100 100644
--- a/blog/2018-10-01-report.mdwn
+++ b/blog/2018-10-01-report.mdwn
@@ -82,7 +82,7 @@ updates next.
 I worked more on the GnuTLS research as a [short followup](https://lists.debian.org/87va6l8rsc.fsf@curie.anarc.at) to our
 [previous discussion](https://lists.debian.org/871saexlbf.fsf@curie.anarc.at).
 
-I [wrote the researchers](https://lists.debian.org/87efddmlru.fsf@curie.anarc.at) who "still stand behind what is written
+I wrote the researchers who "still stand behind what is written
 in the paper" and believe the current fix in GnuTLS is
 incomplete. GnuTLS upstream seems to agree, more or less, but [point
 out](https://gitlab.com/gnutls/gnutls/issues/456#note_105621260) that the fix, even if incomplete, greatly reduces the scope of

Added a comment: Firefox privacy resources
diff --git a/blog/2018-10-01-report/comment_1_21403d3d544a9cbf4bc88107ad60c139._comment b/blog/2018-10-01-report/comment_1_21403d3d544a9cbf4bc88107ad60c139._comment
new file mode 100644
index 00000000..b86fef19
--- /dev/null
+++ b/blog/2018-10-01-report/comment_1_21403d3d544a9cbf4bc88107ad60c139._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ ip="96.127.232.203"
+ claimedauthor="Robin"
+ url="http://robin.millette.info/"
+ subject="Firefox privacy resources"
+ date="2018-10-01T21:24:26Z"
+ content="""
+There's the addon [Privacy Settings](https://addons.mozilla.org/fr/firefox/addon/privacy-settings/) by Jeremy Schomery but it was last updated 9 months ago.
+
+Mozilla's wiki has an exhaustive page on [privacy tweeks](https://wiki.mozilla.org/Privacy/Privacy_Task_Force/firefox_about_config_privacy_tweeks) but I agree with the whack-a-mole image.
+
+Another page with similar info is in the form of a [GitHub gist](https://gist.github.com/0XDE57/fbd302cef7693e62c769).
+
+Finally, there's this ongoing comprehensive [user.js template for configuring and hardening Firefox privacy, security and anti-fingerprinting](https://github.com/ghacksuserjs/ghacks-user.js) and [wiki](https://github.com/ghacksuserjs/ghacks-user.js/wiki).
+
+Most of these links come from the <https://restoreprivacy.com/firefox-privacy/> page and do not reflect my personal experience.
+"""]]

fix headings, add refs for nm
diff --git a/blog/2018-10-01-report.mdwn b/blog/2018-10-01-report.mdwn
index aac64d26..00f68ad9 100644
--- a/blog/2018-10-01-report.mdwn
+++ b/blog/2018-10-01-report.mdwn
@@ -122,7 +122,7 @@ Other free software work
 I have, this month again, been quite spread out on many unrelated
 projects unfortunately.
 
-### Mastodon
+## Mastodon
 
 I've played around with the latest attempt from the free software
 community to come up with a "federation" model to replace Twitter and
@@ -183,7 +183,7 @@ to enforce two-way sharing between followers, the approach taken by
 Only time will tell, I guess, but Mastodon does look like a promising
 platform, at least in terms of raw numbers of users...
 
-### The ultimate paste bin?
+## The ultimate paste bin?
 
 I've started switching towards [ptpb.pw](https://ptpb.pw/) as a pastebin. Besides the
 unfortunate cryptic name, it's a great tool: multiple pastes are
@@ -196,7 +196,7 @@ I like the simplistic approach to the API that makes it easy to use
 from any client. I've submitted the above feature request and a
 [trivial patch](https://github.com/ptpb/pb_cli/pull/9) so far.
 
-### ELPA packaging work
+## ELPA packaging work
 
 I've done a few reviews and sponsoring of Emacs Lisp Packages ("ELPA")
 for Debian, mostly for packages I requested myself but which were so
@@ -215,7 +215,7 @@ and should support multiple languages. It seems we are constantly
 solving this problem for each ecosystem while the issues are
 similar...
 
-### Firefox privacy issues
+## Firefox privacy issues
 
 I went down another rabbit hole after learning about Mozilla's plan to
 force more or less [mandatory telemetry](https://dustri.org/b/mozilla-is-still-screwing-around-with-privacy-in-firefox.html) in future versions of
@@ -236,7 +236,7 @@ things. Instead, Mozilla is forcing us to play "whack-a-mole" as they
 pop out another undocumented configuration item with every other
 release.
 
-### Other work
+## Other work
 
  * migrated from [once](https://0xacab.org/guido/once) to [pass-otp](https://github.com/tadfisher/pass-otp) for one time password
    storage
@@ -266,11 +266,11 @@ release.
    be orphaned long.
 
  * discussed with the [[!debpkg notmuch]] upstream regarding
-   "background OpenPGP key updates" and "attachment checks"
-   (notmuchmail.org is down at the moment, links coming)
+   [background OpenPGP key updates](https://nmbug.notmuchmail.org/nmweb/show/871saawphm.fsf%40curie.anarc.at) and [attachment checks](https://nmbug.notmuchmail.org/nmweb/show/20180903175711.16141-1-anarcat%40debian.org), now
+   picked up by David Edmondson, thanks!
 
  * [researched](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=223988#17) the history of the venerable `fortune` command and
-   how it could be improved (!)
+   how it could be improved
 
  * pushed a few commits to [[!debpkg monkeysign]] to try and fix the
    numerous RC bugs threatening its inclusion in Debian Buster, still

creating tag page tag/mastodon
diff --git a/tag/mastodon.mdwn b/tag/mastodon.mdwn
new file mode 100644
index 00000000..1e25ce9d
--- /dev/null
+++ b/tag/mastodon.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged mastodon"]]
+
+[[!inline pages="tagged(mastodon)" actions="no" archive="yes"
+feedshow=10]]

report
diff --git a/blog/2018-10-01-report.mdwn b/blog/2018-10-01-report.mdwn
new file mode 100644
index 00000000..aac64d26
--- /dev/null
+++ b/blog/2018-10-01-report.mdwn
@@ -0,0 +1,280 @@
+[[!meta title="October 2018 report: LTS, Mastodon, Firefox privacy, etc"]]
+
+[[!toc levels=2]]
+
+Debian Long Term Support (LTS)
+==============================
+
+This is my monthly [Debian LTS][] report. 
+
+[Debian LTS]: https://www.freexian.com/services/debian-lts.html
+
+## Python updates
+
+Uploaded [DLA-1519-1](https://lists.debian.org/20180925234743.GA24642@curie.anarc.at) and [DLA-1520-1](https://lists.debian.org/20180926002639.GA28718@curie.anarc.at) to fix
+[CVE-2018-1000802](https://security-tracker.debian.org/tracker/CVE-2018-1000802), [CVE-2017-1000158](https://security-tracker.debian.org/tracker/CVE-2017-1000158), [CVE-2018-1061](https://security-tracker.debian.org/tracker/CVE-2018-1061) and
+[CVE-2018-1060](https://security-tracker.debian.org/tracker/CVE-2018-1060) in Python 2.7 and 3.4. The latter three were
+originally marked as `no-dsa` but the fix was trivial to backport. I
+also found that CVE-2017-1000158 was actually relevant for 3.4 even
+though it was not marked as such in the tracker.
+        
+[CVE-2018-1000030](https://security-tracker.debian.org/tracker/CVE-2018-1000030) was skipped because the fix was too intrusive
+and unclear.
+
+## Enigmail investigations
+
+Security support for Thunderbird and Firefox versions from jessie has
+stopped upstream. Considering that the Debian security team bit the
+bullet and updated those in stretch, the consensus seems to be that
+the versions in jessie will also be updated, which will break
+third-party extensions in jessie.
+
+One of the main victims of the
+[[XULocalypse|software/desktop/firefox/#surviving-the-xulocalypse]] is
+Enigmail, which completely [stopped working](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=909000) after the stretch
+update. I looked at how we could handle this. I first [proposed](https://lists.debian.org/871s9fps8e.fsf@curie.anarc.at) to
+wait before trying to patch the Enigmail version in jessie since it
+would break when the Thunderbird updates land. I then detailed
+[five options](https://lists.debian.org/debian-lts/2018/09/msg00067.html) for the Enigmail security update:
+
+ 1. update GnuPG 2 in jessie-security to work with Enigmail, which
+    could break unrelated things
+
+ 2. same as 1, but in jessie-backports-sloppy
+ 
+ 3. package the JavaScript dependencies to ship Enigmail with
+    OpenPGP.js correctly.
+
+ 4. remove Enigmail from jessie
+
+ 5. backport only some patches to GPG 2 in jessie
+
+I then looked at helping the Enigmail maintainers by reviewing the
+[OpenPGP.js packaging](https://bugs.debian.org/787774) through which I found a bug in the
+JavaScript packaging toolchain, which diverged into a [patch](https://github.com/LeoIannacone/npm2deb/pull/122) in
+npm2deb to fix source package detection and an Emacs [function](https://github.com/bbatsov/crux/issues/62) to
+write to multiple files. (!!) That work was not directly useful to
+jessie, I must admit, but it did end up clarifying which dependencies
+were missing for OpenPGP.js to land, which were clearly out of reach of an
+LTS update.
+
+Switching gears, I tried to help the maintainer untangle the
+JavaScript mess between multiple copies of code in TB, FF (with
+itself), and Enigmail's process handling routines; to call GPG
+properly with multiple file descriptors for password, clear-text,
+`statusfd`, and output; to have Autocrypt be able to handle "Autocrypt
+Setup Messages" (ASM) properly ([bug #908510](https://bugs.debian.org/908510)); to finally make the
+test suite pass. The alternative here would be to simply rip Autocrypt
+out of Enigmail for the jessie update, but this would mean diverging
+significantly from the upstream version.
+
+Reports of Enigmail working with older versions of GPG are deceiving,
+as that configuration introduces unrelated security issues ([T4017](https://dev.gnupg.org/T4017)
+and [T4018](https://dev.gnupg.org/T4018) in upstream's bugtracker).
+
+So much more work remains on backporting Enigmail, but I might wait
+for the stable/unstable updates to complete before pushing that work
+further. Instead, I might focus on the Thunderbird and Firefox
+updates next.
+
+### GnuTLS
+
+I worked more on the GnuTLS research as a [short followup](https://lists.debian.org/87va6l8rsc.fsf@curie.anarc.at) to our
+[previous discussion](https://lists.debian.org/871saexlbf.fsf@curie.anarc.at).
+
+I [wrote the researchers](https://lists.debian.org/87efddmlru.fsf@curie.anarc.at) who "still stand behind what is written
+in the paper" and believe the current fix in GnuTLS is
+incomplete. GnuTLS upstream seems to agree, more or less, but [point
+out](https://gitlab.com/gnutls/gnutls/issues/456#note_105621260) that the fix, even if incomplete, greatly reduces the scope of
+those vulnerabilities and a [long-term fix](https://gitlab.com/gnutls/gnutls/issues/503) is underway.
+
+Next step, therefore, is deciding if we backport the patches or just
+upgrade to the latest 3.3.x series, as the ABI/API changes are minor
+(only additions).
+
+## Other work
+
+ * completed the work on gdm3 and git-annex by [uploading
+   DLA-1494-1](https://lists.debian.org/20180905182849.GA30901@curie.anarc.at) and [DLA-1495-1](https://lists.debian.org/20180905192850.GA21301@curie.anarc.at)
+
+ * fixed [[!debbug 908062]] in devscripts to make `dch` generate proper
+   version numbers since jessie was released
+
+ * [checked](https://lists.debian.org/87efdhpcny.fsf@curie.anarc.at) with the SpamAssassin maintainer regarding the LTS
+   update and whether we just use 3.4.2 across all suites
+
+ * [reviewed and tested](https://lists.debian.org/87sh1wnrgr.fsf@curie.anarc.at) [Hugo's work](https://lists.debian.org/20180915160402.GA2082@hle-laptop.local) on [[!debpkg
+   389-ds]]. That involved getting familiar with that "other" slapd
+   server (apart from OpenLDAP) which I did not know about.
+
+ * checked that kdepim doesn't load external content so it is not
+   vulnerable to [EFAIL](https://security-tracker.debian.org/tracker/CVE-2017-17689) by default. The proposed upstream patch
+   changes the API so that work is postponed.
+   
+ * triaged the Xen security issues by severity
+
+ * filed bugs about Docker security issues ([[!debcve CVE-2017-14992]]
+   and [[!debcve CVE-2018-10892]])
+
+Other free software work
+========================
+
+I have, this month again, been quite spread out on many unrelated
+projects unfortunately.
+
+### Mastodon
+
+I've played around with the latest attempt from the free software
+community to come up with a "federation" model to replace Twitter and
+other social networks, [Mastodon](https://joinmastodon.org/). I've had an account for a while
+but I haven't talked about it much here yet.
+
+My [Mastodon account](https://social.weho.st/@anarcat) is linked with my [Twitter account](https://twitter.com/theanarcat)
+through some unofficial [Twitter cross-posting app](https://crossposter.masto.donte.com.br/) which more or
+less works. Another "app" I use is the [toot client](https://github.com/ihabunek/toot) to connect my
+website with Mastodon through [feed2exec](https://feed2exec.readthedocs.io/).
+
+And because all of this social networking stuff is just IRC 2.0, I
+read it all through my IRC client, thanks to [Bitlbee](https://www.bitlbee.org/), and Mastodon
+is (thankfully) no exception. Unfortunately, there's a problem in my
+hosting provider's configuration which has made it impossible to read
+Mastodon status from Bitlbee for a while. I've created a test profile
+on the main Mastodon instance to double-check, and indeed, Bitlbee
+works fine there.
+
+Before I figured that out, I tried upgrading the [Bitlbee Mastodon
+bridge](https://alexschroeder.ch/software/Bitlbee_Mastodon) (for which I also filed an [RFP](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=909952)) and found that a
+[regression](https://alexschroeder.ch/software/segfault_after_upgrade) had been introduced somewhere after 1.3.1. On the plus
+side, the [feature request](https://web.archive.org/web/20180212220543/https://github.com/kensanata/bitlbee-mastodon/issues/17) I filed to allow for custom visibility
+statuses from Bitlbee has been accepted, which means it's now possible
+to send "private" messages from Bitlbee.
+
+Those messages, unfortunately, are not *really* private: they are
+visible to all followers, which, in the social networking world, means
+a lot of people. In my case, I had already accepted over a dozen
+followers before realizing how that worked, and I do not really know
+or trust most of those people. I still have 15 pending follow requests
+which I don't want to approve until there's a better solution, which
+would probably involve [two levels of followship](https://github.com/tootsuite/mastodon/issues/5686). There's at least
+[one proposal](https://github.com/tootsuite/mastodon/pull/8682) to fix this already.
+
+Another thing I'm concerned about with Mastodon is [account
+migration](https://github.com/tootsuite/mastodon/issues/177): what happens if I'm unhappy with my current host? Or if
+I prefer to host it myself? My online identity is strongly tied with
+that hostname and there doesn't seem to be good mechanisms to support
+moving around Mastodon instances. OpenID had this concept of
+delegation where the real OpenID provider could be discovered and
+redirected, keeping a consistent identity. Mastodon's proposed
+solutions seem to aim at using [redirections](https://github.com/tootsuite/mastodon/issues/8465) or at least
+[informing users your account has moved](https://github.com/tootsuite/mastodon/issues/8003) which isn't as nice, but
+might be an acceptable long-term compromise.
+
+Finally, it seems that Mastodon will likely end up in the same space
+as email with regards to abuse: we are already seeing [block lists](https://github.com/dzuk-mutant/blockchain)
+show up to deal with abusive servers, which is horribly reminiscent of
+the early days of spam fighting, when such lists were kept by hand (as
+opposed to using Bayesian filtering or machine learning). Fundamentally,
+I'm worried about the viability of this ecosystem, just like I'm
+concerned about the amount of fake news, spam, and harassment that
+takes place on
+commercial platforms. One theory is that the only way to fix this is
+to enforce two-way sharing between followers, the approach taken by
+[Manyverse](https://staltz.com/early-days-in-the-manyverse.html) and [Scuttlebutt](https://www.scuttlebutt.nz/).
+
+Only time will tell, I guess, but Mastodon does look like a promising
+platform, at least in terms of raw numbers of users...
+
+### The ultimate paste bin?
+
+I've started switching towards [ptpb.pw](https://ptpb.pw/) as a pastebin. Besides the
+unfortunate cryptic name, it's a great tool: multiple pastes are
+deduplicated, large pastes are allowed, there is a (limited)
+server-side viewing mechanism (allowing for some multimedia), etc. The
+only things missing are "burn after reading" (one-shot links) and
+[client-side encryption](https://github.com/ptpb/pb/issues/230), although the latter is planned.
+

(diff file truncated)
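The ptpb.pw API praised in that (truncated) entry really is simple enough to use from any client; a sketch of typical pastes with curl, following the pb API (the file name is made up):

    # paste a file; the server replies with the paste URL
    curl -F c=@some-file.txt https://ptpb.pw
    # or paste standard input
    echo "hello world" | curl -F c=@- https://ptpb.pw
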
yolo packages
diff --git a/software/packages.yml b/software/packages.yml
index 73137e90..109f72c7 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -75,7 +75,7 @@
       - irssi-scripts
       - mutt
       - neomutt
-      - offlineimap
+      - nullmailer
       - syncmaildir
  
   - name: install desktop packages
@@ -106,6 +106,7 @@
       - git-mediawiki
       - gobby
       - gnutls-bin
+      - gucharmap
       - hledger
       - i3
       - jmtpfs
@@ -121,7 +122,6 @@
       - monkeysign
       - monkeysphere
       - mpd
-      - msmtp-mta
       - mumble
       - mutt
       - ncdu
@@ -154,6 +154,7 @@
       - rxvt-unicode
       - scdaemon
       - slop
+      - sm
       - surfraw
       - sxiv
       - taffybar
@@ -196,9 +197,11 @@
       - apt-listbugs
       - aptitude
       - bats
+      - binwalk
       - bzr
       - build-essential
       - cdbs
+      - cloc
       - curl
       - colordiff
       - cvs
@@ -339,6 +342,7 @@
       - dia
       - dispcalgui
       - feh
+      - geeqie
       - gimp
       - inkscape
       - rapid-photo-downloader
@@ -406,19 +410,24 @@
       - debsums
       - dnsutils
       - dstat
+      - duff
       - etckeeper
       - f3
       - git
       - goaccess
       - gparted
+      - hashdeep
       - hdparm
       - hopenpgp-tools
+      - htop
+      - hwinfo
       - i7z
       - iftop
       - intel-microcode
       - ioping
       - ipcalc
       - iperf3
+      - iptraf
       - libnss3-tools
       - libu2f-host0
       - memtest86+
@@ -426,6 +435,8 @@
       - mtr-tiny
       - netcat
       - netcat-openbsd
+      - netdata
+      - nethogs
       - nmap
       - oping
       - passwdqc
@@ -434,7 +445,7 @@
       - pwgen
       - rcs
       - reptyr
-      - restic
+      - rmlint
       - rsync
       - screen
       - sdparm
@@ -449,6 +460,7 @@
       - tor
       - tuptime
       - ttyrec
+      - undistract-me
       - whois
       - wireguard-dkms
       - wireguard-tools

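For context, that packages.yml file looks like an Ansible playbook; assuming that is the case, a local run would be roughly:

    # apply the playbook to the local machine only
    ansible-playbook --inventory localhost, --connection local \
        software/packages.yml
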
a bit of research on telescopes
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 7a3329da..62e1b996 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -214,6 +214,10 @@ Gogosses:
  2. un *vrai* doubleur, le [Fujinon Teleconverter XF2X TC WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) - un
     vrai doubleur, probablement plus fiable, mais un gros [450$ USD
     chez B&H](https://www.bhphotovideo.com/c/product/1254242-REG/fujifilm_16516271_xf_2x_tc_wr.html) et rien chez Lozeau (juste le 1.4x)
+ 3. de la meilleure photo astronomique. peut-être avec un adaptateur
+    à télescope. [80$USD](https://www.telescopeadapters.com/best-sellers/522-2-ultrawide-true-2-prime-focus-adapter.html) pour un adaptateur 2", [exemples plus ou
+    moins concluants](https://www.lost-infinity.com/fujifilm-x-t1-2-telescope-adapter/). certains prennent de bonnes poses [sans
+    aucun adaptateur](https://www.dpreview.com/forums/thread/3656867)
  5. un holder a lentilles "lens flipper" [75$USD @ B&H](https://www.bhphotovideo.com/c/product/1203066-REG/gowing_8809416750118_lens_flipper_for_mount.html)
  6. un tube macro [MCEX-11 ou MCEX-16](http://www.fujifilm.com/products/digital_cameras/accessories/lens/#mountadapter), [this table](http://www.fujifilm.com/products/digital_cameras/accessories/pdf/mcex_01.pdf) shows the
     magnification/distance for various lens. MCEX-16, for example,

made a few purchases
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 4ad80278..7a3329da 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -209,18 +209,11 @@ Reference
 
 Gogosses:
 
- 1. lens cap holder: [regular sensei](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi) seems fine, 2USD x 5
- 2. UV filter for zoom ø62mm, [9-50$ on B&H](https://www.bhphotovideo.com/c/search?setNs=p_OVER_ALL_RATE%7c1&Ns=p_OVER_ALL_RATE%7c1&ci=112&fct=fct_circular-sizes_27%7c62mm%2bfct_filter-type_39%7cuv&srtclk=sort&N=4026728358&), e.g. [B+W 62mm UV
-    Haze SC 010 Filter](https://www.bhphotovideo.com/c/product/11969-REG/B_W_65070127_62mm_Ultraviolet_UV_Filter.html) at 20$
- 3. [bigger eyecup](https://www.bhphotovideo.com/c/product/1088372-REG/vello_epf_xtl_long_rubber_eye_piece.html) 10$ (or [14$](https://www.bhphotovideo.com/c/product/1046759-REG/fujifilm_ec_xt_l_x_t1_extended_eyecup.html)?)
- 3. [Spare cover kit](https://www.bhphotovideo.com/c/product/1263618-REG/fujifilm_16519522_x_t2_cover_kit.html) (yes, I already lost the flash sync terminal
+ 1. [Spare cover kit](https://www.bhphotovideo.com/c/product/1263618-REG/fujifilm_16519522_x_t2_cover_kit.html) (yes, I already lost the flash sync terminal
     cover), 9$USD, B/O
- 4. un doubleur:
-    * [Vivitar 62mm 2.2x](https://www.bhphotovideo.com/c/product/1150442-REG/vivitar_viv_62t_62mm_2_2x_telephoto_attachment.html) - cheapo?, 38$USD
-    * [Bower VLB3558 3.5x](https://www.bhphotovideo.com/c/product/700003-REG/Bower_VLB3558_VLB3558_3_5x_Telephoto_Lens.html) - chromatic aberration? 28$USD
-    * [Fujinon Teleconverter XF2X TC WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) - un vrai doubleur,
-      probablement plus fiable, mais un gros [450$ USD chez B&H](https://www.bhphotovideo.com/c/product/1254242-REG/fujifilm_16516271_xf_2x_tc_wr.html) et
-      rien chez Lozeau (juste le 1.4x)
+ 2. un *vrai* doubleur, le [Fujinon Teleconverter XF2X TC WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) - un
+    vrai doubleur, probablement plus fiable, mais un gros [450$ USD
+    chez B&H](https://www.bhphotovideo.com/c/product/1254242-REG/fujifilm_16516271_xf_2x_tc_wr.html) et rien chez Lozeau (juste le 1.4x)
  5. un holder a lentilles "lens flipper" [75$USD @ B&H](https://www.bhphotovideo.com/c/product/1203066-REG/gowing_8809416750118_lens_flipper_for_mount.html)
  6. un tube macro [MCEX-11 ou MCEX-16](http://www.fujifilm.com/products/digital_cameras/accessories/lens/#mountadapter), [this table](http://www.fujifilm.com/products/digital_cameras/accessories/pdf/mcex_01.pdf) shows the
     magnification/distance for various lens. MCEX-16, for example,
@@ -232,7 +225,6 @@ Gogosses:
     Henry's](https://www.henrys.com/88489-FUJIFILM-X-MOUNT-16MM-EXTENSION-TUBE.aspx), [110$CAD a lozeau](https://lozeau.com/produits/fr/fujifilm/tube-d-extension-macro-fujifilm-mcex-16-p25463/), [good comparison between the
    two lenses](https://www.fujivsfuji.com/mcex-11-vs-mcex-16/). might not be worth getting the MCEX-16 but it's
     cheaper??
- 2. [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD
 
 Lentilles:
 
@@ -273,6 +265,18 @@ bias. (Direct quote: "the Fuji X-Mount Lenses are all extraordinary.")
 He is linked above because he's one of the few reviewers that has good
 coverage of almost the whole Fujinon X series.
 
+Acheté:
+
+ 1. lens cap holder: [regular sensei](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi) seems fine, 2USD x 5
+ 2. UV filter for zoom ø62mm, [9-50$ on B&H](https://www.bhphotovideo.com/c/search?setNs=p_OVER_ALL_RATE%7c1&Ns=p_OVER_ALL_RATE%7c1&ci=112&fct=fct_circular-sizes_27%7c62mm%2bfct_filter-type_39%7cuv&srtclk=sort&N=4026728358&), e.g. [B+W 62mm UV
+    Haze SC 010 Filter](https://www.bhphotovideo.com/c/product/11969-REG/B_W_65070127_62mm_Ultraviolet_UV_Filter.html) at 20$
+ 3. [bigger eyecup](https://www.bhphotovideo.com/c/product/1088372-REG/vello_epf_xtl_long_rubber_eye_piece.html) 10$ (or [14$](https://www.bhphotovideo.com/c/product/1046759-REG/fujifilm_ec_xt_l_x_t1_extended_eyecup.html)?)
+ 4. [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD
+ 5. un doubleur cheap, le [Vivitar 62mm 2.2x](https://www.bhphotovideo.com/c/product/1150442-REG/vivitar_viv_62t_62mm_2_2x_telephoto_attachment.html) à 28$. Le [Bower
+    VLB3558 3.5x](https://www.bhphotovideo.com/c/product/700003-REG/Bower_VLB3558_VLB3558_3_5x_Telephoto_Lens.html) semblait intéressant, mais il n'est plus en vente
+    chez B&H
+
+
 2013-2017 shopping
 ==================
 

restructure
diff --git a/pleinair/liste.mdwn b/pleinair/liste.mdwn
index 3c2ba107..54e3d4f3 100644
--- a/pleinair/liste.mdwn
+++ b/pleinair/liste.mdwn
@@ -292,31 +292,6 @@ Médicaments:
  [Épinéphrine]: https://en.wikipedia.org/wiki/Epinephrine
  [discussion sur wikipedia]: https://en.wikipedia.org/wiki/Talk:Anaphylaxis#contradiction_with_Benadryl_.2F_Diphenhydramine_article
 
-## Références
-
-Les différentes sources qui ont permis de créer cette page.
-
- * [Liste personnelle de Antoine][]
- * Liste personnelle de [SuperOli][]
- * Fiche technique, [La traversée de Charlevoix][]
- * Randonnée pédestre au Québec, Guide Ulysse, 1997
- * [AMC: The Right Stuff… for winter][]
- * [Liste d’expédition du Parc national de la Mauricie][]
- * ["MégaListe" du groupe de plein air Koumbit][]
- * [Liste pour un voyage de canot-camping avec 2 jours d'approche en vélo][]
- * [Équipement requis de Alexhike.com][]
- * [Trousse d'urgence du MSPQ][]
-
- [Liste personnelle de Antoine]: https://anarc.at/pleinair/liste/
- [SuperOli]: https://wiki.koumbit.net/SuperOli
- [La traversée de Charlevoix]: http://www.charlevoix.net/traverse/
- [AMC: The Right Stuff… for winter]: http://www.outdoors.org/publications/outdoors/2002/2002-winter-gear.cfm
- [Liste d’expédition du Parc national de la Mauricie]: http://www.pc.gc.ca/pn-np/qc/mauricie/index_f.asp
- ["MégaListe" du groupe de plein air Koumbit]: https://wiki.koumbit.net/PleinAir/MégaListe
- [Équipement requis de Alexhike.com]: http://www.alexhike.com/informer/equipements-requis/
- [Liste pour un voyage de canot-camping avec 2 jours d'approche en vélo]: https://wiki.koumbit.net/PleinAir/ListeCanotCamping
- [Trousse d'urgence du MSPQ]: https://www.securitepublique.gouv.qc.ca/securite-civile/se-preparer-aux-sinistres/plan-familial-1/trousse-urgence.html
-
 ## Notes
 
 ### Consommation d'eau
@@ -332,7 +307,7 @@ jour. Le MSPQ (Ministère de la Sécurité Publique du Québec) dit de
 prévoir 2L par personne par jour, pour trois jours, dans les cas
 d'urgence ([source][Trousse d'urgence du MSPQ]).
 
-Pour purifier l'eau:
+#### Pour purifier l'eau:
 
  * Les pompes à eau fonctionnent bien et sont très fiables, mais
    doivent être nettoyées périodiquement, ce qui peut être
@@ -348,7 +323,7 @@ Pour purifier l'eau:
    utilisée, 6 gouttes (0.3mL) 2% par litre + 30 minutes, dans l'eau
    tiède. Efficacité limitée contre Giardia ([source](https://portail-plein-air.weebly.com/traitement-de-leau.html)).
 
-Quelques expériences:
+#### Quelques expériences:
 
  * séjour à un chalet avec électricité sans eau courante, 5 jours, 11
    personnes: 3 cruche de 5 gallons (1L/pers/jour)
@@ -370,3 +345,28 @@ amener. Voici quelques expériences que j'ai noté:
 
  [source]: http://www.cascadedesigns.com/msr/stoves/simple-cooking/whisperlite-universal/product#specs
  [ce site]: http://www.summitpost.org/fuel-consumption-how-much-fuel-to-bring/754460
+
+## Références
+
+Les différentes sources qui ont permis de créer cette page.
+
+ * [Liste personnelle de Antoine][]
+ * Liste personnelle de [SuperOli][]
+ * Fiche technique, [La traversée de Charlevoix][]
+ * Randonnée pédestre au Québec, Guide Ulysse, 1997
+ * [AMC: The Right Stuff… for winter][]
+ * [Liste d’expédition du Parc national de la Mauricie][]
+ * ["MégaListe" du groupe de plein air Koumbit][]
+ * [Liste pour un voyage de canot-camping avec 2 jours d'approche en vélo][]
+ * [Équipement requis de Alexhike.com][]
+ * [Trousse d'urgence du MSPQ][]
+
+ [Liste personnelle de Antoine]: https://anarc.at/pleinair/liste/
+ [SuperOli]: https://wiki.koumbit.net/SuperOli
+ [La traversée de Charlevoix]: http://www.charlevoix.net/traverse/
+ [AMC: The Right Stuff… for winter]: http://www.outdoors.org/publications/outdoors/2002/2002-winter-gear.cfm
+ [Liste d’expédition du Parc national de la Mauricie]: http://www.pc.gc.ca/pn-np/qc/mauricie/index_f.asp
+ ["MégaListe" du groupe de plein air Koumbit]: https://wiki.koumbit.net/PleinAir/MégaListe
+ [Équipement requis de Alexhike.com]: http://www.alexhike.com/informer/equipements-requis/
+ [Liste pour un voyage de canot-camping avec 2 jours d'approche en vélo]: https://wiki.koumbit.net/PleinAir/ListeCanotCamping
+ [Trousse d'urgence du MSPQ]: https://www.securitepublique.gouv.qc.ca/securite-civile/se-preparer-aux-sinistres/plan-familial-1/trousse-urgence.html

s/sources/références/
diff --git a/pleinair/liste.mdwn b/pleinair/liste.mdwn
index 035efe9d..3c2ba107 100644
--- a/pleinair/liste.mdwn
+++ b/pleinair/liste.mdwn
@@ -292,7 +292,9 @@ Médicaments:
  [Épinéphrine]: https://en.wikipedia.org/wiki/Epinephrine
  [discussion sur wikipedia]: https://en.wikipedia.org/wiki/Talk:Anaphylaxis#contradiction_with_Benadryl_.2F_Diphenhydramine_article
 
-## Sources
+## Références
+
+Les différentes sources qui ont permis de créer cette page.
 
  * [Liste personnelle de Antoine][]
  * Liste personnelle de [SuperOli][]

use the MSPQ emergency kit as a reference
diff --git a/pleinair/liste.mdwn b/pleinair/liste.mdwn
index cda52fd6..035efe9d 100644
--- a/pleinair/liste.mdwn
+++ b/pleinair/liste.mdwn
@@ -303,6 +303,7 @@ Médicaments:
  * ["MégaListe" du groupe de plein air Koumbit][]
  * [Liste pour un voyage de canot-camping avec 2 jours d'approche en vélo][]
  * [Équipement requis de Alexhike.com][]
+ * [Trousse d'urgence du MSPQ][]
 
  [Liste personnelle de Antoine]: https://anarc.at/pleinair/liste/
  [SuperOli]: https://wiki.koumbit.net/SuperOli
@@ -312,6 +313,7 @@ Médicaments:
  ["MégaListe" du groupe de plein air Koumbit]: https://wiki.koumbit.net/PleinAir/MégaListe
  [Équipement requis de Alexhike.com]: http://www.alexhike.com/informer/equipements-requis/
  [Liste pour un voyage de canot-camping avec 2 jours d'approche en vélo]: https://wiki.koumbit.net/PleinAir/ListeCanotCamping
+ [Trousse d'urgence du MSPQ]: https://www.securitepublique.gouv.qc.ca/securite-civile/se-preparer-aux-sinistres/plan-familial-1/trousse-urgence.html
 
 ## Notes
 
@@ -324,7 +326,9 @@ mais demande évidemment du carburant pour fondre.
 
 L'été, les demandes d'eau sont plus grandes. Selon la température, on
 compte au moins 1.5L voire deux litres d'eau à boire par personne par
-jour.
+jour. Le MSPQ (Ministère de la Sécurité Publique du Québec) dit de
+prévoir 2L par personne par jour, pour trois jours, dans les cas
+d'urgence ([source][Trousse d'urgence du MSPQ]).
 
 Pour purifier l'eau:
 

update shopping list
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 06b8750f..4ad80278 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -209,9 +209,10 @@ Reference
 
 Gogosses:
 
- 1. lens cap holder: [regular sensei](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi) seems fine, 2USD
+ 1. lens cap holder: [regular sensei](https://www.bhphotovideo.com/c/product/834774-REG/Sensei_CK_L_Cap_Keeper_for_Lens.html?sts=pi) seems fine, 2USD x 5
  2. UV filter for zoom ø62mm, [9-50$ on B&H](https://www.bhphotovideo.com/c/search?setNs=p_OVER_ALL_RATE%7c1&Ns=p_OVER_ALL_RATE%7c1&ci=112&fct=fct_circular-sizes_27%7c62mm%2bfct_filter-type_39%7cuv&srtclk=sort&N=4026728358&), e.g. [B+W 62mm UV
     Haze SC 010 Filter](https://www.bhphotovideo.com/c/product/11969-REG/B_W_65070127_62mm_Ultraviolet_UV_Filter.html) at 20$
+ 3. [bigger eyecup](https://www.bhphotovideo.com/c/product/1088372-REG/vello_epf_xtl_long_rubber_eye_piece.html) 10$ (or [14$](https://www.bhphotovideo.com/c/product/1046759-REG/fujifilm_ec_xt_l_x_t1_extended_eyecup.html)?)
  3. [Spare cover kit](https://www.bhphotovideo.com/c/product/1263618-REG/fujifilm_16519522_x_t2_cover_kit.html) (yes, I already lost the flash sync terminal
     cover), 9$USD, B/O
  4. un doubleur:
@@ -229,13 +230,9 @@ Gogosses:
     subject, which might be impractical for some subjects, especially
     for lighting!), [94$USD](https://www.bhphotovideo.com/c/product/1102440-REG/fujifilm_mcex_11_macro_extension_tubes.html#!)/[87$USD at B&H](https://www.bhphotovideo.com/c/product/1102439-REG/fujifilm_mcex_16_macro_extension_tubes.html), [120$CAD at
     Henry's](https://www.henrys.com/88489-FUJIFILM-X-MOUNT-16MM-EXTENSION-TUBE.aspx), [110$CAD a lozeau](https://lozeau.com/produits/fr/fujifilm/tube-d-extension-macro-fujifilm-mcex-16-p25463/), [good comparison between the
-    two lenses](https://www.fujivsfuji.com/mcex-11-vs-mcex-16/). might not be worth getting the MCE-16
- 2. cleaning gear, not necessary as I found my old gear already:
-    * [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD. haven't looked at alternative brushes
-      and the blower i have has a brush. still looks interesting.
-    * blower are apparently the best solution to clear sensors,
-      e.g. [blower on B&H](https://www.bhphotovideo.com/c/buy/Blowers-Compressed-Air/ci/18806/N/4077634545?origSearch=blower), 5-15$. a [red one](https://www.bhphotovideo.com/c/product/838821-REG/sensei_bl_014_bulb_air_blower_cleaning_system.html) is easier to find
-      in a bag (8$USD). i already have a blower, so not necessary.
+    two lenses](https://www.fujivsfuji.com/mcex-11-vs-mcex-16/). might not be worth getting the MCEX-16 but it's
+    cheaper??
+ 2. [cleaning pen](https://www.bhphotovideo.com/c/product/1051483-REG/lenspen_nlp1_c_nlp1c_lens_pen.html): ~10USD
 
 Lentilles:
 
@@ -263,9 +260,12 @@ Second appareil:
 
 Écarté:
 
- 6. [50mm f/2 R WR ø46](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf50mmf2_r_wr/), not many reviews. 480$ kijiji, 600$
+ * [50mm f/2 R WR ø46](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf50mmf2_r_wr/), not many reviews. 480$ kijiji, 600$
     Lozeau, cheaper slower version of the 56mm, [not good for
     macro](https://www.imaging-resource.com/lenses/fujinon/xf-50mm-f2-r-wr/review/) as small magnification and not much closeup (39cm min)
+  * blowers are apparently the best solution to clean sensors,
+    e.g. [blower on B&H](https://www.bhphotovideo.com/c/buy/Blowers-Compressed-Air/ci/18806/N/4077634545?origSearch=blower), 5-15$. a [red one](https://www.bhphotovideo.com/c/product/838821-REG/sensei_bl_014_bulb_air_blower_cleaning_system.html) is easier to find
+    in a bag (8$USD). i already have a blower, so not necessary.
 
 PS: It looks like Rockwell considers almost all Fujifilm lenses to be
 "extraordinary" in some way, so be warned of the potential

document the --large flag
diff --git a/services/archive.mdwn b/services/archive.mdwn
index 9daab2c2..cd0aefa2 100644
--- a/services/archive.mdwn
+++ b/services/archive.mdwn
@@ -300,3 +300,4 @@ Submitted issues:
  * [sample Apache configuration](https://github.com/webrecorder/pywb/pull/374) for pywb
  * [make job status less chatty](https://github.com/ArchiveTeam/ArchiveBot/pull/326) in ArchiveBot
  * [Debian packaging](https://github.com/jjjake/internetarchive/issues/270) of the `ia` commandline tool
+ * [document the --large flag](https://github.com/ArchiveTeam/ArchiveBot/pull/330) in ArchiveBot

misattributed gandi quote
See https://quoteinvestigator.com/2017/10/23/be-change/ for a full analysis.
diff --git a/sigs.fortune b/sigs.fortune
index e64f87eb..6b78b5af 100644
--- a/sigs.fortune
+++ b/sigs.fortune
@@ -961,8 +961,8 @@ La démocratie c'est cause toujours!
 Vivre tous simplement pour que tous puissent simplement vivre.
                         - Gandhi
 %
-Incarnez les changements que vous voulez voir se produire dans le monde.
-                        - Gandhi
+Be the change you want to see happen.
+                        - Arleen Lorrance, 1974
 %
 La propriété est un piège: ce que nous croyons posséder nous possède.
                         - Alphonse Karr

more details on G-III
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index ace0abee..06b8750f 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -705,8 +705,27 @@ con:
 Ricoh
 -----
 
-They make nice smaller cameras. The Ricoh G-III has good comments from
-friends. Turns out Ricoh is also the first camera I've ever used!
+They make nice smaller cameras. Turns out a Ricoh was also the first
+camera I ever used!
+
+### Ricoh G-III
+
+The Ricoh G-III has received good comments from friends. The camera was
+officially [announced on Sept. 25th](http://news.ricoh-imaging.co.jp/rim_info2/2018/20180925_026135.html) after many rumours and is
+scheduled to be on sale in early 2019.
+
+ * 28mm equiv. F2.8-16 fixed lens
+ * 3-axis OIS sensor
+ * 24 mpixels, APS-C sensor
+ * 1600 ISO max, 1/4000 - 30s
+ * 3" screen, 1037 kpixels, touchscreen
+ * SDHC, internal storage
+ * USB-C
+ * hotshoe flash
+ * 257g
+ * 109 x 62 x 33 mm
+ * wifi
+ * 1080/60p video
 
 Olympus
 -------

mention two more container builders and their problems
diff --git a/software/debian-development.mdwn b/software/debian-development.mdwn
index 1fe93d50..379910b8 100644
--- a/software/debian-development.mdwn
+++ b/software/debian-development.mdwn
@@ -783,7 +783,10 @@ currently packaged as an official Debian package</del> (it is now, see
 [[!debpkg whalebuilder]]) and lacks certain
 features (like [passing custom arguments to dpkg-buildpackage][]) so I
 don't feel it is quite ready yet. For now, if you need better
-isolation, look towards [qemubuilder][] or possibly kvmtool.
+isolation, look towards [qemubuilder][] or possibly kvmtool. There are
+also *two* other container-based builders now: [conbuilder](https://salsa.debian.org/federico/conbuilder) and
+[docker-buildpackage](https://github.com/metux/docker-buildpackage). None of those solutions is implemented as an
+[sbuild](https://lists.debian.org/debian-devel/2018/08/msg00005.html) plugin, which would greatly reduce their complexity.
 
 [qemubuilder]: https://wiki.debian.org/qemubuilder
 [passing custom arguments to dpkg-buildpackage]: https://gitlab.com/uhoreg/whalebuilder/issues/4

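For comparison, the clean-chroot workflow those container builders try to replicate is a plain sbuild invocation; a sketch, with a hypothetical source package:

    # build the package in a disposable unstable chroot
    sbuild --dist=unstable foo_1.0-1.dsc
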
final changes from LWN
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index f8d8c259..739a938b 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -5,19 +5,19 @@
 
 [[!meta date="2018-09-23T00:00:00+0000"]]
 
-[[!meta updated="2018-09-23T20:21:41-0400"]]
+[[!meta updated="2018-09-25T09:17:23-0400"]]
 
 [[!toc levels=2]]
 
-I recently took a deep dive into web site archival for friends who
-were worried about losing control over the hosting of their work
-online in the face of poor system administration or hostile
-removal. This makes website archival an essential instrument in the
-toolbox of any respectable system administrator. As it turns out, some
-sites are much harder to archive than others. This article goes
-through the process of archiving traditional web sites and shows how
-it falls short when confronted with the latest fashions in the
-single-page applications that are bloating the modern web.
+I recently took a deep dive into web site archival for friends who were
+worried about losing control over the hosting of their work online in
+the face of poor system administration or hostile removal. This makes
+web site archival an essential instrument in the toolbox of any system
+administrator. As it turns out, some sites are much harder to archive
+than others. This article goes through the process of archiving
+traditional web sites and shows how it falls short when confronted with
+the latest fashions in the single-page applications that are bloating
+the modern web.
 
 Converting simple sites
 -----------------------
@@ -26,39 +26,38 @@ The days of handcrafted HTML web sites are long gone. Now web sites are
 dynamic and built on the fly using the latest JavaScript, PHP, or Python
 framework. As a result, the sites are more fragile: a database crash,
 spurious upgrade, or unpatched vulnerability might lose data. In my
-previous life as web developer, I had to come to terms
-with the idea that customers expect web sites to basically work forever.
-This expectation matches poorly with "move fast and break things"
-attitude of web development. Working with the [Drupal][]
-content-management system (CMS) was particularly challenging in that
-regard as major upgrades deliberately break compatibility with
-third-party modules, which implies a costly upgrade process that clients
-could seldom afford. The solution was to archive those sites: take a
-living, dynamic web site and turn it into plain HTML files that any web
-server can serve forever. This process is useful for your own dynamic
-sites but also for third-party sites that are outside of your control
-and you might want to safeguard.
+previous life as a web developer, I had to come to terms with the idea
+that customers expect web sites to basically work forever. This
+expectation matches poorly with the "move fast and break things" attitude
+of web development. Working with the [Drupal][] content-management
+system (CMS) was particularly challenging in that regard as major
+upgrades deliberately break compatibility with third-party modules,
+which implies a costly upgrade process that clients could seldom afford.
+The solution was to archive those sites: take a living, dynamic web site
+and turn it into plain HTML files that any web server can serve forever.
+This process is useful for your own dynamic sites but also for
+third-party sites that are outside of your control and you might want to
+safeguard.
 
 For simple or static sites, the venerable [Wget][] program works well.
 The incantation to mirror a full web site, however, is byzantine:
 
         $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
-                           --backup-converted --page-requisites --adjust-extension \
-                           --base=./ --directory-prefix=./ --span-hosts \
-                   --domains=www.example.com,example.com http://www.example.com/
+                    --backup-converted --page-requisites --adjust-extension \
+                    --base=./ --directory-prefix=./ --span-hosts \
+                    --domains=www.example.com,example.com http://www.example.com/
 
 The above downloads the content of the web page, but also crawls
 everything within the specified domains. Before you run this against
 your favorite site, consider the impact such a crawl might have on the
-target site. The above commandline deliberately ignores
-[robots.txt](https://en.wikipedia.org/wiki/Robots_exclusion_standard) rules, as is now [common practice for archivists](https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/),
-and hammer the website as far as it can. Most crawlers have options to
-pause between hits and limit bandwidth usage to avoid overwhelming the
-target.
-
-The above will also fetch "page requisites" like style sheets (CSS),
-images, and scripts. The downloaded page contents are modified so that
-links point to the local copy as well. Any web server can host the
+site. The above command line deliberately ignores [`robots.txt`][]
+rules, as is now [common practice for archivists][], and hammers the
+website as fast as it can. Most crawlers have options to pause between
+hits and limit bandwidth usage to avoid overwhelming the target site.
+
+The above command will also fetch "page requisites" like style sheets
+(CSS), images, and scripts. The downloaded page contents are modified so
+that links point to the local copy as well. Any web server can host the
 resulting file set, which results in a static copy of the original web
 site.
 
@@ -77,6 +76,8 @@ from the original site as well.
 
   [Drupal]: https://drupal.org
   [Wget]: https://www.gnu.org/software/wget/
+  [`robots.txt`]: https://en.wikipedia.org/wiki/Robots_exclusion_standard
+  [common practice for archivists]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
 
 JavaScript doom
 ---------------
@@ -95,10 +96,10 @@ Traditional archival methods sometimes fail in the dumbest way. When
 trying to build an offsite backup of a local newspaper
 ([pamplemousse.ca][]), I found that WordPress adds query strings (e.g.
 `?ver=1.12.4`) at the end of JavaScript includes. This confuses
-content-type detection in web servers serving the archive, which rely on the file
-extension to send the right `Content-Type` header. When such an archive
-is loaded in a web browser, it fails to load scripts, which breaks
-dynamic websites.
+content-type detection in the web servers that serve the archive, which
+rely on the file extension to send the right `Content-Type` header. When
+such an archive is loaded in a web browser, it fails to load scripts,
+which breaks dynamic websites.
 
 As the web moves toward using the browser as a virtual machine to run
 arbitrary code, archival methods relying on pure HTML parsing need to
@@ -186,11 +187,11 @@ allow downloading more complex JavaScript sites and streaming
 multimedia, respectively. The software is the basis for an elaborate
 archival tool called [ArchiveBot][], which is used by the "*loose
 collective of rogue archivists, programmers, writers and loudmouths*"
-at [ArchiveTeam][] in its struggle to "*save the history before it's lost
-forever*". It seems that PhantomJS integration does not work as well as
-the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools
-to mirror more complex sites. For example, [snscrape][] will crawl a
-social media profile to generate a list of pages to send into
+at [ArchiveTeam][] in its struggle to "*save the history before it's
+lost forever*". It seems that PhantomJS integration does not work as
+well as the team wants, so ArchiveTeam also uses a rag-tag bunch of
+other tools to mirror more complex sites. For example, [snscrape][] will
+crawl a social media profile to generate a list of pages to send into
 ArchiveBot. Another tool the team employs is [crocoite][], which uses
 the Chrome browser in headless mode to archive JavaScript-heavy sites.
 
@@ -220,12 +221,11 @@ along with full HTML but, unfortunately, no WARC file that would allow
 an even more faithful replay.
 
 The sad truth of my experiences with mirrors and archival is that data
-dies. As much as we try, inertia is against us and entropy creeps into
-everything: it's a [fundamental law of nature][]. Fortunately, amateur
-archivists have tools at their disposal to keep interesting content
-alive online. For those who do not want to go through that trouble, the
-Internet Archive seems to be here to stay and Archive Team is obviously
-[working on a backup of the Internet Archive itself][].
+dies. Fortunately, amateur archivists have tools at their disposal to
+keep interesting content alive online. For those who do not want to go
+through that trouble, the Internet Archive seems to be here to stay and
+Archive Team is obviously [working on a backup of the Internet Archive
+itself][].
 
   [resources]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem
   [Wpull]: https://github.com/chfoo/wpull
@@ -244,7 +244,6 @@ Internet Archive seems to be here to stay and Archive Team is obviously
   [fails to parse the article]: https://github.com/wallabag/wallabag/issues/2914
   [bookmark-archiver]: https://pirate.github.io/bookmark-archiver/
   [reminiscence]: https://github.com/kanishka-linux/reminiscence
-  [fundamental law of nature]: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics
   [working on a backup of the Internet Archive itself]: http://iabak.archiveteam.org
 
 > *This article [first appeared][] in the [Linux Weekly News][].*

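A gentler variant of the Wget incantation above would add the throttling flags the text alludes to; `--wait`, `--random-wait`, and `--limit-rate` are standard Wget options, and the values here are only examples:

    # politeness sketch: pause ~1 second between requests and cap bandwidth
    $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
        --backup-converted --page-requisites --adjust-extension \
        --base=./ --directory-prefix=./ --span-hosts \
        --wait=1 --random-wait --limit-rate=100k \
        --domains=www.example.com,example.com http://www.example.com/
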
document film camera dates
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index be8b2ef8..ace0abee 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -71,7 +71,9 @@ Camera dates
 This is a more exhaustive list of which camera was used during which
 period.
 
- * 2004: random camera (Canon Powershot A70)
+ * 1988: first camera, Ricoh one take AF 39mm 3.9
+ * mid-1990s: first reflex camera, a Minolta SRT-200
+ * 2004: first digital camera (Canon Powershot A70)
  * 2005: random cameras (HP Photosmart C200, Canon PowerShot S1 IS,
    Canon PowerShot S30,  E4600 (?), 2005  FinePix2600Zoom,  FinePix
    A330,  PhotoSmart C200,  QSS-29_31)

yet another issue
diff --git a/services/archive.mdwn b/services/archive.mdwn
index 7cab6a71..9daab2c2 100644
--- a/services/archive.mdwn
+++ b/services/archive.mdwn
@@ -299,3 +299,4 @@ Submitted issues:
  * [fix broken link to specification](https://github.com/iipc/warc-specifications/pull/45)
  * [sample Apache configuration](https://github.com/webrecorder/pywb/pull/374) for pywb
  * [make job status less chatty](https://github.com/ArchiveTeam/ArchiveBot/pull/326) in ArchiveBot
+ * [Debian packaging](https://github.com/jjjake/internetarchive/issues/270) of the `ia` commandline tool

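For context, `ia` is the command-line client shipped with the [internetarchive](https://github.com/jjjake/internetarchive) Python package; until a Debian package exists, a rough way to try it (the item identifier below is a placeholder):

    $ pip install internetarchive      # what the Debian packaging issue is about
    $ ia search 'subject:archiving'    # query items on archive.org
    $ ia download some-item-identifier # fetch all files from one item
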
add the ricoh
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 9c521e0a..be8b2ef8 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -256,6 +256,8 @@ Second appareil:
     [1450$ lozeau](https://lozeau.com/produits/fr/photo/appareils-sans-miroir-hybrides/fujifilm/fujifilm/ensemble-fujifilm-x-e3-noir-avec-23mm-f-2-r-wr-p31164c74c77c101/?limit=100), similar size to the x100f but interchangeable
     lenses and cheaper. especially relevant with the 27mm pancake
  5. [X100f ø49](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100f/) 1200-1600$ on kijiji, [1650$ lozeau](https://lozeau.com/produits/fr/fujifilm/fujifilm-x100f-argent-p30174/), nostalgia!
+ 6. possibly the Ricoh GR II or GR III because it is smaller and
+    recommended by a friend.
 
 Écarté:
 
@@ -698,6 +700,12 @@ con:
  * may have autofocus issues
  * no cursor
 
+Ricoh
+-----
+
+They make nice smaller cameras. The Ricoh GR III has good comments from
+friends. Turns out a Ricoh is also the first camera I've ever used!
+
 Olympus
 -------
 

one last change that was missing
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index d34c71d6..f8d8c259 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -54,11 +54,13 @@ target site. The above commandline deliberately ignores
 [robots.txt](https://en.wikipedia.org/wiki/Robots_exclusion_standard) rules, as is now [common practice for archivists](https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/),
 and hammer the website as far as it can. Most crawlers have options to
 pause between hits and limit bandwidth usage to avoid overwhelming the
-target. The above will also fetch "page requisites" like style sheets
-(CSS), images, and scripts. The downloaded page contents are modified
-so that links point to the local copy as well. Any web server can host
-the resulting file set, which results in a static copy of the original
-web site.
+target.
+
+The above will also fetch "page requisites" like style sheets (CSS),
+images, and scripts. The downloaded page contents are modified so that
+links point to the local copy as well. Any web server can host the
+resulting file set, which results in a static copy of the original web
+site.
 
 That is, when things go well. Anyone who has ever worked with a computer
 knows that things seldom go according to plan; all sorts of things can

more tweaks for jon
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index e7984f85..d34c71d6 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -11,14 +11,13 @@
 
 I recently took a deep dive into web site archival for friends who
 were worried about losing control over the hosting of their work
-online. Poor system administration, politics, untested backups, and
-entropy are surprisingly effective at destroying precious data. This
-makes website archival an essential instrument in the toolbox of any
-respectable system administrator. As it turns out, some sites are much
-harder to archive than others. This article goes through the process
-of archiving traditional web sites and shows how it falls short when
-confronted with the latest fashions in the single-page applications
-that are bloating the modern web.
+online in the face of poor system administration or hostile
+removal. This makes website archival an essential instrument in the
+toolbox of any respectable system administrator. As it turns out, some
+sites are much harder to archive than others. This article goes
+through the process of archiving traditional web sites and shows how
+it falls short when confronted with the latest fashions in the
+single-page applications that are bloating the modern web.
 
 Converting simple sites
 -----------------------
@@ -26,8 +25,8 @@ Converting simple sites
 The days of handcrafted HTML web sites are long gone. Now web sites are
 dynamic and built on the fly using the latest JavaScript, PHP, or Python
 framework. As a result, the sites are more fragile: a database crash,
-spurious upgrade, or unpatched vulnerability can destroy those sites
-forever. In my previous life as web developer, I had to come to terms
+spurious upgrade, or unpatched vulnerability might lose data. In my
+previous life as a web developer, I had to come to terms
 with the idea that customers expect web sites to basically work forever.
 This expectation matches poorly with the "move fast and break things"
 attitude of web development. Working with the [Drupal][]
@@ -51,16 +50,15 @@ The incantation to mirror a full web site, however, is byzantine:
 The above downloads the content of the web page, but also crawls
 everything within the specified domains. Before you run this against
 your favorite site, consider the impact such a crawl might have on the
-target site. Some crawlers can be too aggressive and might constitute
-a denial of service attack on the target, which might respond
-accordinly. For this, the `--wait 1`, `--random-wait`, and
-`--limit-rate` parameters can be useful to limit the speed of the
-crawl and artificially wait between requests to avoid overloading the
-server (and possibly detection). The above will also fetch "page
-requisites" like style sheets (CSS), images, and scripts. The
-downloaded page contents are modified so that links point to the local
-copy as well. Any web server can host the resulting file set, which
-results in a static copy of the original web site.
+target site. The above commandline deliberately ignores
+[robots.txt](https://en.wikipedia.org/wiki/Robots_exclusion_standard) rules, as is now [common practice for archivists](https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/),
+and hammer the website as far as it can. Most crawlers have options to
+pause between hits and limit bandwidth usage to avoid overwhelming the
+target. The above will also fetch "page requisites" like style sheets
+(CSS), images, and scripts. The downloaded page contents are modified
+so that links point to the local copy as well. Any web server can host
+the resulting file set, which results in a static copy of the original
+web site.
 
 That is, when things go well. Anyone who has ever worked with a computer
 knows that things seldom go according to plan; all sorts of things can
@@ -95,7 +93,7 @@ Traditional archival methods sometimes fail in the dumbest way. When
 trying to build an offsite backup of a local newspaper
 ([pamplemousse.ca][]), I found that WordPress adds query strings (e.g.
 `?ver=1.12.4`) at the end of JavaScript includes. This confuses
-content-type detection in web servers, that rely on the file
+content-type detection in web servers serving the archive, which rely on the file
 extension to send the right `Content-Type` header. When such an archive
 is loaded in a web browser, it fails to load scripts, which breaks
 dynamic websites.

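Recording headers during the crawl is something Wget itself can do through its WARC support; a minimal sketch (`--warc-file` is a real option that takes a filename prefix, the rest mirrors the earlier incantation):

    # writes example.warc.gz alongside the mirrored files
    $ wget --mirror --page-requisites --warc-file=example \
        --domains=www.example.com,example.com http://www.example.com/
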
expand lead, warn about DOS, thanks jon
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index 3a0306ee..e7984f85 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -9,11 +9,16 @@
 
 [[!toc levels=2]]
 
-I recently took a deep dive into web site archival; as it turns out,
-some sites are much harder to archive than others. This article goes
-through the process of archiving traditional web sites and shows how it
-falls short when confronted with the latest fashions in the single-page
-applications that are bloating the modern web.
+I recently took a deep dive into web site archival for friends who
+were worried about losing control over the hosting of their work
+online. Poor system administration, politics, untested backups, and
+entropy are surprisingly effective at destroying precious data. This
+makes website archival an essential instrument in the toolbox of any
+respectable system administrator. As it turns out, some sites are much
+harder to archive than others. This article goes through the process
+of archiving traditional web sites and shows how it falls short when
+confronted with the latest fashions in the single-page applications
+that are bloating the modern web.
 
 Converting simple sites
 -----------------------
@@ -44,7 +49,14 @@ The incantation to mirror a full web site, however, is byzantine:
                    --domains=www.example.com,example.com http://www.example.com/
 
 The above downloads the content of the web page, but also crawls
-everything within the specified domains. It will also fetch "page
+everything within the specified domains. Before you run this against
+your favorite site, consider the impact such a crawl might have on the
+target site. Some crawlers can be too aggressive and might constitute
+a denial of service attack on the target, which might respond
+accordinly. For this, the `--wait 1`, `--random-wait`, and
+`--limit-rate` parameters can be useful to limit the speed of the
+crawl and artificially wait between requests to avoid overloading the
+server (and possibly detection). The above will also fetch "page
 requisites" like style sheets (CSS), images, and scripts. The
 downloaded page contents are modified so that links point to the local
 copy as well. Any web server can host the resulting file set, which

corrections from LWN
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index abe055d1..3a0306ee 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -83,7 +83,7 @@ Traditional archival methods sometimes fail in the dumbest way. When
 trying to build an offsite backup of a local newspaper
 ([pamplemousse.ca][]), I found that WordPress adds query strings (e.g.
 `?ver=1.12.4`) at the end of JavaScript includes. This confuses
-content-type detection in web servers, which relies on the file
+content-type detection in web servers, that rely on the file
 extension to send the right `Content-Type` header. When such an archive
 is loaded in a web browser, it fails to load scripts, which breaks
 dynamic websites.
@@ -123,7 +123,7 @@ directly, so a viewer or some conversion is necessary to access the
 archive. The simplest such viewer I have found is [pywb][], a Python
 package that runs a simple webserver to offer a Wayback-Machine-like
 interface to browse the contents of WARC files. The following set of
-commands will render a WARC file on http://localhost:8080/:
+commands will render a WARC file on `http://localhost:8080/`:
 
         $ pip install pywb
         $ wb-manager init example
@@ -169,14 +169,14 @@ Future work and alternatives
 There are plenty more [resources][] for using WARC files. In particular,
 there's a Wget drop-in replacement called [Wpull][] that is
 specifically designed for archiving web sites. It has experimental
-support for [PhantomJS][] and [youtube-dl][] integration which should
+support for [PhantomJS][] and [youtube-dl][] integration that should
 allow downloading more complex JavaScript sites and streaming
 multimedia, respectively. The software is the basis for an elaborate
 archival tool called [ArchiveBot][], which is used by the "*loose
 collective of rogue archivists, programmers, writers and loudmouths*"
-at [ArchiveTeam][] in its struggle "*save the history before it's lost
+at [ArchiveTeam][] in its struggle to "*save the history before it's lost
 forever*". It seems that PhantomJS integration does not work as well as
-the team wants, so ArchiveTeam also use a rag-tag bunch of other tools
+the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools
 to mirror more complex sites. For example, [snscrape][] will crawl a
 social media profile to generate a list of pages to send into
 ArchiveBot. Another tool the team employs is [crocoite][], which uses

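The pywb recipe quoted in the hunk above continues along these lines; this is a sketch of the standard pywb workflow rather than necessarily the article's exact commands, with `crawl.warc.gz` standing in for whatever WARC file is to be served:

    $ pip install pywb
    $ wb-manager init example               # create a collection named "example"
    $ wb-manager add example crawl.warc.gz  # import the WARC file into it
    $ wayback                               # serves http://localhost:8080/
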
auto-dict accepted
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index 31f83003..9b775f8d 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -31,7 +31,7 @@ Package | Emacs | Debian | Description
 ------- | ----- | ------ | -----------
 anzu | [0.62](https://stable.melpa.org/#/anzu) | [0.62-2](https://tracker.debian.org/elpa-anzu) | Show number of matches in mode-line while searching
 atomic-chrome | [2.0.0](https://stable.melpa.org/#/atomic-chrome) | [#909336](http://bugs.debian.org/909336) | Edit Chrome text area with Emacs using Atomic Chrome
-auto-dictionary | [1.1](https://stable.melpa.org/#/auto-dictionary) | [#909133](http://bugs.debian.org/909133) | automatic dictionary switcher for flyspell
+auto-dictionary | [1.1](https://stable.melpa.org/#/auto-dictionary) | [1.1-1](https://tracker.debian.org/elpa-auto-dictionary) | automatic dictionary switcher for flyspell
 company | [0.9.6](https://stable.melpa.org/#/company) | [0.9.6-1](https://tracker.debian.org/elpa-company) | Modular text completion framework
 company-go | [20170907](https://stable.melpa.org/#/company-go) | [20170907-3](https://tracker.debian.org/elpa-company-go) | company-mode backend for Go (using gocode)
 crux | [0.3.0](https://stable.melpa.org/#/crux) | [#909337](http://bugs.debian.org/909337) | A Collection of Ridiculously Useful eXtensions

link to startup script instead of hardcoding settings here
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 738ceed9..17baccd0 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -228,27 +228,11 @@ I have set the following configuration options:
    * `security.webauth.webauthn` - enable [WebAuthN](https://www.w3.org/TR/webauthn/) support, not
      sure what that's for but it sounds promising
 
-Privacy parameters:
-
- * `privacy.donottrackheader.enabled`: true (maybe futile)
- * `browser.safebrowsing.enabled`: true (this downloads a list of sites
-   in Mozille products, doesn't report indivudual sites to google...)
- * `privacy.trackingprotection.enabled` ([ref](https://wiki.mozilla.org/Security/Tracking_protection)): basic blocking of
-   online trackers, overlaps with uBlock and uMatrix extensions
- * `extensions.pocket.enabled` - disable the pocket extension that I
-   have no use for ([ref](https://support.mozilla.org/en-US/kb/remove-pocket-button-firefox))
- * `browser.aboutHomeSnippets.updateUrl`: false, disable home page
-   snippets prefetching
- * `browser.search.geoip.url`: false, geolocation for search engines
-   on startup (!)
- * `extensions.getAddons.cache.enabled`: false, checks for related
-   add-ons automatically
- * `browser.startup.homepage_override.mstone`: "", disabled the
-   "what's new" page
- * `browser.search.geoip.url`: "", Geolocation for search engines
- * `security.OCSP.enabled`: `false`
-
-See also this [list of possible parameters](https://wiki.debian.org/Firefox#Automatic_connections).
+I also set privacy parameters following this [user.js](https://gitlab.com/anarcat/scripts/blob/master/firefox-tmp#L7) config
+which, incidentally, is injected in temporary profiles started with
+this [firefox-tmp](https://gitlab.com/anarcat/scripts/blob/master/firefox-tmp) script I use to replace `chromium
+--temp-profile`. This is part of the effort to [sanitize default
+Firefox behavior in Debian](https://wiki.debian.org/Firefox#Automatic_connections).
 
 I also override certain site's stylesheets in my
 `~/.mozilla/firefox/*/chrome/userContent.css` CSS file. For example,

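For reference, the linked user.js-style configuration sets preferences with plain `user_pref()` calls; a minimal sketch using a few of the parameters that used to be listed here:

    // user.js sketch; the values mirror the old inline list above
    user_pref("privacy.donottrackheader.enabled", true);
    user_pref("privacy.trackingprotection.enabled", true);
    user_pref("extensions.pocket.enabled", false);
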
slight changes from jake before internal review
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index 27b810e5..abe055d1 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -5,15 +5,15 @@
 
 [[!meta date="2018-09-23T00:00:00+0000"]]
 
-[[!meta updated="2018-09-23T14:36:29-0400"]]
+[[!meta updated="2018-09-23T20:21:41-0400"]]
 
 [[!toc levels=2]]
 
-I recently took a deep dive into web site archival; as
-it turns out, some sites are much harder to archive than others. This
-article goes through the process of archiving traditional web sites and
-shows how it falls short when confronted with the latest fashions in the
-single-page applications that are bloating the modern web.
+I recently took a deep dive into web site archival; as it turns out,
+some sites are much harder to archive than others. This article goes
+through the process of archiving traditional web sites and shows how it
+falls short when confronted with the latest fashions in the single-page
+applications that are bloating the modern web.
 
 Converting simple sites
 -----------------------
@@ -31,9 +31,9 @@ regard as major upgrades deliberately break compatibility with
 third-party modules, which implies a costly upgrade process that clients
 could seldom afford. The solution was to archive those sites: take a
 living, dynamic web site and turn it into plain HTML files that any web
-server can serve forever. This process is useful for our own dynamic
-sites but also for third-party sites that are outside of our control
-and that we might want to safeguard.
+server can serve forever. This process is useful for your own dynamic
+sites but also for third-party sites that are outside of your control
+and that you might want to safeguard.
 
 For simple or static sites, the venerable [Wget][] program works well.
 The incantation to mirror a full web site, however, is byzantine:
@@ -113,9 +113,7 @@ ARChive") [specification][] that was released as an ISO standard in
 established to coordinate efforts to preserve internet content for the
 future*", according to Wikipedia; it includes members such as the US
 Library of Congress and the Internet Archive. The latter uses the WARC
-format internally in its Java-based [Heritrix crawler][] although a
-significant part of the "Wayback Machine" is actually crawled by
-[Alexa Internet][].
+format internally in its Java-based [Heritrix crawler][].
 
 A WARC file aggregates multiple resources like HTTP headers, file
 contents, and other metadata in a single compressed archive.
@@ -158,7 +156,6 @@ all, the resulting WARC files load perfectly in pywb.
   [specification]: https://iipc.github.io/warc-specifications/
   [International Internet Preservation Consortium]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium
   [Heritrix crawler]: https://github.com/internetarchive/heritrix3/wiki
-  [Alexa Internet]: https://en.wikipedia.org/wiki/Alexa_Internet
   [pywb]: https://github.com/webrecorder/pywb
   [Webrecorder]: https://webrecorder.io/
   [followed]: https://github.com/webrecorder/pywb/issues/294
@@ -188,8 +185,8 @@ the Chrome browser in headless mode to archive JavaScript-heavy sites.
 This article would also not be complete without a nod to the [HTTrack][]
 project, the "website copier". Working similarly to Wget, HTTrack
 creates local copies of remote web sites but unfortunately does not
-support WARC output. Its interactive aspects might be of more
-interest to novice users unfamiliar with the command line.
+support WARC output. Its interactive aspects might be of more interest
+to novice users unfamiliar with the command line.
 
 In the same vein, during my research I found a full rewrite of Wget
 called [Wget2][] that has support for multi-threaded operation, which
@@ -201,14 +198,14 @@ support.
 Finally, my personal dream for these kinds of tools would be to have
 them integrated with my existing bookmark system. I currently keep
 interesting links in [Wallabag][], a self-hosted "read it later"
-service designed as a free-software alternative to (now Mozilla's)
-[Pocket][]. But Wallabag, by design, creates only a "readable" version
-of the article instead of a full copy. In some cases, the "readable
-version" is actually [unreadable][] and Wallabag sometimes [fails to
-parse the article][]. Instead, other tools like [bookmark-archiver][] or
-[reminiscence][] save a screenshot of the page along with full HTML but,
-unfortunately, no WARC file that would allow an even more faithful
-replay.
+service designed as a free-software alternative to [Pocket][] (now owned
+by Mozilla). But Wallabag, by design, creates only a "readable"
+version of the article instead of a full copy. In some cases, the
+"readable version" is actually [unreadable][] and Wallabag sometimes
+[fails to parse the article][]. Instead, other tools like
+[bookmark-archiver][] or [reminiscence][] save a screenshot of the page
+along with full HTML but, unfortunately, no WARC file that would allow
+an even more faithful replay.
 
 The sad truth of my experiences with mirrors and archival is that data
 dies. As much as we try, inertia is against us and entropy creeps into

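A quick way to see that aggregation in practice is the command-line tool from the `warcio` Python package, which can list the records inside an archive; a sketch, with `crawl.warc.gz` as a placeholder:

    $ pip install warcio
    $ warcio index crawl.warc.gz   # prints one JSON line per WARC record
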
more privacy parameters
diff --git a/software/desktop/firefox.mdwn b/software/desktop/firefox.mdwn
index 267f7a31..738ceed9 100644
--- a/software/desktop/firefox.mdwn
+++ b/software/desktop/firefox.mdwn
@@ -211,9 +211,6 @@ I have set the following configuration options:
  * `browser.tabs.loadDivertedInBackground` ([ref](http://kb.mozillazine.org/About:config_entries)):
    true (fixes an issue where focus would change to the firefox window
    (and workspace!) when clicking links in other apps
- * `privacy.donottrackheader.enabled`: true (maybe futile)
- * `browser.safebrowsing.enabled`: true (this downloads a list of sites
-   in Mozille products, doesn't report indivudual sites to google...)
  * `browser.search.defaultenginename`: [searx.me](https://searx.me/)
    (default search engine)
  * `browser.startup.page` ([ref](http://kb.mozillazine.org/Browser.startup.page)):
@@ -225,15 +222,33 @@ I have set the following configuration options:
  * `middlemouse.contentLoadURL` ([ref](http://kb.mozillazine.org/Middlemouse.contentLoadURL)):
    false (got used to chromium not doing that, and it seems too risky:
    passwords can leak in DNS too easily if you miss the field)
- * `privacy.trackingprotection.enabled` ([ref](https://wiki.mozilla.org/Security/Tracking_protection)): basic blocking of
-   online trackers, overlaps with uBlock and uMatrix extensions
  * [U2F configuration](https://wiki.mozilla.org/Security/CryptoEngineering#Using_U2F_.2F_WebAuthn):
    * `security.webauth.u2f` - enable U2F token support, to use 2FA
      with the Yubikey and other 2FA tokens
    * `security.webauth.webauthn` - enable [WebAuthN](https://www.w3.org/TR/webauthn/) support, not
      sure what that's for but it sounds promising
+
+Privacy parameters:
+
+ * `privacy.donottrackheader.enabled`: true (maybe futile)
+ * `browser.safebrowsing.enabled`: true (this downloads a list of sites
+   in Mozille products, doesn't report indivudual sites to google...)
+ * `privacy.trackingprotection.enabled` ([ref](https://wiki.mozilla.org/Security/Tracking_protection)): basic blocking of
+   online trackers, overlaps with uBlock and uMatrix extensions
  * `extensions.pocket.enabled` - disable the pocket extension that I
    have no use for ([ref](https://support.mozilla.org/en-US/kb/remove-pocket-button-firefox))
+ * `browser.aboutHomeSnippets.updateUrl`: false, disable home page
+   snippets prefetching
+ * `browser.search.geoip.url`: false, geolocation for search engines
+   on startup (!)
+ * `extensions.getAddons.cache.enabled`: false, checks for related
+   add-ons automatically
+ * `browser.startup.homepage_override.mstone`: "", disabled the
+   "what's new" page
+ * `browser.search.geoip.url`: "", Geolocation for search engines
+ * `security.OCSP.enabled`: `false`
+
+See also this [list of possible parameters](https://wiki.debian.org/Firefox#Automatic_connections).
 
 I also override certain site's stylesheets in my
 `~/.mozilla/firefox/*/chrome/userContent.css` CSS file. For example,

monthly package list checkin
diff --git a/software/packages.yml b/software/packages.yml
index 9f10f215..73137e90 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -52,6 +52,7 @@
       - dict-vera
       - dict-wn
       - dictd
+      - dictionary-el
       - epubcheck
       - elpa-writegood-mode
       - gv
@@ -73,7 +74,9 @@
       - irssi-plugin-xmpp
       - irssi-scripts
       - mutt
+      - neomutt
       - offlineimap
+      - syncmaildir
  
   - name: install desktop packages
     # Shitload of stuff that doesn't fit anywhere else.
@@ -86,10 +89,7 @@
       - calibre
       - chromium
       - diceware
-      - dict
-      - dictionary-el
       - electrum
-      - elpa-writegood-mode
       - emacs
       - exiftool
       - feed2exec
@@ -148,11 +148,13 @@
       - python-certifi
       - qalculate
       - qalculate-gtk
+      - ranger
       - redshift-gtk
       - rofi
       - rxvt-unicode
       - scdaemon
       - slop
+      - surfraw
       - sxiv
       - taffybar
       - thunar
@@ -179,7 +181,8 @@
       - xscreensaver
       - xscreensaver-screensaver-bsod
       - xterm
-      - xul-ext-zotero
+      - webext-browserpass
+      - webext-ublock-origin
       - yubikey-personalization
       - zotero-standalone
 
@@ -264,6 +267,7 @@
       - multitime
       - myrepos
       - ncdu
+      - npm
       - org-mode
       - org-mode-doc
       - pastebinit
@@ -398,6 +402,7 @@
       - cu
       - curl
       - debian-goodies
+      - deborphan
       - debsums
       - dnsutils
       - dstat
@@ -431,6 +436,7 @@
       - reptyr
       - restic
       - rsync
+      - screen
       - sdparm
       - siege
       - sipcalc
@@ -442,6 +448,7 @@
       - tcpdump
       - tor
       - tuptime
+      - ttyrec
       - whois
       - wireguard-dkms
       - wireguard-tools

small tweaks i found while reviewing jake changes
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index 49c8f5c8..27b810e5 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -9,7 +9,7 @@
 
 [[!toc levels=2]]
 
-I recently took a deep dive into web site archival and fossilization; as
+I recently took a deep dive into web site archival; as
 it turns out, some sites are much harder to archive than others. This
 article goes through the process of archiving traditional web sites and
 shows how it falls short when confronted with the latest fashions in the
@@ -33,7 +33,7 @@ could seldom afford. The solution was to archive those sites: take a
 living, dynamic web site and turn it into plain HTML files that any web
 server can serve forever. This process is useful for our own dynamic
 sites but also for third-party sites that are outside of our control
-that we might want to safeguard.
+and that we might want to safeguard.
 
 For simple or static sites, the venerable [Wget][] program works well.
 The incantation to mirror a full web site, however, is byzantine:
@@ -188,7 +188,7 @@ the Chrome browser in headless mode to archive JavaScript-heavy sites.
 This article would also not be complete without a nod to the [HTTrack][]
 project, the "website copier". Working similarly to Wget, HTTrack
 creates local copies of remote web sites but unfortunately does not
-support WARC-file output. Its interactive aspects might be of more
+support WARC output. Its interactive aspects might be of more
 interest to novice users unfamiliar with the command line.
 
 In the same vein, during my research I found a full rewrite of Wget

LWN jake review
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index fbfe1adc..49c8f5c8 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -1,200 +1,246 @@
-Archival in the single-page web
-===============================
+[[!meta title="Archiving web sites"]]
 
-I recently took a deep dive into web site archival and fossilization:
-as it turns out, some sites are much harder to archive than
-others. This article goes through the process of archiving traditional
-web sites and how it falls short when confronted with the latest
-fashions in single-page applications bloating the modern web.
+\[LWN subscriber-only content\]
+-------------------------------
+
+[[!meta date="2018-09-23T00:00:00+0000"]]
+
+[[!meta updated="2018-09-23T14:36:29-0400"]]
+
+[[!toc levels=2]]
+
+I recently took a deep dive into web site archival and fossilization; as
+it turns out, some sites are much harder to archive than others. This
+article goes through the process of archiving traditional web sites and
+shows how it falls short when confronted with the latest fashions in the
+single-page applications that are bloating the modern web.
 
 Converting simple sites
 -----------------------
 
 The days of handcrafted HTML web sites are long gone. Now web sites are
-dynamic and built on the fly from using the latest JavaScript, PHP, or
-Python framework. As a result, the sites are more fragile: a database
-crash, spurious upgrade, or unpatched vulnerability can destroy those
-precious snowflakes forever. In my previous life as web developer, I
-had to come to terms with the idea that customers expect web sites to
-work basically forever. This expectation matches poorly with "move
-fast and break things" attitude of web development. Working the
-[Drupal](https://drupal.org) Content Management System (CMS) was particularly
-challenging in that regard as major upgrades deliberately break
-third-party compatibility which implies a costly upgrade process that
-clients could seldom afford. Our solution was to archive those sites:
-take a living, dynamic web site and turn it into plain HTML files that
-any web server can serve forever. This process is useful for dynamic
-sites but also for third-party sites outside our control we might want
-to safeguard.
-
-For simple or static sites, the venerable [wget][] program works
-well. The incantation to mirror a full web site, however, is byzantine:
-
-    nice wget --mirror --execute robots=off --no-verbose --convert-links --backup-converted --page-requisites --adjust-extension --base=./ --directory-prefix=./ --span-hosts --domains=www.example.com,example.com http://www.example.com/
+dynamic and built on the fly using the latest JavaScript, PHP, or Python
+framework. As a result, the sites are more fragile: a database crash,
+spurious upgrade, or unpatched vulnerability can destroy those sites
+forever. In my previous life as web developer, I had to come to terms
+with the idea that customers expect web sites to basically work forever.
+This expectation matches poorly with "move fast and break things"
+attitude of web development. Working with the [Drupal][]
+content-management system (CMS) was particularly challenging in that
+regard as major upgrades deliberately break compatibility with
+third-party modules, which implies a costly upgrade process that clients
+could seldom afford. The solution was to archive those sites: take a
+living, dynamic web site and turn it into plain HTML files that any web
+server can serve forever. This process is useful for our own dynamic
+sites but also for third-party sites that are outside of our control
+that we might want to safeguard.
+
+For simple or static sites, the venerable [Wget][] program works well.
+The incantation to mirror a full web site, however, is byzantine:
+
+        $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
+                           --backup-converted --page-requisites --adjust-extension \
+                           --base=./ --directory-prefix=./ --span-hosts \
+                   --domains=www.example.com,example.com http://www.example.com/
 
 The above downloads the content of the web page, but also crawls
 everything within the specified domains. It will also fetch "page
 requisites" like style sheets (CSS), images, and scripts. The
-downloaded page contents modified so that links point to the local
-copy as well. Any web server can host the resulting file set and we
-have a static copy of the original web site.
-
-That is, when things go well. Anyone who ever worked on a computer
-knows that things seldom go according to plan however: all sorts of
-things can make the procedure derail in interesting ways. For example,
-it was trendy for a while to have calendar blocks in web sites. A CMS
-would generate those on the fly and make crawlers go into an infinite
-loop on the site. Crafty archivers can resort to regular expressions
-(e.g. wget has `--reject-regex` option) to ignore problematic
-resources. Another option, if the administration interface for the
-web site is accessible, is to disable calendars, login forms, comment
-formas, and other dynamic areas. Once the site becomes static, those
-will stop working anyways so it makes sense to remove such clutter
+downloaded page contents are modified so that links point to the local
+copy as well. Any web server can host the resulting file set, which
+results in a static copy of the original web site.
+
+That is, when things go well. Anyone who has ever worked with a computer
+knows that things seldom go according to plan; all sorts of things can
+make the procedure derail in interesting ways. For example, it was
+trendy for a while to have calendar blocks in web sites. A CMS would
+generate those on the fly and make crawlers go into an infinite loop
+trying to retrieve all of the pages. Crafty archivers can resort to
+regular expressions (e.g. Wget has a `--reject-regex` option) to ignore
+problematic resources. Another option, if the administration interface
+for the web site is accessible, is to disable calendars, login forms,
+comment forms, and other dynamic areas. Once the site becomes static,
+those will stop working anyway, so it makes sense to remove such clutter
 from the original site as well.
 
+  [Drupal]: https://drupal.org
+  [Wget]: https://www.gnu.org/software/wget/
+
 JavaScript doom
 ---------------
 
-Unfortunately, some web sites are built with much more than pure
-HTML. In single-page sites for example, the web browser builds the
-content itself by executing a small JavaScript program. A simple user
-agent like wget will struggle to reconstruct a meaningful static copy
-those sites as it does not support JavaScript at all. In theory, web
-sites should be using [progressive enhancement](https://en.wikipedia.org/wiki/Progressive_enhancement) to have content and
+Unfortunately, some web sites are built with much more than pure HTML.
+In single-page sites, for example, the web browser builds the content
+itself by executing a small JavaScript program. A simple user agent like
+Wget will struggle to reconstruct a meaningful static copy of those
+sites as it does not support JavaScript at all. In theory, web sites
+should be using [progressive enhancement][] to have content and
 functionality available without JavaScript but those directives are
-rarely followed, as anyone using plugins like [noScript](https://noscript.net/) or
-[uMatrix](https://github.com/gorhill/uMatrix) will confirm.
+rarely followed, as anyone using plugins like [NoScript][] or
+[uMatrix][] will confirm.
 
-Traditional archival method sometimes fail in the dumbest way. When
+Traditional archival methods sometimes fail in the dumbest way. When
 trying to build an offsite backup of a local newspaper
-([pamplemousse.ca](https://pamplemousse.ca/)), I found that Wordpress adds query strings
-(`?ver=1.12.4`) at the end of JavaScript includes. This confuses
-content-type detection in webservers, which rely on the file extension
-to send the right `Content-Type`. When such an archive is loaded in a
-web browser, it fails to load scripts which breaks dynamic websites.
-
-As the web moves towards using the browser as a virtual machine to run
+([pamplemousse.ca][]), I found that WordPress adds query strings (e.g.
+`?ver=1.12.4`) at the end of JavaScript includes. This confuses
+content-type detection in web servers, which relies on the file
+extension to send the right `Content-Type` header. When such an archive
+is loaded in a web browser, it fails to load scripts, which breaks
+dynamic websites.
+
+As the web moves toward using the browser as a virtual machine to run
 arbitrary code, archival methods relying on pure HTML parsing need to
-adapt. The solution for such problems is to record (and replay) the
-HTTP headers delivered by the server during the crawl and indeed
-professionnal archivists use such an approach.
+adapt. The solution for such problems is to record (and replay) the HTTP
+headers delivered by the server during the crawl and indeed professional
+archivists use just such an approach.
+
+  [progressive enhancement]: https://en.wikipedia.org/wiki/Progressive_enhancement
+  [NoScript]: https://noscript.net/
+  [uMatrix]: https://github.com/gorhill/uMatrix
+  [pamplemousse.ca]: https://pamplemousse.ca/
 
 Creating and displaying WARC files
 ----------------------------------
 
-At the [Internet Archive](https://archive.org), Brewster Kahle and Mike Burner designed
-the [ARC](http://www.archive.org/web/researcher/ArcFileFormat.php) (for "ARChive") file format in 1996 to provide a way to
-aggregate the million of small files produced by their archival
+At the [Internet Archive][], Brewster Kahle and Mike Burner designed the
+[ARC][] (for "ARChive") file format in 1996 to provide a way to
+aggregate the millions of small files produced by their archival
 efforts. The format was eventually standardized as the WARC ("Web
-ARChive") [specification](https://iipc.github.io/warc-specifications/) released as an ISO standard in 2009 and
-revised in 2017, an effort led by the [International Internet
-Preservation Consortium](https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium) (IIPC). The IIPC is an "international
-organization of libraries and other organizations established to
-coordinate efforts to preserve internet content for the future"
-according to Wikipedia, which includes members such as the Library of
-Congress and the Internet Archive. The latter uses the WARC format
-internally in its Java-based [Heritrix crawler](https://github.com/internetarchive/heritrix3/wiki) although a
-significant part of the "wayback machine" are actually crawled by
-[Alexa Internet](https://en.wikipedia.org/wiki/Alexa_Internet).
+ARChive") [specification][] that was released as an ISO standard in
+2009 and revised in 2017. The standardization effort was led by the
+[International Internet Preservation Consortium][] (IIPC), which is an
+"*international organization of libraries and other organizations
+established to coordinate efforts to preserve internet content for the
+future*", according to Wikipedia; it includes members such as the US
+Library of Congress and the Internet Archive. The latter uses the WARC
+format internally in its Java-based [Heritrix crawler][] although a
+significant part of the "Wayback Machine" is actually crawled by
+[Alexa Internet][].
 
 A WARC file aggregates multiple resources like HTTP headers, file
-contents, and other metadata in a single compressed

(truncated diff file)
one more issue related to that article
diff --git a/services/archive.mdwn b/services/archive.mdwn
index d4d2a877..7cab6a71 100644
--- a/services/archive.mdwn
+++ b/services/archive.mdwn
@@ -298,3 +298,4 @@ Submitted issues:
 
  * [fix broken link to specification](https://github.com/iipc/warc-specifications/pull/45)
  * [sample Apache configuration](https://github.com/webrecorder/pywb/pull/374) for pywb
+ * [make job status less chatty](https://github.com/ArchiveTeam/ArchiveBot/pull/326) in ArchiveBot

take same suggestions from jake and rewrite mid-section
diff --git a/blog/archive.mdwn b/blog/archive.mdwn
index 6be6712a..fbfe1adc 100644
--- a/blog/archive.mdwn
+++ b/blog/archive.mdwn
@@ -1,34 +1,34 @@
 Archival in the single-page web
 ===============================
 
-I recently took a deep dive into website archival and fossilization:
+I recently took a deep dive into web site archival and fossilization:
 as it turns out, some sites are much harder to archive than
 others. This article goes through the process of archiving traditional
-websites and how it falls short when confronted with the latest
+web sites and how it falls short when confronted with the latest
 fashions in single-page applications bloating the modern web.
 
 Converting simple sites
 -----------------------
 
-The days of handcrafted HTML websites are long gone. Now websites are
+The days of handcrafted HTML web sites are long gone. Now web sites are
 dynamic and built on the fly from using the latest JavaScript, PHP, or
 Python framework. As a result, the sites are more fragile: a database
 crash, spurious upgrade, or unpatched vulnerability can destroy those
 precious snowflakes forever. In my previous life as web developer, I
-had to come to terms with the idea that customers expect websites to
+had to come to terms with the idea that customers expect web sites to
 work basically forever. This expectation matches poorly with "move
 fast and break things" attitude of web development. Working the
 [Drupal](https://drupal.org) Content Management System (CMS) was particularly
 challenging in that regard as major upgrades deliberately break
 third-party compatibility which implies a costly upgrade process that
 clients could seldom afford. Our solution was to archive those sites:
-take a living, dynamic website and turn it into plain HTML files that
+take a living, dynamic web site and turn it into plain HTML files that
 any web server can serve forever. This process is useful for dynamic
 sites but also for third-party sites outside our control we might want
 to safeguard.
 
 For simple or static sites, the venerable [wget][] program works
-well. The incantation to mirror a full website, however, is byzantine:
+well. The incantation to mirror a full web site, however, is byzantine:
 
     nice wget --mirror --execute robots=off --no-verbose --convert-links --backup-converted --page-requisites --adjust-extension --base=./ --directory-prefix=./ --span-hosts --domains=www.example.com,example.com http://www.example.com/
 
@@ -37,17 +37,17 @@ everything within the specified domains. It will also fetch "page
 requisites" like style sheets (CSS), images, and scripts. The
 downloaded page contents modified so that links point to the local
 copy as well. Any web server can host the resulting file set and we
-have a static copy of the original website.
+have a static copy of the original web site.
 
 That is, when things go well. Anyone who ever worked on a computer
 knows that things seldom go according to plan however: all sorts of
 things can make the procedure derail in interesting ways. For example,
-it was trendy for a while to have calendar blocks in websites. A CMS
+it was trendy for a while to have calendar blocks in web sites. A CMS
 would generate those on the fly and make crawlers go into an infinite
 loop on the site. Crafty archivers can resort to regular expressions
 (e.g. wget has `--reject-regex` option) to ignore problematic
 resources. Another option, if the administration interface for the
-website is accessible, is to disable calendars, login forms, comment
+web site is accessible, is to disable calendars, login forms, comment
 formas, and other dynamic areas. Once the site becomes static, those
 will stop working anyways so it makes sense to remove such clutter
 from the original site as well.
@@ -55,70 +55,45 @@ from the original site as well.
 JavaScript doom
 ---------------
 
-Unfortunately, some websites are not built only with HTML anymore. In
-single-page sites for example, the web browser itself builds the
+Unfortunately, some web sites are built with much more than pure
+HTML. In single-page sites for example, the web browser builds the
 content itself by executing a small JavaScript program. A simple user
-agent like wget might have difficulty constructing a meaningful static
-copy those sites as it does not support JavaScript. In theory, web
+agent like wget will struggle to reconstruct a meaningful static copy
+those sites as it does not support JavaScript at all. In theory, web
 sites should be using [progressive enhancement](https://en.wikipedia.org/wiki/Progressive_enhancement) to have content and
 functionality available without JavaScript but those directives are
-rarely followed. Anyone using plugins like [noScript](https://noscript.net/) or
-[uMatrix](https://github.com/gorhill/uMatrix) know well the pain of browsing the web with JavaScript
-disabled. As the web moves towards using the browser as a virtual
-machine to run arbitrary code, archival methods relying on pure HTML
-parsing need to adapt.
-
-The traditional method may fail in the dumbest way. When trying to
-build an offsite backup of a local newspaper ([pamplemousse.ca](https://pamplemousse.ca/)), I
-found that Wordpress actually adds query strings at the end of
-JavaScript includes. For example, the URL to load [jQuery](https://jquery.com/) might be:
-
-    http://example.com/wp-includes/js/jquery/jquery.js?ver=1.12.4
-
-That URL results in the following local path when parsed by wget:
-
-    ./example.com/wp-includes/js/jquery/jquery.js?ver=1.12.4
-
-This breaks content-type detection in webservers, which rely on the
-file extension to send the right `Content-Type`. In my tests, Apache
-sends no `Content-Type` at all, which confuses web browsers. For
-example, Chromium will complain with:
-
-    Refused to execute script from '<URL>' because its MIME type ('') is not executable, and strict MIME type checking is enabled
-
-The `--adjust-extension` parameter in wget should normally work around
-those issues but somehow it does not catch this peculiar case. It
-could be possible to fix this through post-processing, by renaming
-those files and doing pattern-replacement on the contents. Experienced
-system administrators might also invoke [Apache rewrite rules](https://httpd.apache.org/docs/current/mod/mod_rewrite.html)
-magic to work around the problem. But those solutions are error-prone
-and difficult to implement reliably. Ideally, it should be possible to
-do pattern replacement on URLs to remove irrelevant query parameters
-(like `?ver=1.12.4`) while keeping important ones (like, say,
-`?page=1`) but that is not supported by `wget` at the time of writing.
-
-The real solution for this specific problem is to record (and replay)
-the HTTP headers delivered by the server during the crawl. While
-`wget` supports writing those in the saved files with the
-`--save-headers` parameter, that behavior is non-standard so it
-pollutes the file contents which implies more post-processing or
-special webserver support. Archival professionnals use another
-approach.
+rarely followed, as anyone using plugins like [noScript](https://noscript.net/) or
+[uMatrix](https://github.com/gorhill/uMatrix) will confirm.
+
+Traditional archival method sometimes fail in the dumbest way. When
+trying to build an offsite backup of a local newspaper
+([pamplemousse.ca](https://pamplemousse.ca/)), I found that Wordpress adds query strings
+(`?ver=1.12.4`) at the end of JavaScript includes. This confuses
+content-type detection in webservers, which rely on the file extension
+to send the right `Content-Type`. When such an archive is loaded in a
+web browser, it fails to load scripts which breaks dynamic websites.
+
+As the web moves towards using the browser as a virtual machine to run
+arbitrary code, archival methods relying on pure HTML parsing need to
+adapt. The solution for such problems is to record (and replay) the
+HTTP headers delivered by the server during the crawl and indeed
+professionnal archivists use such an approach.
 
 Creating and displaying WARC files
 ----------------------------------
 
 At the [Internet Archive](https://archive.org), Brewster Kahle and Mike Burner designed
-the [ARC file format](http://www.archive.org/web/researcher/ArcFileFormat.php) in 1996 to provide a way to aggregate the
-million of small files produced by their archival efforts. The format
-was eventually standardized as the [WARC specification](https://iipc.github.io/warc-specifications/) released as
-an ISO standard in 2009 and revised in 2017, an effort led by the
-[International Internet Preservation Consortium](https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium) (IIPC). The IIPC
-is an "international organization of libraries and other organizations
-established to coordinate efforts to preserve internet content for the
-future" according to Wikipedia, which includes members such as the
-Library of Congress and the Internet Archive. The latter uses the WARC
-format internally in its Java-based [Heritrix crawler](https://github.com/internetarchive/heritrix3/wiki) although a
+the [ARC](http://www.archive.org/web/researcher/ArcFileFormat.php) (for "ARChive") file format in 1996 to provide a way to
+aggregate the million of small files produced by their archival
+efforts. The format was eventually standardized as the WARC ("Web
+ARChive") [specification](https://iipc.github.io/warc-specifications/) released as an ISO standard in 2009 and
+revised in 2017, an effort led by the [International Internet
+Preservation Consortium](https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium) (IIPC). The IIPC is an "international
+organization of libraries and other organizations established to
+coordinate efforts to preserve internet content for the future"
+according to Wikipedia, which includes members such as the Library of
+Congress and the Internet Archive. The latter uses the WARC format
+internally in its Java-based [Heritrix crawler](https://github.com/internetarchive/heritrix3/wiki) although a
 significant part of the "wayback machine" are actually crawled by
 [Alexa Internet](https://en.wikipedia.org/wiki/Alexa_Internet).
 
@@ -174,7 +149,7 @@ Future work and alternatives
 
 There are plenty more [resources][archive-team-warc] surrounding WARC files. In
 particular, there's a wget drop-in replacement called [wpull][]
-specifically designed for archiving websites, with experimental
+specifically designed for archiving web sites, with experimental
 support for [PhantomJS](http://phantomjs.org/) and [youtube-dl](https://youtube-dl.org/) integration which
 should allow downloading more complex JavaScript sites and streaming
 multimedia, respectively. The software is the basis for an elaborate
@@ -188,17 +163,16 @@ crawl a social media profile to generate a list of pages to send into
 ArchiveBot. Another tool ArchiveTeam uses is [crocoite](https://github.com/PromyLOPh/crocoite) which uses
 the Chrome browser in headless mode to archive JavaScript-heavy sites.
 
-{This article would also not be complete without a nod to the
-[httrack][] project, the "website copier". Working similarly to wget,
-httrack creates local copies of remote websites but unfortunately does
+This article would also not be complete without a nod to the
+[httrack][] project, the "web site copier". Working similarly to wget,
+httrack creates local copies of remote web sites but unfortunately does
 not support WARC files output. Its interactive aspects might be more
 interesting for novice users unfamiliar with the command line. In the
 same vein, during my research I found a full rewrite of wget called
 [wget2][] with support for multi-threaded operation, which might make
 it faster than its predecessor. It is [missing some features](https://gitlab.com/gnuwget/wget2/wikis/home) from
 wget however, most notably reject patterns, WARC, and FTP support but
-adds RSS, DNS caching and improved TLS support.}
-{remove^?}
+adds RSS, DNS caching and improved TLS support.
 
 Finally, my personal dream for those kind of tools would be to have
 them integrated with my existing bookmark system. I currently keep

explain which packages need hand modification
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index e7633df9..31f83003 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -63,3 +63,16 @@ packages into the script, I used this mighty pipeline:
     check-emacs-packages $( ( grep '^(use-package' ~/.emacs | sed 's/.* //' ; \
     grep -A2 packages ~/.emacs-custom  | tail -1 | sed 's/[()]//g;s/ /\n/g' ) \
     | sort -u )
+
+Some packages are edited by hand:
+
+ * `dictionary-el` does not follow the Emacs team's naming convention
+ * `ein` has a false positive in WNPP
+ * `gnus-alias` is not in stable MELPA
+ * `org-mode` is not on any archive?
+ * `rainbow-mode` is in GNU ELPA, not MELPA
+ * `vc` is part of Emacs even though I load it with use-package, so it
+   is not listed here
+
+This might be better kept in an override file instead of having to
+hand-craft this list...

document how to actually rewrite the whole emacs table
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index 382d10e4..e7633df9 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -60,6 +60,6 @@ with some manual modifications for packages on the main ELPA archive
 (as opposed to MELPA, which is surprisingly rare). To feed the list of
 packages into the script, I used this mighty pipeline:
 
-    ( grep '^(use-package' ~/.emacs | sed 's/.* //' ; \
+    check-emacs-packages $( ( grep '^(use-package' ~/.emacs | sed 's/.* //' ; \
     grep -A2 packages ~/.emacs-custom  | tail -1 | sed 's/[()]//g;s/ /\n/g' ) \
-    | sort -u
+    | sort -u )

switch to stable elpa to ease comparisons
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index 5e51907b..382d10e4 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -29,31 +29,31 @@ Here's the list of packages I currently use.
 
 Package | Emacs | Debian | Description
 ------- | ----- | ------ | -----------
-anzu | [20161017.1607](https://melpa.org/#/anzu) | [0.62-2](https://tracker.debian.org/elpa-anzu) | Show number of matches in mode-line while searching
-atomic-chrome | [20180617.724](https://melpa.org/#/atomic-chrome) | [#909336](https://bugs.debian.org/909336) | Edit Chrome text area with Emacs using Atomic Chrome
-auto-dictionary | [20150410.1610](https://melpa.org/#/auto-dictionary) | [#909133](http://bugs.debian.org/909133) | automatic dictionary switcher for flyspell
-company | [20180913.2311](https://melpa.org/#/company) | [0.9.6-1](https://tracker.debian.org/elpa-company) | Modular text completion framework
-company-go | [20170825.1643](https://melpa.org/#/company-go) | [20170907-3](https://tracker.debian.org/elpa-company-go) | company-mode backend for Go (using gocode)
-crux | [20180612.655](https://melpa.org/#/crux) | [#909337](https://bugs.debian.org/909337) | A Collection of Ridiculously Useful eXtensions
-dictionary | [20140718.329](https://melpa.org/#/dictionary) | [1.10-3](https://tracker.debian.org/pkg/dictionary-el) | Client for rfc2229 dictionary servers
-ein | [20180919.305](https://melpa.org/#/ein) | None | Emacs IPython Notebook
-elpy | [20180916.839](https://melpa.org/#/elpy) | [1.24.0-1](https://tracker.debian.org/elpa-elpy) | Emacs Python Development Environment
+anzu | [0.62](https://stable.melpa.org/#/anzu) | [0.62-2](https://tracker.debian.org/elpa-anzu) | Show number of matches in mode-line while searching
+atomic-chrome | [2.0.0](https://stable.melpa.org/#/atomic-chrome) | [#909336](http://bugs.debian.org/909336) | Edit Chrome text area with Emacs using Atomic Chrome
+auto-dictionary | [1.1](https://stable.melpa.org/#/auto-dictionary) | [#909133](http://bugs.debian.org/909133) | automatic dictionary switcher for flyspell
+company | [0.9.6](https://stable.melpa.org/#/company) | [0.9.6-1](https://tracker.debian.org/elpa-company) | Modular text completion framework
+company-go | [20170907](https://stable.melpa.org/#/company-go) | [20170907-3](https://tracker.debian.org/elpa-company-go) | company-mode backend for Go (using gocode)
+crux | [0.3.0](https://stable.melpa.org/#/crux) | [#909337](http://bugs.debian.org/909337) | A Collection of Ridiculously Useful eXtensions
+dictionary | [1.10](https://stable.melpa.org/#/dictionary) | [1.10-3](https://tracker.debian.org/pkg/dictionary-el) | Client for rfc2229 dictionary servers
+ein | [0.14.1](https://stable.melpa.org/#/ein) | [None](https://bugs.debian.org/) | Emacs IPython Notebook
+elpy | [1.24.0](https://stable.melpa.org/#/elpy) | [1.24.0-1](https://tracker.debian.org/elpa-elpy) | Emacs Python Development Environment
 gnus-alias | [20150316.42](https://melpa.org/#/gnus-alias) | [None](https://bugs.debian.org/) | an alternative to gnus-posting-styles
 go-mode | [1.5.0](https://stable.melpa.org/#/go-mode) | [3:1.5.0-2](https://tracker.debian.org/elpa-go-mode) | Major mode for the Go programming language
 ledger | [20180826.243](https://melpa.org/#/ledger-mode) | [3.1.2~pre1+g3a00e1c+dfsg1-5](https://tracker.debian.org/elpa-ledger) | command-line double-entry accounting program (emacs interface)
-magit | [20180921.1619](https://melpa.org/#/magit) | [2.13.0-3](https://tracker.debian.org/elpa-magit) | A Git porcelain inside Emacs.
-markdown-mode | [20180904.1601](https://melpa.org/#/markdown-mode) | [2.3+154-1](https://tracker.debian.org/elpa-markdown-mode) | Major mode for Markdown-formatted text
-markdown-toc | [20170711.1949](https://melpa.org/#/markdown-toc) | [#861128](http://bugs.debian.org/861128) | A simple TOC generator for markdown file
-multiple-cursors | [20180913.1237](https://melpa.org/#/multiple-cursors) | [#861127](http://bugs.debian.org/861127) | Multiple cursors for Emacs.
-notmuch | [20180829.927](https://melpa.org/#/notmuch) | [0.27-3](https://tracker.debian.org/elpa-notmuch) | run notmuch within emacs
+magit | [2.13.0](https://stable.melpa.org/#/magit) | [2.13.0-3](https://tracker.debian.org/elpa-magit) | A Git porcelain inside Emacs.
+markdown-mode | [2.3](https://stable.melpa.org/#/markdown-mode) | [2.3+154-1](https://tracker.debian.org/elpa-markdown-mode) | Major mode for Markdown-formatted text
+markdown-toc | [0.1.2](https://stable.melpa.org/#/markdown-toc) | [#861128](http://bugs.debian.org/861128) | A simple TOC generator for markdown file
+multiple-cursors | [1.4.0](https://stable.melpa.org/#/multiple-cursors) | [#861127](http://bugs.debian.org/861127) | Multiple cursors for Emacs.
+notmuch | [0.27](https://stable.melpa.org/#/notmuch) | [0.27-3](https://tracker.debian.org/elpa-notmuch) | run notmuch within emacs
 org | [None](https://github.com/melpa/melpa/blob/master/CONTRIBUTING.org) | [9.1.14+dfsg-3](https://tracker.debian.org/elpa-org) | Keep notes, maintain ToDo lists, and do project planning in emacs
 rainbow-mode | [1.0.1](https://elpa.gnu.org/packages/rainbow-mode.html) | [1.0.1-1](https://tracker.debian.org/elpa-rainbow-mode) | Colorize color names in buffers
-solarized-theme | [20180808.539](https://melpa.org/#/solarized-theme) | [1.2.2-3](https://tracker.debian.org/elpa-solarized-theme) | The Solarized color theme, ported to Emacs.
-use-package | [20180715.1801](https://melpa.org/#/use-package) | [2.3+repack-2](https://tracker.debian.org/elpa-use-package) | A configuration macro for simplifying your .emacs
-webpaste | [20180815.1855](https://melpa.org/#/webpaste) | [None](https://bugs.debian.org/) | Paste to pastebin-like services
-writegood-mode | [20180525.1343](https://melpa.org/#/writegood-mode) | [2.0.3-1](https://tracker.debian.org/elpa-writegood-mode) | Polish up poor writing on the fly
-writeroom-mode | [20170623.1027](https://melpa.org/#/writeroom-mode) | [#861124](http://bugs.debian.org/861124) | Minor mode for distraction-free writing
-yasnippet | [20180916.2115](https://melpa.org/#/yasnippet) | [0.13.0-2](https://tracker.debian.org/elpa-yasnippet) | Yet another snippet extension for Emacs.
+solarized-theme | [1.2.2](https://stable.melpa.org/#/solarized-theme) | [1.2.2-3](https://tracker.debian.org/elpa-solarized-theme) | The Solarized color theme, ported to Emacs.
+use-package | [2.3](https://stable.melpa.org/#/use-package) | [2.3+repack-2](https://tracker.debian.org/elpa-use-package) | A use-package declaration for simplifying your .emacs
+webpaste | [2.1.0](https://stable.melpa.org/#/webpaste) | [None](https://bugs.debian.org/) | Paste to pastebin-like services
+writegood-mode | [2.0.3](https://stable.melpa.org/#/writegood-mode) | [2.0.3-1](https://tracker.debian.org/elpa-writegood-mode) | Polish up poor writing on the fly
+writeroom-mode | [3.7](https://stable.melpa.org/#/writeroom-mode) | [#861124](http://bugs.debian.org/861124) | Minor mode for distraction-free writing
+yasnippet | [0.13.0](https://stable.melpa.org/#/yasnippet) | [0.13.0-2](https://tracker.debian.org/elpa-yasnippet) | Yet another snippet extension for Emacs.
 
 The above was automatically generated using [check-emacs-packages](https://gitlab.com/anarcat/scripts/blob/master/check-emacs-packages)
 with some manual modifications for packages on the main ELPA archive
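
Stepping outside the diff for a moment: the "ease comparisons" rationale is that date-based MELPA snapshots like 20180921.1619 cannot be meaningfully compared against Debian versions, while stable release strings can. A minimal sketch using Debian's own version comparator, with versions taken from the table above:

    # exits 0 when the packaged version is at least the upstream release
    dpkg --compare-versions "2.13.0-3" ge "2.13.0" && echo "Debian magit is current"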

resort, again, wth
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index c8869b2d..5e51907b 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -38,8 +38,8 @@ crux | [20180612.655](https://melpa.org/#/crux) | [#909337](https://bugs.debian.
 dictionary | [20140718.329](https://melpa.org/#/dictionary) | [1.10-3](https://tracker.debian.org/pkg/dictionary-el) | Client for rfc2229 dictionary servers
 ein | [20180919.305](https://melpa.org/#/ein) | None | Emacs IPython Notebook
 elpy | [20180916.839](https://melpa.org/#/elpy) | [1.24.0-1](https://tracker.debian.org/elpa-elpy) | Emacs Python Development Environment
-go-mode | [20180327.1530](https://melpa.org/#/go-mode) | [3:1.5.0-2](https://tracker.debian.org/elpa-go-mode) | Major mode for the Go programming language
 gnus-alias | [20150316.42](https://melpa.org/#/gnus-alias) | [None](https://bugs.debian.org/) | an alternative to gnus-posting-styles
+go-mode | [1.5.0](https://stable.melpa.org/#/go-mode) | [3:1.5.0-2](https://tracker.debian.org/elpa-go-mode) | Major mode for the Go programming language
 ledger | [20180826.243](https://melpa.org/#/ledger-mode) | [3.1.2~pre1+g3a00e1c+dfsg1-5](https://tracker.debian.org/elpa-ledger) | command-line double-entry accounting program (emacs interface)
 magit | [20180921.1619](https://melpa.org/#/magit) | [2.13.0-3](https://tracker.debian.org/elpa-magit) | A Git porcelain inside Emacs.
 markdown-mode | [20180904.1601](https://melpa.org/#/markdown-mode) | [2.3+154-1](https://tracker.debian.org/elpa-markdown-mode) | Major mode for Markdown-formatted text

explain where the list of packages actually comes from
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index 5b13855a..c8869b2d 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -57,4 +57,9 @@ yasnippet | [20180916.2115](https://melpa.org/#/yasnippet) | [0.13.0-2](https://
 
 The above was automatically generated using [check-emacs-packages](https://gitlab.com/anarcat/scripts/blob/master/check-emacs-packages)
 with some manual modifications for packages on the main ELPA archive
-(as opposed to MELPA, which is surprisingly rare).
+(as opposed to MELPA, which is surprisingly rare). To feed the list of
+packages into the script, I used this mighty pipeline:
+
+    ( grep '^(use-package' ~/.emacs | sed 's/.* //' ; \
+    grep -A2 packages ~/.emacs-custom  | tail -1 | sed 's/[()]//g;s/ /\n/g' ) \
+    | sort -u

sort ledger correctly (??)
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index 9a47d517..5b13855a 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -38,9 +38,9 @@ crux | [20180612.655](https://melpa.org/#/crux) | [#909337](https://bugs.debian.
 dictionary | [20140718.329](https://melpa.org/#/dictionary) | [1.10-3](https://tracker.debian.org/pkg/dictionary-el) | Client for rfc2229 dictionary servers
 ein | [20180919.305](https://melpa.org/#/ein) | None | Emacs IPython Notebook
 elpy | [20180916.839](https://melpa.org/#/elpy) | [1.24.0-1](https://tracker.debian.org/elpa-elpy) | Emacs Python Development Environment
-ledger | [20180826.243](https://melpa.org/#/ledger-mode) | [3.1.2~pre1+g3a00e1c+dfsg1-5](https://tracker.debian.org/elpa-ledger) | command-line double-entry accounting program (emacs interface)
 go-mode | [20180327.1530](https://melpa.org/#/go-mode) | [3:1.5.0-2](https://tracker.debian.org/elpa-go-mode) | Major mode for the Go programming language
 gnus-alias | [20150316.42](https://melpa.org/#/gnus-alias) | [None](https://bugs.debian.org/) | an alternative to gnus-posting-styles
+ledger | [20180826.243](https://melpa.org/#/ledger-mode) | [3.1.2~pre1+g3a00e1c+dfsg1-5](https://tracker.debian.org/elpa-ledger) | command-line double-entry accounting program (emacs interface)
 magit | [20180921.1619](https://melpa.org/#/magit) | [2.13.0-3](https://tracker.debian.org/elpa-magit) | A Git porcelain inside Emacs.
 markdown-mode | [20180904.1601](https://melpa.org/#/markdown-mode) | [2.3+154-1](https://tracker.debian.org/elpa-markdown-mode) | Major mode for Markdown-formatted text
 markdown-toc | [20170711.1949](https://melpa.org/#/markdown-toc) | [#861128](http://bugs.debian.org/861128) | A simple TOC generator for markdown file

more packages I missed in the first review
diff --git a/software/desktop/emacs.mdwn b/software/desktop/emacs.mdwn
index 45061526..9a47d517 100644
--- a/software/desktop/emacs.mdwn
+++ b/software/desktop/emacs.mdwn
@@ -32,24 +32,28 @@ Package | Emacs | Debian | Description
 anzu | [20161017.1607](https://melpa.org/#/anzu) | [0.62-2](https://tracker.debian.org/elpa-anzu) | Show number of matches in mode-line while searching
 atomic-chrome | [20180617.724](https://melpa.org/#/atomic-chrome) | [#909336](https://bugs.debian.org/909336) | Edit Chrome text area with Emacs using Atomic Chrome
 auto-dictionary | [20150410.1610](https://melpa.org/#/auto-dictionary) | [#909133](http://bugs.debian.org/909133) | automatic dictionary switcher for flyspell
+company | [20180913.2311](https://melpa.org/#/company) | [0.9.6-1](https://tracker.debian.org/elpa-company) | Modular text completion framework
 company-go | [20170825.1643](https://melpa.org/#/company-go) | [20170907-3](https://tracker.debian.org/elpa-company-go) | company-mode backend for Go (using gocode)
 crux | [20180612.655](https://melpa.org/#/crux) | [#909337](https://bugs.debian.org/909337) | A Collection of Ridiculously Useful eXtensions
 dictionary | [20140718.329](https://melpa.org/#/dictionary) | [1.10-3](https://tracker.debian.org/pkg/dictionary-el) | Client for rfc2229 dictionary servers
 ein | [20180919.305](https://melpa.org/#/ein) | None | Emacs IPython Notebook
 elpy | [20180916.839](https://melpa.org/#/elpy) | [1.24.0-1](https://tracker.debian.org/elpa-elpy) | Emacs Python Development Environment
 ledger | [20180826.243](https://melpa.org/#/ledger-mode) | [3.1.2~pre1+g3a00e1c+dfsg1-5](https://tracker.debian.org/elpa-ledger) | command-line double-entry accounting program (emacs interface)
+go-mode | [20180327.1530](https://melpa.org/#/go-mode) | [3:1.5.0-2](https://tracker.debian.org/elpa-go-mode) | Major mode for the Go programming language
 gnus-alias | [20150316.42](https://melpa.org/#/gnus-alias) | [None](https://bugs.debian.org/) | an alternative to gnus-posting-styles
+magit | [20180921.1619](https://melpa.org/#/magit) | [2.13.0-3](https://tracker.debian.org/elpa-magit) | A Git porcelain inside Emacs.
 markdown-mode | [20180904.1601](https://melpa.org/#/markdown-mode) | [2.3+154-1](https://tracker.debian.org/elpa-markdown-mode) | Major mode for Markdown-formatted text
 markdown-toc | [20170711.1949](https://melpa.org/#/markdown-toc) | [#861128](http://bugs.debian.org/861128) | A simple TOC generator for markdown file
 multiple-cursors | [20180913.1237](https://melpa.org/#/multiple-cursors) | [#861127](http://bugs.debian.org/861127) | Multiple cursors for Emacs.
 notmuch | [20180829.927](https://melpa.org/#/notmuch) | [0.27-3](https://tracker.debian.org/elpa-notmuch) | run notmuch within emacs
+org | [None](https://github.com/melpa/melpa/blob/master/CONTRIBUTING.org) | [9.1.14+dfsg-3](https://tracker.debian.org/elpa-org) | Keep notes, maintain ToDo lists, and do project planning in emacs
 rainbow-mode | [1.0.1](https://elpa.gnu.org/packages/rainbow-mode.html) | [1.0.1-1](https://tracker.debian.org/elpa-rainbow-mode) | Colorize color names in buffers
 solarized-theme | [20180808.539](https://melpa.org/#/solarized-theme) | [1.2.2-3](https://tracker.debian.org/elpa-solarized-theme) | The Solarized color theme, ported to Emacs.
 use-package | [20180715.1801](https://melpa.org/#/use-package) | [2.3+repack-2](https://tracker.debian.org/elpa-use-package) | A configuration macro for simplifying your .emacs
 webpaste | [20180815.1855](https://melpa.org/#/webpaste) | [None](https://bugs.debian.org/) | Paste to pastebin-like services
 writegood-mode | [20180525.1343](https://melpa.org/#/writegood-mode) | [2.0.3-1](https://tracker.debian.org/elpa-writegood-mode) | Polish up poor writing on the fly
 writeroom-mode | [20170623.1027](https://melpa.org/#/writeroom-mode) | [#861124](http://bugs.debian.org/861124) | Minor mode for distraction-free writing
-
+yasnippet | [20180916.2115](https://melpa.org/#/yasnippet) | [0.13.0-2](https://tracker.debian.org/elpa-yasnippet) | Yet another snippet extension for Emacs.
 
 The above was automatically generated using [check-emacs-packages](https://gitlab.com/anarcat/scripts/blob/master/check-emacs-packages)
 with some manual modifications for packages on the main ELPA archive

more elpa packages
diff --git a/software/packages.yml b/software/packages.yml
index 11fd18f0..9f10f215 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -87,7 +87,9 @@
       - chromium
       - diceware
       - dict
+      - dictionary-el
       - electrum
+      - elpa-writegood-mode
       - emacs
       - exiftool
       - feed2exec
@@ -209,9 +211,13 @@
       - elpa-anzu
       - elpa-company
       - elpa-company-go
+      - elpa-elpy
       - elpa-ledger
+      - elpa-magit
       - elpa-markdown-mode
       - elpa-py-autopep8
+      - elpa-rainbow-mode
+      - elpa-solarized-theme
       - elpa-use-package
       - elpa-yasnippet
       - exuberant-ctags

quick notes on UHK
diff --git a/hardware/keyboard.mdwn b/hardware/keyboard.mdwn
index 2e1e6d93..fac4c1d4 100644
--- a/hardware/keyboard.mdwn
+++ b/hardware/keyboard.mdwn
@@ -167,3 +167,11 @@ Ultimate hacking keyboard
 "Built to last", "split keyboard" and all sorts of buzzwords...
 
 <https://www.crowdsupply.com/ugl/ultimate-hacking-keyboard>
+
+<https://ultimatehackingkeyboard.com/>
+
+Downsides:
+
+ * no escape key (?!)
+ * no function keys without modifier
+ * expensive

Archival link:

The above link points to a machine-readable RSS feed that can be used to easily archive new changes to the site. Internal scripts use it to run sanity checks on new entries in the wiki.
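
As an illustration of how such a feed can be consumed (the URL is a placeholder; the actual internal scripts are not shown here):

    # list the titles of recent changes from the RSS feed
    curl -s https://example.org/recentchanges/index.rss |
        grep -o '<title>[^<]*</title>' |
        sed 's/<[^>]*>//g'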
