Recent changes to this wiki. Not to be confused with my history.

Complete source to the wiki is available on gitweb or by cloning this site.

another test tool
diff --git a/services/dns/dnssec.mdwn b/services/dns/dnssec.mdwn
index 71166928..2ced2bdc 100644
--- a/services/dns/dnssec.mdwn
+++ b/services/dns/dnssec.mdwn
@@ -113,6 +113,8 @@ To enable DNSSEC in bind, you must add, in the `options {}` block:
 
 Then follow section 3. But it might be preferable to use a tool like [[!debpkg opendnssec]], which signs zones automatically...
 
+To test your zones, you can use [DNS Viz](http://dnsviz.net/).
+
 Other documentation
 ===================
 

more mail notes
diff --git a/services/mail.mdwn b/services/mail.mdwn
index 9f2f3cbf..e7ca32d1 100644
--- a/services/mail.mdwn
+++ b/services/mail.mdwn
@@ -182,7 +182,41 @@ Then postfix must be reloaded:
     postfix reload
 
 Test messages can be sent to [dkimvalidator](http://dkimvalidator.com/) or
-[mail-tester.com](https://www.mail-tester.com/) which will both check DKIM signatures.
+[mail-tester.com](https://www.mail-tester.com/) which will both check
+DKIM signatures. A good spamassassin install will also check those
+signatures; what you are looking for is:
+
+ * `-0.1 DKIM_VALID`: Message has at least one valid DKIM or DK
+   signature
+ * `-0.1 DKIM_VALID_AU`: Message has a valid DKIM or DK signature from
+   author's domain
+ * `-0.1 DKIM_VALID_EF`: Message has a valid DKIM or DK signature from
+   envelope-from domain
+
+If one of those is missing, you are doing something wrong and
+your "spamminess" score will be worse. The last one is especially
+tricky, as it validates the "Envelope From", which is the `MAIL FROM:`
+command as sent by the originating MTA, and which you see as `from=<>`
+in the postfix logs.
+
+The following will happen anyway, as soon as you have a signature;
+that's normal:
+
+ * `0.1 DKIM_SIGNED`: Message has a DKIM or DK signature, not necessarily valid
+
+And this might happen if you have an ADSP record *but* do not correctly
+sign the message with a domain field that matches the record:
+
+ * `1.1 DKIM_ADSP_ALL`: No valid author signature, domain signs all mail
+
+That's bad and will affect your spam score badly. I fixed that issue by
+using a wildcard key in the key table:
+
+    --- a/opendkim/key.table
+    +++ b/opendkim/key.table
+    @@ -1 +1 @@
+    -marcos anarc.at:marcos:/etc/opendkim/keys/marcos.private
+    +marcos %:marcos:/etc/opendkim/keys/marcos.private
 
 References used:
 
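The three `DKIM_VALID*` rule names above are easy to check for mechanically. A minimal sketch (Python; the `X-Spam-Status` header values below are made up, and spamassassin's real header may wrap over several lines):

```python
import re

# The rules a correctly signed message should hit (from the notes above).
EXPECTED = {"DKIM_VALID", "DKIM_VALID_AU", "DKIM_VALID_EF"}

def spamassassin_hits(header_value):
    """Extract rule names from the tests= field of an X-Spam-Status header."""
    match = re.search(r"tests=([A-Z0-9_,\s]+)", header_value)
    if not match:
        return set()
    return {name.strip() for name in match.group(1).split(",") if name.strip()}

def missing_dkim_rules(header_value):
    """Return the expected DKIM rules that did not fire."""
    return EXPECTED - spamassassin_hits(header_value)
```

An empty result from `missing_dkim_rules()` means all three signatures validated; anything else points at a signing problem like the wildcard-key issue above.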

fix syntax error
diff --git a/services/mail.mdwn b/services/mail.mdwn
index 2fdddecd..9f2f3cbf 100644
--- a/services/mail.mdwn
+++ b/services/mail.mdwn
@@ -200,11 +200,11 @@ I am using this simple, non-restrictive [DMARC](https://en.wikipedia.org/wiki/DM
 
 Breaking it down, this means:
 
-    * `v=DMARC1`: version number
-    * `p=none`: policy, do nothing (could be `quarantine` or `reject`)
-    * `pct=100`: the percentage of emails the policy applies to
+ * `v=DMARC1`: version number
+ * `p=none`: policy, do nothing (could be `quarantine` or `reject`)
+ * `pct=100`: the percentage of emails the policy applies to
     (all emails)
-    * `rua=...`: where to send failure reports
+ * `rua=...`: where to send failure reports
 
 I am not clear yet on how that interacts with DKIM and SPF, but that
 seems like a safe way to start.

add DMARC
diff --git a/services/mail.mdwn b/services/mail.mdwn
index 96c3c708..2fdddecd 100644
--- a/services/mail.mdwn
+++ b/services/mail.mdwn
@@ -191,6 +191,24 @@ References used:
  * [linode tutorial](https://www.linode.com/docs/email/postfix/configure-spf-and-dkim-in-postfix-on-debian-8/), also recommends rotating keys every 6 months
  * [jak-linux](https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/): uses rspamd instead of opendkim, and [PostSRSd](https://github.com/roehling/postsrsd)
 
+DMARC
+-----
+
+I am using this simple, non-restrictive [DMARC](https://en.wikipedia.org/wiki/DMARC) policy:
+
+    _dmarc	IN	TXT	"v=DMARC1;p=none;pct=100;rua=mailto:postmaster@anarc.at"
+
+Breaking it down, this means:
+
+    * `v=DMARC1`: version number
+    * `p=none`: policy, do nothing (could be `quarantine` or `reject`)
+    * `pct=100`: the percentage of emails the policy applies to
+    (all emails)
+    * `rua=...`: where to send failure reports
+
+I am not clear yet on how that interacts with DKIM and SPF, but that
+seems like a safe way to start.
+
 Postfix
 =======
 
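A DMARC record is just a semicolon-separated list of tag=value pairs, so the breakdown above can be done mechanically. A minimal sketch (Python, using the exact record from this change):

```python
def parse_dmarc(record):
    """Split a DMARC TXT record ("tag=value;tag=value;...") into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        tag, _, value = part.partition("=")
        tags[tag.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1;p=none;pct=100;rua=mailto:postmaster@anarc.at")
# policy["p"] == "none": receivers are asked to take no action
```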

clarify what the ADSP policy is
diff --git a/services/mail.mdwn b/services/mail.mdwn
index 86b53747..96c3c708 100644
--- a/services/mail.mdwn
+++ b/services/mail.mdwn
@@ -127,9 +127,15 @@ like this:
               "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAubF5LI+R0lroVTItfbbs714+BW3waB34fUZ7wJT6Vrj3QVNg82bjIAL9u+WMOGt4okNi/QhjtoofqeUTVJycULiu1YXn6yQ8Dqrvm4s4uO8mwErOPV2mIplyRVwLEmS5zCw4UcVTdyDnPHqbzrRXAJfwJ3qFwwLwRraLDuuzqv+RX+mb5/s4VgjAXXAFHzDOdhcj7kGk8CzxXW"
               "ZWt8W7ilJUSoRoslBoA/jj5TMUU/xtbtSV0kZUVf8Y+0IKuxJHTlSrDqJ/PcMJZqMHtRY2ZQPtzaGPeFLJRN1J4U+krqerJiCc+n7P0KBS/yPb0H24mbWGP2WFxou3s3XdYeGSiQIDAQAB" )  ; ----- DKIM key marcos for anarc.at
 
+Then this will tell other servers that we always sign our mail, so
+they can treat unsigned mail from this domain as suspect:
+
     ; enforce DKIM on all mails from this domain, deprecated
     _adsp._domainkey        IN      TXT     "dkim=all"
 
+Note that the above "ADSP" policy is now [discouraged](https://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/) [in favor
+of](https://dmarc.org/wiki/FAQ#What_happens_if_a_sender_uses_DMARC_and_ADSP.3F) [DMARC](https://en.wikipedia.org/wiki/DMARC).
+
 Fix permissions on the key files:
 
     chmod o-rw /etc/opendkim/keys

new packages
diff --git a/software/packages.yml b/software/packages.yml
index c8b20fca..54e255c0 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -83,6 +83,7 @@
     tags: desktop
     apt: name={{item}} state=installed
     with_items:
+      - anki
       - apksigner
       - arandr
       - aspell-fr
@@ -419,6 +420,7 @@
       - bup
       - canid
       - ccze
+      - certbot
       - cu
       - curl
       - dateutils
@@ -461,6 +463,7 @@
       - powertop
       - pv
       - pwgen
+      - px
       - rcs
       - reptyr
       - rmlint

add SPF and DKIM configurations
diff --git a/services/mail.mdwn b/services/mail.mdwn
index d0e4fca6..86b53747 100644
--- a/services/mail.mdwn
+++ b/services/mail.mdwn
@@ -31,6 +31,160 @@ blocked because it sent spam in the past: in this case your only hope
 is to get the address unblocked or have your provider give you another
 IP address.
 
+SPF
+---
+
+SPF records are fairly simple: they specify which servers are allowed
+to send email for a given domain. They are an extension of the
+traditional "reverse DNS must match" kind of policy.
+
+In my case, it was simply a matter of saying that my main server is
+responsible for all outgoing mail:
+
+    @       TXT     "v=spf1 a mx -all"
+
+Breaking it down, this means:
+
+ * `v=spf1`: this is SPF version 1 ([RFC7208](https://tools.ietf.org/html/rfc7208))
+ * `a`: `A` records for the domain name (e.g. the `A` record of
+   `anarc.at`) are allowed
+ * `mx`: `MX` records for the domain name (e.g. the `MX` record of
+   `anarc.at`, currently [[hardware/server/marcos]]) are allowed
+
+Because the mail exchanger (which receives email) is the same as the
+outgoing email server, this is sufficient. If those services were
+split up (for example, with a separate machine sending email, like a
+mailing list server), this would need to be expanded, for example by
+including `a:lists.example.com`.
+
+Configurations can be tested with these tools:
+
+ * [Vamsoft policy tester](https://vamsoft.com/support/tools/spf-policy-tester)
+ * [DMARC analyzer](https://www.dmarcanalyzer.com/spf/checker/)
+ * [mxtoolbox.com](https://mxtoolbox.com/)
+ * [intodns.com](https://intodns.com/)
+ * [dt](https://github.com/42wim/dt) (golang)
+
+DKIM
+----
+
+[DKIM](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail) is a standard ([RFC6376](https://tools.ietf.org/html/rfc6376)) to sign email headers and
+prevent forgeries.
+
+First install the DKIM server and tools:
+
+    apt install opendkim opendkim-tools
+
+Create some configuration directories:
+
+    mkdir -p /etc/opendkim/keys
+
+In `/etc/opendkim.conf`, we add those lines:
+
+    # write socket inside postfix's chroot
+    Socket                  local:/var/spool/postfix/opendkim/opendkim.sock
+
+    # key/host mappings
+    SigningTable        refile:/etc/opendkim/signing.table
+    KeyTable            /etc/opendkim/key.table
+
+    # Hosts to generate signatures for
+    InternalHosts       /etc/opendkim/internal.hosts
+    # Hosts to ignore when verifying signatures, same as above
+    ExternalIgnoreList  /etc/opendkim/internal.hosts
+
+We would have used `Domain` if we had only one domain, but since we
+have many, we need those tables. The first one is `signing.table`:
+
+    *@anarc.at marcos
+    *@orangeseeds.org marcos
+
+The first part is a pattern, the second a key that is then looked up
+in `key.table`:
+
+    marcos anarc.at:marcos:/etc/opendkim/keys/marcos.private
+
+The first field is the `signing.table` key. The second field is a
+colon-separated list: the domain name, a selector (which can be
+anything; we picked the hostname), and the private key file.
+
+Then generate the private key and DNS record:
+
+    opendkim-genkey --directory=/etc/opendkim/keys/ --selector=marcos --domain=anarc.at --verbose
+
+The private key is the `.private` file specified above, and the DNS
+record is written in a `.txt` file. The latter should be included in
+the zone file:
+
+    $INCLUDE "/etc/opendkim/keys/marcos.txt"
+
+Unfortunately, that fails with an obscure `permission denied`, maybe
+because the file is outside of the normal bind directories, or because
+of apparmor. Instead, copy-paste the content, which will look
+something like this:
+
+    marcos._domainkey       IN      TXT     ( "v=DKIM1; h=sha256; k=rsa; "
+              "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAubF5LI+R0lroVTItfbbs714+BW3waB34fUZ7wJT6Vrj3QVNg82bjIAL9u+WMOGt4okNi/QhjtoofqeUTVJycULiu1YXn6yQ8Dqrvm4s4uO8mwErOPV2mIplyRVwLEmS5zCw4UcVTdyDnPHqbzrRXAJfwJ3qFwwLwRraLDuuzqv+RX+mb5/s4VgjAXXAFHzDOdhcj7kGk8CzxXW"
+              "ZWt8W7ilJUSoRoslBoA/jj5TMUU/xtbtSV0kZUVf8Y+0IKuxJHTlSrDqJ/PcMJZqMHtRY2ZQPtzaGPeFLJRN1J4U+krqerJiCc+n7P0KBS/yPb0H24mbWGP2WFxou3s3XdYeGSiQIDAQAB" )  ; ----- DKIM key marcos for anarc.at
+
+    ; enforce DKIM on all mails from this domain, deprecated
+    _adsp._domainkey        IN      TXT     "dkim=all"
+
+Fix permissions on the key files:
+
+    chmod o-rw /etc/opendkim/keys
+    chown opendkim:root /etc/opendkim/keys/*
+
+The DKIM server is then restarted and checked:
+
+    service opendkim restart
+    opendkim-testkey -d anarc.at -s marcos -vv
+
+The `keys not secure` message means you are not using DNSSEC.
+
+Then create the socket directory with proper permissions:
+
+    mkdir /var/spool/postfix/opendkim
+    chown opendkim /var/spool/postfix/opendkim
+
+Allow postfix to access the opendkim socket:
+
+    adduser postfix opendkim
+
+And restart opendkim again:
+
+    service opendkim restart
+
+Then hook it up in postfix:
+
+    postconf -e milter_protocol=6
+    postconf -e milter_default_action=accept
+    postconf -e smtpd_milters=local:opendkim/opendkim.sock
+    postconf -e non_smtpd_milters=local:opendkim/opendkim.sock
+
+If emails are double-signed, add `receive_override_options=no_milters`
+to loopback `smtpd` servers in `master.cf`, for example:
+
+    localhost:10026        inet    n       -       n       -       10      smtpd
+        -o smtpd_tls_security_level=none
+        -o content_filter=
+        -o myhostname=delivery.anarc.at
+        -o receive_override_options=no_milters
+
+Then postfix must be reloaded:
+
+    postfix reload
+
+Test messages can be sent to [dkimvalidator](http://dkimvalidator.com/) or
+[mail-tester.com](https://www.mail-tester.com/) which will both check DKIM signatures.
+
+References used:
+
+ * [Ubuntu documentation](https://help.ubuntu.com/community/Postfix/DKIM)
+ * [Debian wiki](https://wiki.debian.org/opendkim)
+ * [linode tutorial](https://www.linode.com/docs/email/postfix/configure-spf-and-dkim-in-postfix-on-debian-8/), also recommends rotating keys every 6 months
+ * [jak-linux](https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/): uses rspamd instead of opendkim, and [PostSRSd](https://github.com/roehling/postsrsd)
+
 Postfix
 =======
 
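The "breaking it down" reading of the SPF record can also be done mechanically: an SPF record is a list of mechanisms, each with an optional qualifier (`+` pass, `-` fail, `~` softfail, `?` neutral, per RFC7208). A minimal sketch (Python, using the record from this change):

```python
def parse_spf(record):
    """Split an SPF TXT record into (qualifier, mechanism) pairs."""
    terms = record.split()
    if not terms or terms[0] != "v=spf1":
        raise ValueError("not an SPF record")
    parsed = []
    for term in terms[1:]:
        qualifier = "+"  # the default qualifier is "pass"
        if term[0] in "+-~?":
            qualifier, term = term[0], term[1:]
        parsed.append((qualifier, term))
    return parsed

# "v=spf1 a mx -all" -> A and MX hosts pass, everything else fails
```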

mention the helios
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index 8db5730f..b031d0a0 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -168,6 +168,11 @@ The Gnubee's hardware also seems to be lacking by modern standards
 now: the CPU is slow and might have trouble maxing out all drives in
 the cluster, as 6 drives going full spin will generate a lot of I/O.
 
+## Helios
+
+On pre-order: <https://kobol.io/helios4/>. An interesting alternative
+to the GnuBee (more powerful, among other things).
+
 ## Other SoC boards
 
 There are many SoC boards that could be used to create a device from

fix again
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 41d1a702..ba692b31 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -44,8 +44,7 @@ This list details per-year disk usage of my Photo archive:
 | 2017  |     32    |  2953 | 20GB in december: FujiFilm X-T2 test shoot |
 | 2018  |    164    |  9035 | FujiFilm X-T2 |
 | 2019  |      0.9  |    48 | ongoing |
-| ----- | --------- | ----- | ------- |
-| Total |    292    | 31801 |         |
+| **Total** |  292  | 31801 |         |
 
 Years before 2004 are probably mislabeled. Archives from 1988 to 2004
 are still in film and haven't been imported. Yes, the first year using

fix table layout
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 9845a600..41d1a702 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -23,27 +23,29 @@ Disk usage
 
 This list details per-year disk usage of my Photo archive:
 
-| Year  |  Size (GB) | Count | Notes |
-| 1969  |   0.1  |     5 | |
-| 1970  |   0.8  |    26 | |
-| 1998  |   0.3  |    10 | |
-| 2004  |   0.4  |    48 | |
-| 2005  |   0.3  |   557 | |
-| 2006  |   0.9  |   932 | Canon PowerShot A430 |
-| 2007  |   2    |  1161 | |
-| 2008  |   0.9  |   656 | |
-| 2009  |   0.8  |   495 | Nokia N900 |
-| 2010  |   1    |  1077 | |
-| 2011  |   1    |  1241 | |
-| 2012  |  10    |  2908 | Canon PowerShot G12 |
-| 2013  |  33    |  4192 | |
-| 2014  |  27    |  4126 | |
-| 2015  |  11    |  2004 | |
-| 2016  |   0.4  |   131 | |
-| 2017  |  32    |  2953 | 20GB in december: FujiFilm X-T2 test shoot |
-| 2018  | 164    |  9035 | FujiFilm X-T2 |
-| 2019  |   0.9  |    48 | ongoing |
-| Total | 292    | 31801 | |
+| Year  | Size (GB) | Count | Notes |
+| ----- | --------- | ----- | ----- |
+| 1969  |      0.1  |     5 | |
+| 1970  |      0.8  |    26 | |
+| 1998  |      0.3  |    10 | |
+| 2004  |      0.4  |    48 | |
+| 2005  |      0.3  |   557 | |
+| 2006  |      0.9  |   932 | Canon PowerShot A430 |
+| 2007  |      2    |  1161 | |
+| 2008  |      0.9  |   656 | |
+| 2009  |      0.8  |   495 | Nokia N900 |
+| 2010  |      1    |  1077 | |
+| 2011  |      1    |  1241 | |
+| 2012  |     10    |  2908 | Canon PowerShot G12 |
+| 2013  |     33    |  4192 | |
+| 2014  |     27    |  4126 | |
+| 2015  |     11    |  2004 | |
+| 2016  |      0.4  |   131 | |
+| 2017  |     32    |  2953 | 20GB in december: FujiFilm X-T2 test shoot |
+| 2018  |    164    |  9035 | FujiFilm X-T2 |
+| 2019  |      0.9  |    48 | ongoing |
+| ----- | --------- | ----- | ------- |
+| Total |    292    | 31801 |         |
 
 Years before 2004 are probably mislabeled. Archives from 1988 to 2004
 are still in film and haven't been imported. Yes, the first year using

document more stuff
diff --git a/services/backup.mdwn b/services/backup.mdwn
index 4e6de53f..12d691c4 100644
--- a/services/backup.mdwn
+++ b/services/backup.mdwn
@@ -26,7 +26,7 @@ Backup storage
 
 ### External
 
- * `wd`: black external WD drive connected to `marcos`
+ * `wd`: 4TB black external WD drive connected to `marcos`
  * `calyx`: 1.5TB iOmega external backup drive, encrypted, `borg`
    backups for angela
  * `archive0`: 160GB Maxtor hard drive, clear, partial `git-annex`

fix broken link
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index 0c5e82f4..8db5730f 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -8,7 +8,7 @@ particulier [[services/mail]] et [[services/backup]].
 (copied from [[hardware/server/marcos/configuration]])
 
  * motherboard: [ASUS P5G41-M LE/CSM LGA 775 Intel G41 Micro ATX Intel
-   Motherboard](http://www.newegg.com/Product/Product.aspx?Item=N82E16813131399) 65$ newegg ([supported processors](http://commercial.asus.com/product/detail/18))
+   Motherboard](http://www.newegg.com/Product/Product.aspx?Item=N82E16813131399) 65$ newegg ([supported processors](https://www.asus.com/Motherboards/P5G41M/specifications/))
  * case: [Antec Black Aluminum / Steel Fusion Remote Black Micro ATX
    Media Center / HTPC Case](http://www.newegg.com/Product/Product.aspx?Item=N82E16811129054) 150$ newegg, includes "GD01 MX LCD
    Display/IR Receiver"

vero ordered
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index 8d5793f0..0c5e82f4 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -121,6 +121,13 @@ Possible issues:
  * the vero could not possibly take all the load, another server would
    be necessary, if only for storage
 
+Update: I've ordered a Vero 4k, we'll see how it goes. It will at
+least allow me to move the server out of the living room for now,
+which will simplify things. It will also postpone storage issues if I
+just buy an external hard drive... It seems I have eaten through over
+1TB of storage in 2018 alone, according to Grafana / Prometheus /
+Munin. Ironically, Prometheus accounts for about 1% of that (12GB)...
+
 ## GnuBee
 
 A possible solution is to shift storage to a SAN like the

more hardware research
diff --git a/hardware/server/marcos.mdwn b/hardware/server/marcos.mdwn
index 00cdb045..8d5793f0 100644
--- a/hardware/server/marcos.mdwn
+++ b/hardware/server/marcos.mdwn
@@ -144,10 +144,22 @@ Specs:
    * 2 x USB 2.0
    * 3-pin J1 serial jack or audio port
  * 11.64W board, power adapter: 12VDC @ 3A
- * all firmware and code FLOSS
+ * all firmware and code FLOSS in theory; in practice Neil Brown had
+   trouble running mainline on it (see this [LWN.net review](https://lwn.net/Articles/743609/) and
+   [this long github discussion](https://github.com/gnubee-git/GnuBee_Docs/issues/75)), so it's not as good as it
+   should be
  * 223$USD
 
-[LWN.net review](https://lwn.net/Articles/743609/)
+One problem with the GnuBee is that the manufacturers are not
+maintaining the drivers and source code for the hardware, which ships
+only with an outdated version of [libreCMC](https://librecmc.org/). This means that
+installing Debian (for example) is tricky. Brown is still working on
+mainline support and [reported in December 2018](https://github.com/gnubee-git/GnuBee_Docs/issues/75#issuecomment-445512986) that his [4.20
+tree](https://github.com/neilbrown/linux/commits/gnubee/v4.20) was working as "near-mainline".
+
+The Gnubee's hardware also seems to be lacking by modern standards
+now: the CPU is slow and might have trouble maxing out all drives in
+the cluster, as 6 drives going full spin will generate a lot of I/O.
 
 ## Other SoC boards
 
@@ -172,3 +184,30 @@ The [Fitlet 2](https://fit-iot.com/web/products/fitlet2/) runs Debian by default
 machine.
 
 See also the [board-db](https://www.board-db.org/) for a full list.
+
+## Other SAN options
+
+On top of the above SoC systems, a proper case or enclosure will be
+needed to handle more drives. Buying gigantic drives is nice, but a
+single drive is a single point of failure, so I should probably do
+some RAID or ZFS at some point.
+
+One option is to delegate this to an external enclosure, like those
+[Orico multi-bay enclosures](http://www.orico.cc/category.php?id=412&price_min=0&price_max=0&page=2&sort=goods_id&order=DESC#goods_list). They can connect over USB 3 and/or
+eSATA, which should give good performance. It's hard to tell how they
+work, or *if* they work in Linux at all, so compatibility and
+reliability could be a problem.
+
+[SATA](https://en.wikipedia.org/wiki/Serial_ATA#Revisions) and [USB throughput](https://en.wikipedia.org/wiki/USB#Version_history) could also be a bottleneck. For
+example, the Seagate Ironwolf 10TB can do up to 210MB/s, which means
+~1600Mbit/s, totally overloading USB 2 (480Mbps, so by a factor of
+4). USB 3.0 should be able to handle one drive, maybe two or even
+three (5Gbit/s) but at that point, CPU might become a problem as
+well. There are also compatibility problems between newer drives and
+older SATA controllers: marcos' old 3Gbps SATA 2 controller only
+recognized 4TB out of a Seagate IronWolf 8TB NAS drive.
+
+Of course, computer cases should be able to offer hot-swappable disk
+slots, but strangely those are somewhat rare in consumer-grade
+hardware. Most SATA controllers and disks support hot-swapping, but it
+needs to be double-checked.
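The bus-throughput argument above works out as follows. A minimal sketch (Python; the drive speed is the Ironwolf figure quoted above, and the bus speeds are nominal signalling rates, so real-world numbers will be lower):

```python
# One fast drive: 210 MB/s sustained ~= 1680 Mbit/s
DRIVE_MBPS = 210 * 8

# Nominal bus speeds, in Mbit/s
BUSES = {"usb2": 480, "usb3": 5000, "sata2": 3000, "sata3": 6000}

def drives_per_bus(bus):
    """How many drives at full throughput a bus can nominally carry."""
    return BUSES[bus] / DRIVE_MBPS
```

USB 2 cannot even feed one such drive (480 / 1680 ≈ 0.3), while USB 3.0 tops out just under three, matching the estimate above.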

more odds and ends
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index f8a82410..9845a600 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -204,7 +204,7 @@ Reference
 
 Obviously, I'm still shopping...
 
-Adapters:
+Odds and ends:
 
 2. better astronomical photos, maybe with a telescope
    adapter. [80$USD](https://www.telescopeadapters.com/best-sellers/522-2-ultrawide-true-2-prime-focus-adapter.html) for a 2" adapter, [examples more or
@@ -230,13 +230,25 @@ Lentilles:
    size, sealed, 350-400$ on kijiji, 500$ at Lozeau
  2. [16-55mm f/2.8 R LM WR ø77](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf16_55mmf28_r_lm_wr/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/16-55mm-f28.htm), [Phoblographer](https://www.thephoblographer.com/2015/03/12/review-fujifilm-16-55mm-f2-8-lm-wr-fujifilm-x-mount/), huge
     but real nice, 900-1400$
- 4. [56mm f/1.2 R ø62mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf56mmf12_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/56mm-f12.htm) ("extraordinary lens",
+ 3. [56mm f/1.2 R ø62mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf56mmf12_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/56mm-f12.htm) ("extraordinary lens",
     again), [Photography life](https://photographylife.com/reviews/fuji-xf-56mm-f-1-2-r) ("one of the best prime portrait
    lenses on the market") 900$ on kijiji, 1175$ at Lozeau, not so great
     for macro (70cm min)
+ 4. [90mm f/2 R WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf90mmf2_r_lm_wr/): [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/90mm-f2.htm), [jokas photography](https://jonasraskphotography.com/2015/05/25/the-fujifilm-xf-90mm-f2-review/)
+    ("amazing lens"), [fstoppers](https://fstoppers.com/originals/fstoppers-reviews-fujifilm-xf-90mm-f20-lens-133836) ("spectacular"), [1300$
+    Lozeau](https://lozeau.com/produits/fr/fujifilm/fujifilm-fujinon-xf-90mm-f-2-0-r-lm-wr-p24751/?search=90mm%20fuji&description=true), looks like a good portrait lens but no OIS
+ 5. [80mm f/2.8 R LM OIS WR Macro](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf80mmf28_r_lm_ois_wr_macro/), their only true macro
+    lens, [1550$ at Lozeau](https://lozeau.com/produits/fr/fujifilm/fujifilm-fujinon-xf-80mmf2-8-r-lm-ois-wr-macro-p31178/?search=80mm%20fuji&description=true)
+ 6. a wide-angle lens; a few options: [phoblographer](https://www.thephoblographer.com/2017/06/21/best-wide-angle-lenses-for-fujifilm-weve-got-you-covered/), [dpreview
+    forum](https://www.dpreview.com/forums/thread/4049063)
 
 Second camera:
 
+ 1. [X-T3](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x_t3/), [1780$ at Lozeau](https://lozeau.com/produits/fr/fujifilm/boitier-fujifilm-x-t3-argent-p32620/?search=x-t3&description=true), [1400$USD at B&H](https://www.bhphotovideo.com/c/product/1433840-REG/fujifilm_16589058_x_t3_mirrorless_digital_camera.html),
+    [dpreview](https://www.dpreview.com/reviews/fujifilm-x-t3) raves about it: much improved autofocus, new
+    sensor, a "night mode" for the LCD, 4k 60fps. maybe wait for the
+    X-T4, which might finally get image stabilization, like the
+    [X-H1](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x_h1/)?
  4. [X-E3](http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x_e3/), [dpreview](https://www.dpreview.com/reviews/fujifilm-x-e3-review/), 800CAD on kijiji with a 35mm f/1.4!,
     [1450$ lozeau](https://lozeau.com/produits/fr/photo/appareils-sans-miroir-hybrides/fujifilm/fujifilm/ensemble-fujifilm-x-e3-noir-avec-23mm-f-2-r-wr-p31164c74c77c101/?limit=100), similar size than the x100f but interchangeable
     lenses and cheaper. especially relevant with the 27mm pancake

the 35 f/1.4 isn't really worth it if I get the f/2
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index bbb0137f..f8a82410 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -230,8 +230,6 @@ Lentilles:
    size, sealed, 350-400$ on kijiji, 500$ at Lozeau
  2. [16-55mm f/2.8 R LM WR ø77](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf16_55mmf28_r_lm_wr/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/16-55mm-f28.htm), [Phoblographer](https://www.thephoblographer.com/2015/03/12/review-fujifilm-16-55mm-f2-8-lm-wr-fujifilm-x-mount/), huge
     but real nice, 900-1400$
- 3. [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("extraordinary lens"),
-    700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 400-460$ on kijiji
  4. [56mm f/1.2 R ø62mm](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf56mmf12_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/56mm-f12.htm) ("extraordinary lens",
     again), [Photography life](https://photographylife.com/reviews/fuji-xf-56mm-f-1-2-r) ("one of the best prime portrait
    lenses on the market") 900$ on kijiji, 1175$ at Lozeau, not so great
@@ -248,6 +246,10 @@ Second appareil:
 
 Rejected:
 
+
+ * [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("extraordinary lens"),
+   700$ new [B&H](https://www.bhphotovideo.com/c/product/839139-REG/Fujifilm_16240755_35mm_f_1_4_XF_R.html), 400-460$ on kijiji. I'd rather go with
+   the f/2, which is weather-sealed.
  * [50mm f/2 R WR ø46](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf50mmf2_r_wr/), not many reviews. 480$ kijiji, 600$
     Lozeau, cheaper slower version of the 56mm, [not good for
     macro](https://www.imaging-resource.com/lenses/fujinon/xf-50mm-f2-r-wr/review/) as small magnification and not much closeup (39cm min)

the teleconverter won't work
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 436b5d9e..bbb0137f 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -206,9 +206,6 @@ Reference
 
 Adapters:
 
- 1. a *real* teleconverter, the [Fujinon Teleconverter XF2X TC WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) -
-    probably more reliable, but a steep [450$ USD
-    at B&H](https://www.bhphotovideo.com/c/product/1254242-REG/fujifilm_16516271_xf_2x_tc_wr.html) and nothing at Lozeau (just the 1.4x)
 2. better astronomical photos, maybe with a telescope
    adapter. [80$USD](https://www.telescopeadapters.com/best-sellers/522-2-ultrawide-true-2-prime-focus-adapter.html) for a 2" adapter, [examples more or
    less conclusive](https://www.lost-infinity.com/fujifilm-x-t1-2-telescope-adapter/). some take good shots [without
@@ -254,9 +251,26 @@ Second appareil:
  * [50mm f/2 R WR ø46](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf50mmf2_r_wr/), not many reviews. 480$ kijiji, 600$
     Lozeau, cheaper slower version of the 56mm, [not good for
     macro](https://www.imaging-resource.com/lenses/fujinon/xf-50mm-f2-r-wr/review/) as small magnification and not much closeup (39cm min)
-  * blower are apparently the best solution to clear sensors,
-    e.g. [blower on B&H](https://www.bhphotovideo.com/c/buy/Blowers-Compressed-Air/ci/18806/N/4077634545?origSearch=blower), 5-15$. a [red one](https://www.bhphotovideo.com/c/product/838821-REG/sensei_bl_014_bulb_air_blower_cleaning_system.html) is easier to find
-    in a bag (8$USD). i already have a blower, so not necessary.
+ * blowers are apparently the best solution to clean sensors,
+   e.g. [blower on B&H](https://www.bhphotovideo.com/c/buy/Blowers-Compressed-Air/ci/18806/N/4077634545?origSearch=blower), 5-15$. a [red one](https://www.bhphotovideo.com/c/product/838821-REG/sensei_bl_014_bulb_air_blower_cleaning_system.html) is easier to find
+   in a bag (8$USD). i already have a blower, so not necessary.
+
+ * a lens for capturing birds. I first looked at a "real
+   teleconverter", for example the [Fujinon Teleconverter XF2X TC
+   WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) which would be more reliable than the junk I bought
+   (see "cheap teleconverter" below). a steep [450$ USD at B&H](https://www.bhphotovideo.com/c/product/1254242-REG/fujifilm_16516271_xf_2x_tc_wr.html) and
+   nothing at Lozeau (just the 1.4x). But those teleconverters only
+   work with the big lenses like the [XF100-400mmF4.5-5.6 R
+   LM OIS WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf100_400mmf45_56_r_lm_ois_wr/) ([2500$CAD at Lozeau](https://lozeau.com/produits/fr/fujifilm/fujifilm-fujinon-xf-100-400mm-f-4-5-5-6-r-lm-ois-wr-p24755/?search=100-400&description=1), [1900$USD at B&H](https://www.bhphotovideo.com/c/product/1210897-REG/fujifilm_16501109_xf_100_400mm_f_4_5_5_6_r.html)!)
+   or the [XF200mmF2 R LM OIS WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf200mmf2_r_lm_ois_wr/) ([7800$CAD at Lozeau](https://lozeau.com/produits/fr/fujifilm/fujifilm-fujinon-xf-200mm-f-2-r-lm-ois-wr-avec-teleconvertisseur-1-4x-f2-tc-wr-p32442/?search=200%20f2&description=true),
+   [6000$USD at B&H](https://www.bhphotovideo.com/c/product/1424734-REG/fujifilm_16586343_xf_200mm_f_2_ois.html)!!!) and not with my 55-200mm. I'm
+   considering instead a cheaper, "longer" lens like the
+   [AF-S NIKKOR 200-500mm f/5.6E ED VR](https://www.nikonusa.com/en/nikon-products/product/camera-lenses/af-s-nikkor-200-500mm-f%252f5.6e-ed-vr.html) ([1800$CAD at Lozeau](https://lozeau.com/produits/fr/nikon/nikon-af-s-nikkor-200-500mm-f-5-6e-ed-vr-p24429/?search=200-500&description=true),
+   [1400$USD at B&H](https://www.bhphotovideo.com/c/product/1175034-REG/nikon_af_s_nikkor_200_500mm_f_5_6e.html)) with an [adapter](https://www.bhphotovideo.com/c/search?ci=3420&fct=fct_camera-body-mount_1595%7cfujifilm-x-mount%2bfct_lens-mount_1596%7cnikon-f&N=4077634486&), possibly with
+   a "speed booster" ([150$ at B&H](https://www.bhphotovideo.com/c/product/1226782-REG/mitakon_zhongyi_mtkltm2aig2x_lens_turbo_adapter_v2.html)). But possibly no aperture or
+   autofocus control, unfortunately. That would give a 300-750mm f/4
+   lens, which is still interesting for the price.
 
 PS: It looks like Rockwell considers almost all Fujifilm lenses to be
 "extraordinary" in some way, so be warned of the potential

mettre a jour l'utilisation de disque et photos par année
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index 774553b1..436b5d9e 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -23,28 +23,33 @@ Disk usage
 
 This list details per-year disk usage of my Photo archive:
 
- * 1969: 0.1 GB
- * 1970: 0.8 GB
- * 1998: 0.3 GB
- * 2004: 0.4 GB
- * 2005: 0.3 GB
- * 2006: 0.9 GB (Canon PowerShot A430)
- * 2007: 2 GB
- * 2008: 0.9 GB
- * 2009: 0.8 GB (Nokia N900)
- * 2010: 1 GB
- * 2011: 1 GB
- * 2012: 10 GB (Canon PowerShot G12)
- * 2013: 33 GB
- * 2014: 27 GB
- * 2015: 11 GB
- * 2016: 0.4 GB
- * 2017: 32 GB (20 GB in december: FujiFilm X-T2 test shoot!)
- * 2018: 137 GB and counting (FujiFilm X-T2)
- * Total: 263 GB
+| Year  |  Size (GB) | Count | Notes |
+| 1969  |   0.1  |     5 | |
+| 1970  |   0.8  |    26 | |
+| 1998  |   0.3  |    10 | |
+| 2004  |   0.4  |    48 | |
+| 2005  |   0.3  |   557 | |
+| 2006  |   0.9  |   932 | Canon PowerShot A430 |
+| 2007  |   2    |  1161 | |
+| 2008  |   0.9  |   656 | |
+| 2009  |   0.8  |   495 | Nokia N900 |
+| 2010  |   1    |  1077 | |
+| 2011  |   1    |  1241 | |
+| 2012  |  10    |  2908 | Canon PowerShot G12 |
+| 2013  |  33    |  4192 | |
+| 2014  |  27    |  4126 | |
+| 2015  |  11    |  2004 | |
+| 2016  |   0.4  |   131 | |
+| 2017  |  32    |  2953 | 20GB in december: FujiFilm X-T2 test shoot |
+| 2018  | 164    |  9035 | FujiFilm X-T2 |
+| 2019  |   0.9  |    48 | ongoing |
+| Total | 292    | 31801 | |
 
 Years before 2004 are probably mislabeled. Archives from 1988 to 2004
-are still in film and haven't been imported.
+are still on film and haven't been imported. Yes, the first year using
+the X-T2 takes two thirds of all the disk space used by my pictures
+(184 GB vs 292 GB), even though it is barely a third of the shots
+(10034 of 31801, at the time of writing).
 
 The above was created with:
 

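The diff above rebuilds the per-year disk usage list as a table; the command that actually generated those numbers is truncated out of this excerpt. As a rough, hypothetical sketch only (not the author's command), per-year sizes and counts for an archive assumed to be laid out as `<root>/<year>/...` could be gathered like this:

```python
import os
from collections import defaultdict

def per_year_usage(root):
    """Sum file sizes (bytes) and file counts per top-level directory.

    Assumes the archive is laid out as <root>/<year>/..., which is a
    guess at the layout, not something stated in the page itself.
    """
    sizes = defaultdict(int)
    counts = defaultdict(int)
    for year in sorted(os.listdir(root)):
        top = os.path.join(root, year)
        if not os.path.isdir(top):
            continue
        # walk each year directory recursively, accumulating totals
        for dirpath, _dirnames, filenames in os.walk(top):
            for name in filenames:
                sizes[year] += os.path.getsize(os.path.join(dirpath, name))
                counts[year] += 1
    return sizes, counts

def as_markdown_table(sizes, counts):
    """Render the totals in the same table style as the wiki page."""
    lines = ["| Year  | Size (GB) | Count |"]
    for year in sorted(sizes):
        lines.append("| %s | %9.1f | %5d |" % (year, sizes[year] / 1e9,
                                               counts[year]))
    lines.append("| Total | %9.1f | %5d |" % (sum(sizes.values()) / 1e9,
                                              sum(counts.values())))
    return "\n".join(lines)
```

Running `print(as_markdown_table(*per_year_usage("/path/to/Photos")))` would print a table in the same shape as the one in the diff.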
idées pour le prochain calendrier
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 8cc9638b..40ad32d8 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -845,6 +845,19 @@ d'impression, plutôt qu'une date exacte car la date précise est
 généralement inconnue lors du montage. De plus, il est possible qu'une
 impression prenne plusieurs jours pour les gros volumes.
 
+## Améliorations futures
+
+Les calendriers ont souvent un "mini-calendrier" qui montre les mois
+suivants ou précédents. Ou encore montrer les dates des première et
+dernière semaines même si elles sont d'un mois différent, quitte à les
+atténuer (mettre en gris). Voir le [bogue #17](https://github.com/profound-labs/wallcalendar/issues/17).
+
+Le premier jour de la semaine pourrait être un dimanche ([bogue
+#12](https://github.com/profound-labs/wallcalendar/issues/12)).
+
+Le colophon pourrait être disposé en colonnes pour mieux utiliser
+l'espace, peut-être avec le package [multicol](https://www.ctan.org/pkg/multicol).
+
 # Projets similaires #
 
 Ce projet a été inspiré par d'autres projets [DIY](https://fr.wikipedia.org/wiki/Do_it_yourself), en particulier

shoulder strap
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index f3ddedc2..774553b1 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -208,6 +208,8 @@ Adaptateurs:
     à téléscope. [80$USD](https://www.telescopeadapters.com/best-sellers/522-2-ultrawide-true-2-prime-focus-adapter.html) pour un adapteur 2", [exemples plus ou
     moins concluants](https://www.lost-infinity.com/fujifilm-x-t1-2-telescope-adapter/). certains prennent de bonnes poses [sans
     aucun adapteur](https://www.dpreview.com/forums/thread/3656867)
+ 3. shoulder strap that keeps the camera on the side instead of in
+    front, e.g. [BlackRapid Sport Breathe](https://www.bhphotovideo.com/c/product/1278394-REG/blackrapid_361005_sport_breathe_single_strap.html) (75$USD @ B&H)
  5. un holder a lentilles "lens flipper" [75$USD @ B&H](https://www.bhphotovideo.com/c/product/1203066-REG/gowing_8809416750118_lens_flipper_for_mount.html)
  6. un tube macro [MCEX-11 ou MCEX-16](http://www.fujifilm.com/products/digital_cameras/accessories/lens/#mountadapter), [this table](http://www.fujifilm.com/products/digital_cameras/accessories/pdf/mcex_01.pdf) shows the
     magnification/distance for various lens. MCEX-16, for example,

erreur dans la date
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index d53c1cec..8cc9638b 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -836,6 +836,15 @@ possible de corriger dans Darktable, car le module de réduction de
 bruit amenait un effet "gouache" distrayant. Ces effets ont été
 remarqué trop tard dans la chaîne de production pour être corrigés.
 
+## Seconde impression ~20 décembre 2018
+
+Le calendrier indique "Achevé d'imprimer à Montréal, 18 décembre 2018"
+mais il devrait plutôt indiquer le 20 ou 21 décembre. En général, il
+est plutôt coutume d'indiquer seulement le mois et l'année
+d'impression, plutôt qu'une date exacte car la date précise est
+généralement inconnue lors du montage. De plus, il est possible qu'une
+impression prenne plusieurs jours pour les gros volumes.
+
 # Projets similaires #
 
 Ce projet a été inspiré par d'autres projets [DIY](https://fr.wikipedia.org/wiki/Do_it_yourself), en particulier

nouvelle impression complétée: compiler les coûts et finaliser todo
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 9b31967d..d53c1cec 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -701,7 +701,7 @@ Estimé des coûts préliminaire:
  * Coupe et montage: 7.80$ total
  * Total: ~95-155$
 
-Coûts à date:
+Coûts première impression:
 
  * Test BEG: 5.92$
  * Test Papeterie du plateau: 2.30$
@@ -711,17 +711,15 @@ Coûts à date:
  * Impression: 39$ (15 calendriers * 13 feuilles/calendrier *
    20¢/feuille)
  * Temps: 0$ (3h+ à 0$/hre)
- * Sous-total: 69.00$
+ * Reliure repro-UQAM: 46.07$ (7.27$ coupe, 32.80$ 15 reliures)
+ * Shipping: 24.16$ ("paquet avion" 12.16$ + 12.00$, Argentine et
+   Australie)
+ * Sous-total: 139.23$ (14 calendriers complets, 9.95$/calendrier)
 
-Nouvel estimé, pour 15 calendriers:
+Coûts deuxième impression avec Mardigrafe: 200.06$ (tx inc, 12
+calendriers, 16.67$/calendrier)
 
- * Reliure spirale: 22.50$ (1.50$/calendrier)
- * Coupe et assemblage: 7.80$
- * Sous-total: 30.30$
-
-Grand total prévu: 99.30$ (7.09$/calendrier, pour un tirage de
-<del>15</del>14 calendriers -- coûts moindres par calendrier pour un
-plus grand tirage).
+Grand total: 339.29$ (13$/calendrier)
 
 # Liste de tâches
 
@@ -792,6 +790,10 @@ plus grand tirage).
  * première lot d'impressions (fait, au CÉGEP Bois-de-Boulogne, sur la
    [Xerox Altalink C8045][] avec le papier [Verso Sterling Premium
    Digital][])
+ * reliure
+ * ajouter au blog - mentionné dans le [[rapport mensuel de décembre
+   2018|blog/2018-12-21-report]], mais mériterait une présentation
+   plus complète...
 
 Bugs upstream (signalés):
 
@@ -802,12 +804,10 @@ Bugs upstream (signalés):
    evaluation de d'autres fontes)
  * commencer le dimanche (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
 
-## Restantes
+## En cours
 
- * reliure
  * distribution (voir liste des récipiendaires dans agenda, 27
    novembre 2017)
- * ajouter au blog
 
 [latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
 

lier vers le blog
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 5624c5f5..9b31967d 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -18,7 +18,9 @@ camera](https://www.flickr.com/groups/sooc/)") et pourront l'être ultérieureme
 fanatiques du développement, les "négatifs" ("raw") sont disponibles
 sur demande.)
 
-Les détails sur mes outils de travail sont dans la page [camera](/hardware/camera/).
+Les détails sur mes outils de travail sont dans la page
+[camera](/hardware/camera/). J'ai également mentionné le projet dans mon [[rapport
+mensuel de décembre 2018|blog/2018-12-21-report]].
 
 ## Note sur le nom ##
 

creating tag page tag/brazil
diff --git a/tag/brazil.mdwn b/tag/brazil.mdwn
new file mode 100644
index 00000000..be6f1be9
--- /dev/null
+++ b/tag/brazil.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged brazil"]]
+
+[[!inline pages="tagged(brazil)" actions="no" archive="yes"
+feedshow=10]]

creating tag page tag/photography
diff --git a/tag/photography.mdwn b/tag/photography.mdwn
new file mode 100644
index 00000000..f41b0b82
--- /dev/null
+++ b/tag/photography.mdwn
@@ -0,0 +1,4 @@
+[[!meta title="pages tagged photography"]]
+
+[[!inline pages="tagged(photography)" actions="no" archive="yes"
+feedshow=10]]

monthly report
diff --git a/blog/2018-12-21-report.mdwn b/blog/2018-12-21-report.mdwn
new file mode 100644
index 00000000..ba14b277
--- /dev/null
+++ b/blog/2018-12-21-report.mdwn
@@ -0,0 +1,350 @@
+[[!meta title="December 2018 report: archiving Brazil, calendar and LTS"]]
+
+[[!toc levels=2]]
+
+Last two months free software work
+==================================
+
+Keen readers probably noticed that I didn't produce a report in
+November. I am not sure why, but I couldn't find the time to do
+so. When looking back at those past two months, I didn't find that many
+individual projects I worked on, but there were massive ones, of the
+scale of archiving the entire government of Brazil or learning the
+intricacies of print media, both of which were slightly or largely
+beyond my existing skill set.
+
+Calendar project
+----------------
+
+I've been meaning to write about this project more publicly for a
+while, but never found the way to do so productively. But now that the
+project is almost over -- I'm getting the final prints today and
+mailing others hopefully soon -- I think this deserves at least a few
+words.
+
+As some of you might know, I bought a new camera last January. Wanting
+to get familiar with how it works and refresh my photography skills, I
+decided to embark on the project of creating a photo calendar
+for 2019. The basic idea was simple: take pictures regularly, then
+each month pick the best picture of that month, collect all those
+twelve pictures and send that to the corner store to print a few
+calendars.
+
+Simple, right?
+
+Well, twelve pictures turned into a whopping 8000 pictures since
+January, not all of which were that good of course. And of course, a
+calendar has twelve months -- so twelve pictures -- but also a cover
+and a back, which means *thirteen* pictures and some explaining. Being
+critical of my own work, it turned out that finding those pictures was
+sometimes difficult, especially considering the medium imposed some
+rules I didn't think about.
+
+For example, the US Letter paper size imposes a different ratio (1.29)
+than the photographic ratio (~1.5), which meant I had to reframe every
+photograph. Sometimes this meant discarding entire ideas. Other photos
+were discarded because they were too depressing, even if I found them
+artistically or journalistically important: you don't want to be
+staring at a poor kid distressed at going into school every morning
+for an entire month. Another piece of advice I got was to forget about
+sunsets and dark pictures, as they are difficult to render correctly in
+print. We're used to bright screens displaying those pictures; paper
+is a completely different feeling. Since I am fond of night and star
+photography, this was a fairly dramatic setback, even though I
+still did feature two excellent pictures.
+
+Then I got a little carried away. At the suggestion of a friend, I
+figured I could get rid of the traditional holiday dates and replace
+them with truly secular holidays, which got me involved in a deep
+search for layout tools, which in turn naturally brought me to [this
+LaTeX template](https://github.com/profound-labs/wallcalendar/). Those who have worked with LaTeX (or probably any
+professional layout tool) know what's next: I spent a significant
+amount of time perfecting the rendering and crafting the final
+document.
+
+Slightly upset by the prices quoted by the corner store
+(15$CAD/calendar!), I figured I could do better by printing on my own,
+especially encouraged by a friend who had access to a good color laser
+printer. I then spent multiple days (if not weeks) looking for the
+right paper, which sent me down the rabbit hole of paper weights,
+brightness, texture, and more. I'll just say this: if you ever thought
+lengths were ridiculous in the imperial system, wait until you [find
+out how paper weights work](https://discuss.pixls.us/t/grammage-a-lamentation/10182). I finally
+managed to find some 270gsm gloss paper at the corner store -- after
+looking all over town, it was *right there* -- and did a first print
+of 15 calendars, which turned into 14 because of trouble with jammed
+paper. Because the printer couldn't do *recto-verso* copies, I had to
+spend basically 4 hours tending to that stupid device, bringing my
+loathing of printers (the machines) and my respect for printers (the
+people) to an entirely new level.
+
+The time spent on the print was clearly not worth it in the end, and I
+ended up scheduling another print with a professional printer. The
+first proofs are clearly superior to the ones I made myself and,
+in retrospect, completely worth the 15$ per copy.
+
+I still haven't paid for my time in any significant way on that
+project, something I seem to excel at doing consistently. The prints
+themselves are not paid for, but my time in producing those
+photographs is not paid either, which clearly outlines my future as a
+professional photographer, if any, lie far away from producing those
+silly calendars, at least for now.
+
+More documentation on the project is available, in French, in
+[[communication/photo/calendrier-2019]]. I am also hoping to
+eventually publish a graphical review of the calendar, but for now
+I'll leave that for the friends and family who will receive the
+calendar as a gift...
+
+Archival of Brazil
+------------------
+
+Another modest project I embarked on was a mission to archive the
+government of Brazil following the election of the infamous [Jair
+Bolsonaro](https://en.wikipedia.org/wiki/Jair_Bolsonaro), dictatorship supporter, homophobe, racist, nationalist
+and christian freak who somehow managed to get elected president of
+Brazil. Since he threatened to rip apart basically the entire fabric
+of Brazilian society, comrades were worried that he might attack and
+destroy precious archives and data from government archives when he
+comes in power, in January 2019. Like many countries in Latin America
+that lived under dictatorships in the 20th century, Brazil made an
+effort to investigate and keep memory of the atrocities that were
+committed during those troubled times.
+
+Since I had [written about archiving websites](https://anarc.at/blog/2018-10-04-archiving-web-sites/), those comrades
+naturally thought I could be of use, so we embarked on a crazy quest
+to archive Brazil, basically. We tried to create a movement similar to
+the [Internet Archive (IA) response to the 2016 Trump election](https://www.archiveteam.org/index.php?title=Government_Backup) but
+were not really successful at getting IA involved. I was, fortunately,
+able to get the good folks at [Archive Team](https://www.archiveteam.org/) (AT) involved and we
+have [successfully archived](https://www.archiveteam.org/index.php?title=ArchiveBot/2018_Brazilian_general_elections) a significant number of websites,
+adding terabytes of data to the IA through the backdoor that is AT. We
+also ran a bunch of archival jobs on a special server, leveraging tools
+like [youtube-dl](https://youtube-dl.org/), [git-annex](http://git-annex.branchable.com/), [wpull](https://github.com/ArchiveTeam/wpull) and, eventually,
+[grab-site](https://github.com/ludios/grab-site/) to archive websites, social network sites and video
+feeds.
+
+I kind of burned out on the job. Following Brazilian politics was
+scary and traumatizing -- I have been very close to Brazilian folks and
+they are colorful, friendly people. The idea that such a horrible
+person could come into power there is absolutely terrifying and I
+kept thinking how disgusted I would be if I had to archive
+stuff from the government of Canada, which I do not particularly like
+either... This goes against a lot of my personal ethics, but then it
+beats the obscurity of pure destruction of important scientific,
+cultural and historical data.
+
+Miscellaneous
+-------------
+
+Considering the workload involved in the above craziness, the fact
+that I worked on fewer projects than my usual madness shouldn't come
+as a surprise.
+
+ * As part of the calendar work, I wrote a new tool called
+   `moonphases`, which shows a list of moon phase events in the given
+   time period, and shipped that as part of [undertime 1.5](https://gitlab.com/anarcat/undertime/commits/1.5.0) for
+   lack of a better place.
+
+ * [AlternC](http://alternc.org) revival: friends at [Koumbit](https://koumbit.org) asked me for source
+   code of AlternC projects I was working on. I was disappointed (but
+   not surprised) that upstream simply took those repositories down
+   without publishing an archive. Thankfully, I still had SVN
+   checkouts but, unfortunately, those do not have the full history, so
+   I reconstructed repositories based on the last checkout that I had
+   for [alternc-mergelog](https://gitlab.com/anarcat/alternc-mergelog), [alternc-stats](https://gitlab.com/anarcat/alternc-stats), and
+   [alternc-slavedns](https://gitlab.com/anarcat/alternc-slavedns).
+
+ * I packaged two new projects into Debian, [bitlbee-mastodon](https://salsa.debian.org/debian/bitlbee-mastodon) (to
+   connect to the new Mastodon network over IRC) and
+   [python-internetarchive](https://tracker.debian.org/python-internetarchive) (a command line interface to the IA
+   upload forms).
+
+ * My work on archival tools led to a moderately important patch in
+   pywb: [allow symlinking and hardlinking files instead of just
+   copying](https://github.com/webrecorder/pywb/pull/409) was important to manage multiple large WARC files along
+   with git-annex.
+
+ * I also noticed the IA people were using a tool called [slurm](https://tracker.debian.org/pkg/slurm) to
+   diagnose bandwidth problems on their networks and [implemented
+   iface speed detection on Linux](https://github.com/mattthias/slurm/pull/29) while I was there. slurm is
+   interesting, but I also found out about [bmon](https://tracker.debian.org/pkg/bmon) through the
+   hilarious [hollywood](https://tracker.debian.org/pkg/hollywood) project. Each has their advantages: bmon
+   has packets per second graphs, while slurm only has bandwidth
+   graphs, but also notices maximal burst speeds, which is very useful.
+
+Debian Long Term Support (LTS)
+==============================
+
+This is my monthly [Debian LTS][] report. Note that my previous report
+wasn't published on this blog but [on the mailing list](https://lists.debian.org/debian-lts/2018/11/msg00090.html).
+
+[Debian LTS]: https://www.freexian.com/services/debian-lts.html
+
+## Enigmail / GnuPG 2.1 backport
+
+I've spent a significant amount of time working on the Enigmail
+backport for a third consecutive month. I first [published][] a
+straightforward backport of GnuPG 2.1 depending on the libraries
+available in jessie-backports last month, but then I actually [rebuilt
+the dependencies as well][] and sent a "HEADS UP" to the mailing list,
+which finally got people's attention.
+
+[rebuilt the dependencies as well]: https://lists.debian.org/87zht94219.fsf@curie.anarc.at
+[published]: https://lists.debian.org/87r2fqnja0.fsf@curie.anarc.at

(Diff truncated)
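The `moonphases` tool mentioned in the report above is not shown in this excerpt. As a rough illustration only (this is not the implementation shipped with undertime), a minimal moon phase computation can be derived from a known reference new moon and the mean synodic month length:

```python
from datetime import datetime, timezone

# A well-documented reference new moon (2000-01-06 18:14 UTC) and the
# mean length of the synodic month, in days. Real tools use proper
# ephemerides; this linear approximation drifts by hours, not days.
REF_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)
SYNODIC_MONTH = 29.530588853

def moon_phase_fraction(when):
    """Return the phase as a fraction of the synodic cycle:
    0.0 = new moon, 0.5 = full moon."""
    days = (when - REF_NEW_MOON).total_seconds() / 86400.0
    return (days % SYNODIC_MONTH) / SYNODIC_MONTH

def phase_name(fraction):
    # Coarse bucketing into the four principal phases, with a small
    # tolerance window around each one.
    if fraction < 0.03 or fraction > 0.97:
        return "new moon"
    if abs(fraction - 0.25) < 0.03:
        return "first quarter"
    if abs(fraction - 0.5) < 0.03:
        return "full moon"
    if abs(fraction - 0.75) < 0.03:
        return "last quarter"
    return "waxing" if fraction < 0.5 else "waning"
```

Listing phase events over a period is then a matter of stepping through the dates in that period and reporting the days where the bucket changes.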
rename
diff --git a/blog/git-annex.mdwn b/blog/2018-12-21-large-files-with-git.mdwn
similarity index 100%
rename from blog/git-annex.mdwn
rename to blog/2018-12-21-large-files-with-git.mdwn

add tags
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index f4a0c014..3fbe788a 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -312,4 +312,4 @@ using most of their current workflow.
 [first appeared]: https://lwn.net/Articles/774125/
 [Linux Weekly News]: http://lwn.net/
 
-[[!tag debian-planet lwn]]
+[[!tag debian-planet lwn git-annex git archive]]

article public, regen
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 4257c6eb..f4a0c014 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -1,9 +1,8 @@
 [[!meta title="Large files with Git: LFS and git-annex"]]
-\[LWN subscriber-only content\]
--------------------------------
 
-[[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T15:33:10-0500"]]
+[[!meta date="2018-12-11T00:00:00+0000"]]
+
+[[!meta updated="2018-12-21T13:48:33-0500"]]
 
 [[!toc levels=2]]
 
@@ -16,6 +15,10 @@ other projects are helping Git address this challenge. This article
 compares how [Git LFS][] and [git-annex][] address this problem and
 should help readers pick the right solution for their needs.
 
+  [commit graph work]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt
+  [Git LFS]: https://git-lfs.github.com/
+  [git-annex]: https://git-annex.branchable.com/
+
 The problem with large files
 ----------------------------
 
@@ -51,23 +54,32 @@ takes 21 minutes to perform, mostly taken up by Git resolving deltas.
 Commit, push, and pull are noticeably slower than a regular repository,
 taking anywhere from a few seconds to a minute depending one how old the
 local copy is. And running annotate on that large file can take up to
-ten minutes. So even though that is a simple text file, it's grown large
-enough to cause significant problems for Git, which is otherwise known
-for stellar performance.
+ten minutes. So even though that is a simple text file, it's grown
+large enough to cause significant problems for Git, which is otherwise
+known for stellar performance.
 
 Intuitively, the problem is that Git needs to copy files into its object
 store to track them. Third-party projects therefore typically solve the
 large-files problem by taking files out of Git. In 2009, Git evangelist
 Scott Chacon released [GitMedia][], which is a Git filter that simply
 takes large files out of Git. Unfortunately, there hasn't been an
-official release since then and it's [unclear][] if the project is still
-maintained. The next effort to come up was [git-fat][], first released
-in 2012 and still maintained. But neither tool has seen massive adoption
-yet. If I would have to venture a guess, it might be because both
-require manual configuration. Both also require a custom server (rsync
-for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits
+official release since then and it's [unclear][] if the project is
+still maintained. The next effort to come up was [git-fat][], first
+released in 2012 and still maintained. But neither tool has seen massive
+adoption yet. If I would have to venture a guess, it might be because
+both require manual configuration. Both also require a custom server
+(rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits
 collaboration since users need access to another service.
 
+  [improving the pack-file format]: https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/
+  [put it]: https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/
+  [Caca Labs]: http://caca.zoy.org/
+  [git-bigfiles]: http://caca.zoy.org/wiki/git-bigfiles
+  [1.7.6]: https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/
+  [GitMedia]: https://github.com/alebedev/git-media
+  [unclear]: https://github.com/alebedev/git-media/issues/15
+  [git-fat]: https://github.com/jedbrown/git-fat
+
 Git LFS
 -------
 
@@ -116,8 +128,8 @@ This process only works for new files you are importing into Git,
 however. If a Git repository already has large files in its history, LFS
 can fortunately "fix" repositories by retroactively rewriting history
 with [git lfs migrate][]. This has all the normal downsides of rewriting
-history, however — existing clones will have to be reset to benefit from
-the cleanup.
+history, however --- existing clones will have to be reset to benefit
+from the cleanup.
 
 LFS also supports [file locking][], which allows users to claim a lock
 on a file, making it read-only everywhere except in the locking
@@ -129,10 +141,10 @@ remove other user's locks by using the `--force` flag. LFS can also
 The main [limitation][] of LFS is that it's bound to a single upstream:
 large files are usually stored in the same location as the central Git
 repository. If it is hosted on GitHub, this means a default quota of 1GB
-storage and bandwidth, but you can purchase additional "packs" to expand
-both of those quotas. GitHub also limits the size of individual files to
-2GB. This [upset][] some users surprised by the bandwidth fees, which
-were previously hidden in GitHub's cost structure.
+storage and bandwidth, but you can purchase additional "packs" to
+expand both of those quotas. GitHub also limits the size of individual
+files to 2GB. This [upset][] some users surprised by the bandwidth fees,
+which were previously hidden in GitHub's cost structure.
 
 While the actual server-side implementation used by GitHub is closed
 source, there is a [test server][] provided as an example
@@ -140,15 +152,16 @@ implementation. Other Git hosting platforms have also [implemented][]
 support for the LFS [API][], including GitLab, Gitea, and BitBucket;
 that level of adoption is something that git-fat and GitMedia never
 achieved. LFS does support hosting large files on a server other than
-the central one — a project could run its own LFS server, for example —
-but this will involve a different set of credentials, bringing back the
-difficult user onboarding that affected git-fat and GitMedia.
+the central one --- a project could run its own LFS server, for example
+--- but this will involve a different set of credentials, bringing back
+the difficult user onboarding that affected git-fat and GitMedia.
 
 Another limitation is that LFS only supports pushing and pulling files
-over HTTP(S) — no SSH transfers. LFS uses some [tricks][] to bypass HTTP
-basic authentication, fortunately. This also might change in the future
-as there are proposals to add [SSH support][], resumable uploads through
-the [tus.io protocol][], and other [custom transfer protocols][].
+over HTTP(S) --- no SSH transfers. LFS uses some [tricks][] to bypass
+HTTP basic authentication, fortunately. This also might change in the
+future as there are proposals to add [SSH support][], resumable uploads
+through the [tus.io protocol][], and other [custom transfer
+protocols][].
 
 Finally, LFS can be slow. Every file added to LFS takes up double the
 space on the local filesystem as it is copied to the `.git/lfs/objects`
@@ -156,6 +169,20 @@ storage. The smudge/clean interface is also slow: it works as a pipe,
 but buffers the file contents in memory each time, which can be
 prohibitive with files larger than available memory.
 
+  [released]: https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/
+  [git lfs migrate]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn
+  [file locking]: https://github.com/git-lfs/git-lfs/wiki/File-Locking
+  [prune]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-prune.1.ronn
+  [limitation]: https://github.com/git-lfs/git-lfs/wiki/Limitations
+  [upset]: https://medium.com/@megastep/github-s-large-file-storage-is-no-panacea-for-open-source-quite-the-opposite-12c0e16a9a91
+  [test server]: https://github.com/git-lfs/lfs-test-server
+  [implemented]: https://github.com/git-lfs/git-lfs/wiki/Implementations%0A
+  [API]: https://github.com/git-lfs/git-lfs/tree/master/docs/api
+  [tricks]: https://github.com/git-lfs/git-lfs/blob/master/docs/api/authentication.md
+  [SSH support]: https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md
+  [tus.io protocol]: https://tus.io/
+  [custom transfer protocols]: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md
+
 git-annex
 ---------
 
@@ -171,17 +198,18 @@ the Git LFS storage layout is obviously inspired by git-annex. The
 original design of git-annex introduced all sorts of problems however,
 especially on filesystems lacking symbolic-link support. So Hess has
 implemented different solutions to this problem. Originally, when
-git-annex detected such a "crippled" filesystem, it switched to [direct
-mode][], which kept files directly in the work tree, while internally
-committing the symbolic links into the Git repository. This design
-turned out to be a little confusing to users, including myself; I have
-managed to shoot myself in the foot more than once using this system.
+git-annex detected such a "crippled" filesystem, it switched to
+[direct mode][], which kept files directly in the work tree, while
+internally committing the symbolic links into the Git repository. This
+design turned out to be a little confusing to users, including myself; I
+have managed to shoot myself in the foot more than once using this
+system.
 
 Since then, git-annex has adopted a different v7 mode that is also based
-on smudge/clean filters, which it called "[unlocked files][]". Like Git
-LFS, unlocked files will double disk space usage by default. However it
-*is* possible to reduce disk space usage by using "thin mode" which uses
-hard links between the internal git-annex disk storage and the work
+on smudge/clean filters, which it called "[unlocked files][]". Like
+Git LFS, unlocked files will double disk space usage by default. However
+it *is* possible to reduce disk space usage by using "thin mode" which
+uses hard links between the internal git-annex disk storage and the work
 tree. The downside is, of course, that changes are immediately performed
 on files, which means previous file versions are automatically
 discarded. This can lead to data loss if users are not careful.
@@ -197,31 +225,31 @@ those problems itself but it would be better for those to be implemented
 in Git natively.
 
 Being more distributed by design, git-annex does not have the same
-"locking" semantics as LFS. Locking a file in git-annex means protecting
-it from changes, so files need to actually be in the "unlocked" state to
-be editable, which might be counter-intuitive to new users. In general,
-git-annex has some of those unusual quirks and interfaces that often
-come with more powerful software.
+"locking" semantics as LFS. Locking a file in git-annex means
+protecting it from changes, so files need to actually be in the
+"unlocked" state to be editable, which might be counter-intuitive to
+new users. In general, git-annex has some of those unusual quirks and
+interfaces that often come with more powerful software.
 
 And git-annex is much more powerful: it not only addresses the
 "large-files problem" but goes much further. For example, it supports
-"partial checkouts" — downloading only some of the large files. I find
-that especially useful to manage my video, music, and photo collections,
-as those are too large to fit on my mobile devices. Git-annex also has
-support for location tracking, where it knows how many copies of a file
-exist and where, which is useful for archival purposes. And while Git
-LFS is only starting to look at transfer protocols other than HTTP,
-git-annex already supports a [large number][] through a [special remote
-protocol][] that is fairly easy to implement.
-
-"Large files" is therefore only scratching the surface of what git-annex
-can do: I have used it to build an [archival system for remote native
-communities in northern Québec][], while others have built a [similar
-system in Brazil][]. It's also used by the scientific community in
-projects like [GIN][] and [DataLad][], which manage terabytes of data.

(Diff truncated)
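
The "thin mode" this (truncated) diff describes corresponds to git-annex's `annex.thin` setting; the space saving comes from hard links, which share a single inode instead of duplicating data. A minimal plain-shell sketch of that mechanism (not git-annex itself; the file names are made up for the demo):

```shell
# Two directory entries, one inode: a hard link stores the data once,
# which is why git-annex "thin mode" avoids doubling disk usage.
dir="$(mktemp -d)"
printf 'large file contents\n' > "$dir/annex-object"
ln "$dir/annex-object" "$dir/worktree-file"   # hard link, not a copy
stat -c '%h' "$dir/worktree-file"             # link count: prints "2"
```

The flip side, as noted above, is that writing through either name modifies the single shared copy, so old versions are not preserved.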
add gh-backup
diff --git a/software/packages.yml b/software/packages.yml
index fd5f412b..c8b20fca 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -253,6 +253,7 @@
       - git-extras
       - git-mediawiki
       - git-svn
+      - github-backup
       - gitlint
       - glade
       - gocode

sort
diff --git a/software/packages.yml b/software/packages.yml
index 8764e547..fd5f412b 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -317,19 +317,18 @@
       - python3-unidecode
       - python-wheel
       - python3-vcr
-      - rename
-      - reprotest
-      - tox
-      - twine
       - qemu
       - qemu-kvm
       - quilt
+      - rename
+      - reprotest
       - sbuild
       - shellcheck
       - sloccount
       - sqlitebrowser
       - subversion
       - time
+      - tox
       - twine
       - ubuntu-dev-tools
       - vagrant
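
The "sort" commit above re-orders package entries by hand; whether such a list is sorted can also be checked mechanically. A sketch using GNU `sort -C` on a hypothetical extract of the list (the sample entries are copied from the diff, the temp file is made up for the demo):

```shell
# Check that a package-list fragment is alphabetically sorted, as the
# "sort" commit above does by hand.
list="$(mktemp)"
cat > "$list" <<'EOF'
      - qemu
      - qemu-kvm
      - quilt
      - rename
      - reprotest
      - sbuild
      - shellcheck
EOF
if LC_ALL=C sort -C "$list"; then
    echo sorted       # prints "sorted": -C exits 0 when already in order
else
    echo unsorted
fi
```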

two bandwidth monitors, each useful in its own way
bmon has nice PPS graphs and slurm has "peak" values; both have
nice graphs
diff --git a/software/packages.yml b/software/packages.yml
index fbdc8770..8764e547 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -413,6 +413,7 @@
       - apache2-utils
       - apt-transport-https
       - asciinema
+      - bmon
       - borgbackup
       - borgbackup-doc
       - bup
@@ -468,6 +469,7 @@
       - sdparm
       - siege
       - sipcalc
+      - slurm
       - socat
       - sshfs
       - strace

add dateutils/numutils to do commandline math in dev and sysadmin
diff --git a/software/packages.yml b/software/packages.yml
index 84280a9f..fbdc8770 100644
--- a/software/packages.yml
+++ b/software/packages.yml
@@ -211,6 +211,7 @@
       - curl
       - colordiff
       - cvs
+      - dateutils
       - debian-el
       - debian-installer-9-netboot-amd64
       - dgit
@@ -283,6 +284,7 @@
       - myrepos
       - ncdu
       - npm
+      - num-utils
       - org-mode
       - org-mode-doc
       - pastebinit
@@ -418,6 +420,7 @@
       - ccze
       - cu
       - curl
+      - dateutils
       - debian-goodies
       - deborphan
       - debsums
@@ -451,6 +454,7 @@
       - netdata
       - nethogs
       - nmap
+      - numutils
       - oping
       - passwdqc
       - powertop

note the errors in the first printing
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 2487f813..5624c5f5 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -809,6 +809,31 @@ Bugs upstream (signalés):
 
 [latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
 
+# Errata
+
+## December 14, 2018 printing
+
+The November photo is described as having been taken at a 5mm
+focal length. It was actually a 55mm focal length.
+
+The cover and colophon photos had no descriptions; they should
+read:
+
+> * Cover: Sky and earth. Pointe-St-Charles, Montréal. `f/4
+> 1/1000s ISO 800 18mm`
+>
+> * Colophon: Author. Parc Lafontaine, Montréal. `f/22 1/27s ISO 3200
+> 18mm`
+
+The final rendering of some photos is not entirely satisfying. The
+July and November photos, for example, are too dark and, in
+November's case, too blue. The hedgehog photo is "out of gamut",
+which makes some areas blurry and strange. The August photo has
+heavy grain that could not be corrected in Darktable, because the
+noise-reduction module introduced a distracting "gouache" effect.
+These defects were noticed too late in the production chain to be
+corrected.
+
 # Projets similaires #
 
 Ce projet a été inspiré par d'autres projets [DIY](https://fr.wikipedia.org/wiki/Do_it_yourself), en particulier

status update: first printing done!
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 9e5b017e..2487f813 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -785,6 +785,11 @@ larger run).
    that it lines up properly. changing the margins does not work
    because it messes up the previous pages. ouch. (fixed -
    shortened descs)
+ * print a test proof (done)
+ * last proofreading pass (done, errata added here)
+ * first print run (done, at CÉGEP Bois-de-Boulogne, on the
+   [Xerox Altalink C8045][] with [Verso Sterling Premium
+   Digital][] paper)
 
 Upstream bugs (reported):
 
@@ -793,15 +798,13 @@ Upstream bugs (reported):
  * incorrect image crop (fixed, crop at 8.5x11 ratio)
  * different font in the colophon (fixed, chose roboto after
    evaluating other fonts)
+ * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
 
 ## Remaining
 
- * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
- * print a test proof (did a few tests with pollo and
-   BEG, not finished)
- * last proofreading pass
- * final printing
- * distribution
+ * binding
+ * distribution (see recipient list in agenda, November 27,
+   2017)
  * add to the blog
 
 [latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322

fix links
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index f1e11703..9e5b017e 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -554,7 +554,7 @@ Possible printers:
    PDF proofs, noting a few typographic, spelling and layout
    defects. Thanks!
  * [CEGEP](https://agebdeb.org/impressions/): 0.20$/sheet
-   * [Xerox C8045](https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html) (nice blues, a bit smudgy on some shots)
+   * [Xerox Altalink C8045][] (nice blues, a bit smudgy on some shots)
    * [Canon image RUNNER ADVANCE C5550i](https://www.usa.canon.com/internet/portal/us/home/products/details/copiers-mfps-fax-machines/multifunction-copiers/imagerunner-advance-c5550i)
  * Bâtiment 7 has a somewhat informal digital printing lab
  * Centre Japonais de la Photo: 450-688-6530
@@ -564,6 +564,7 @@
  * Papeterie du plateau
  * CDN impression (Rosemont + St-Denis)
 
+[Xerox Altalink C8045]: https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html
 [latex-cmyk]: https://tex.stackexchange.com/a/9973/33322
 
 Tests were done at the CÉGEP with 148gsm (gram per square
@@ -632,9 +633,11 @@ So here is a recap of the papers considered:
    [Buroplus.ca](http://buroplus.ca/) AKA [Hamster.ca](https://www.hamster.ca/):
    * [Hammermill Color Copy Cover 100lb](https://www.hamster.ca/en/hammermill-color-copy-cover-790162) 30.79$/250 sheets
      (12¢/sheet), brightness 100, too matte
-   * Verso [Sterling Premium Digital](https://www.versoco.com/wps/wcm/connect/797e51bb-30fd-435d-a635-5a19b01c49b4/VC15-006+Sterling+Premium+Digital+Sell+Sheet+112015+NPC.pdf?MOD=AJPERES) 100lb cover (271gsm) gloss,
+   * [Verso Sterling Premium Digital][] 100lb cover (271gsm) gloss,
      21.78$CAD + tax (25.04$) for 200 sheets (11¢/sheet)
 
+[Verso Sterling Premium Digital]: https://www.versoco.com/wps/wcm/connect/797e51bb-30fd-435d-a635-5a19b01c49b4/VC15-006+Sterling+Premium+Digital+Sell+Sheet+112015+NPC.pdf?MOD=AJPERES
+
 I ended up choosing this last paper, out of desperation, given the
 low price and the (acceptable) in-store results. The results
 were in fact mediocre (black "spots", "erased" bands)
@@ -674,7 +677,7 @@ Other sites:
 
 ## Binding
 
-Repro-UQAM does binding and cutting. For a batch of 20 calendars, 7.80$
+[Repro-UQAM](https://repro.uqam.ca/) does binding and cutting. For a batch of 20 calendars, 7.80$
 for the cut, 1.50$/continuous spiral binding (2.20$ for 10). To be
 done before December 21, 24h turnaround, possibly 2 days. I did a
 first binding with "Proclick" but it is too cheap,
@@ -700,9 +703,9 @@ Costs to date:
 
  * Test BEG: 5.92$
  * Test Papeterie du plateau: 2.30$
- * Papeterie du plateau paper: 21.78$CAD + tax = 25.04$ ([Sterling
-   Premium Digital](https://www.versoco.com/wps/wcm/connect/797e51bb-30fd-435d-a635-5a19b01c49b4/VC15-006+Sterling+Premium+Digital+Sell+Sheet+112015+NPC.pdf?MOD=AJPERES) 100lb cover (271gsm) gloss, 200 sheets,
-   11¢/sheet)
+ * Papeterie du plateau paper: 21.78$CAD + tax = 25.04$ ([Verso
+   Sterling Premium Digital][] 100lb cover (271gsm) gloss, 200
+   sheets, 11¢/sheet)
  * Printing: 39$ (15 calendars * 13 sheets/calendar *
    20¢/sheet)
  * Time: 0$ (3h+ at 0$/hr)

explain how the photos were chosen
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 4f515019..f1e11703 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -55,6 +55,28 @@ seems fixed but in the end will never take place.
 
 > *— [Wikipedia](https://fr.wikipedia.org/wiki/Calendes)*
 
+# Photos
+
+Each photo was chosen according to the month it was taken in and,
+as much as possible, to reflect the spirit of that month. Some
+night photos and the sunsets were set aside because they are
+typically hard to bring out in print, at least that is the advice
+I received. Same for generally dark photos.
+
+All the photos were taken with a [Fuji X-T2](https://en.wikipedia.org/wiki/Fujifilm_X-T2), with
+various lenses, detailed on the last page of the calendar (the
+"colophon"). The night photos were taken with a tripod. Each
+photo was taken with one of these lenses:
+
+ * [Fujifilm 18-55mm f/2.8-4 R LM OIS](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf18_55mmf28_4_r_lm_ois/)
+ * [Fujifilm 55-200mm f/3.5-4.8 R LM OIS](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf55_200mmf35_48_r_lm_ois/)
+ * [Fujifilm 27mm f/2.8 ø39](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf27mmf28/)
+
+See also [[hardware/camera]] for the equipment I use. Most photos
+were reworked with [Darktable 2.4](https://darktable.org), except
+January, which was processed with Adobe Lightroom 6.
+
 # Events #
 
 A calendar is little boxes in columns with numbers

another pdf trick
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 82ee8c70..4f515019 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -631,7 +636,12 @@ and "odd", printing the latter first:
     pdftk calendes.pdf cat 1-endeven output calendes-even.pdf
     pdftk calendes.pdf cat 1-endodd output calendes-odd.pdf
 
-Printing took about 3h of continuous work.
+Printing took about 3h of continuous work. A copy of the photos
+for each month was also produced, but because of the paper jams,
+they are unusable for assembling a fifteenth calendar. Individual
+pages were extracted with `pdfjoin`:
+
+    pdfjoin calendes.pdf 22,24
 
 With more time, it might be possible to bring in paper from the
 United States or elsewhere. For example, [Xerox paper](https://www.xeroxpaperusa.com/en-us/where-to-buy/merchants)

first printing completed
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index af73ca6c..82ee8c70 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -617,7 +617,21 @@ I ended up choosing this last paper, out of desperation, given the
 low price and the (acceptable) in-store results. The results
 were in fact mediocre (black "spots", "erased" bands)
 but according to the technician, that was due to the machine and I could see
-the paper's potential. So I decided to give it a try.
+the paper's potential. So I decided to give it a try. The results
+are excellent, considering the cost.
+
+I printed 14 calendars of 13 sheets each. About half a dozen paper
+jams (at the feed!) occurred. I should normally have produced 15
+calendars, but at least twice several sheets were fed at once,
+which ruined one calendar - I was able to salvage the other by
+collating sheets and reprinting the other side. I had to split the
+calendar into an "even" and an "odd" PDF file, printing the latter
+first:
+
+    pdftk calendes.pdf cat 1-endeven output calendes-even.pdf
+    pdftk calendes.pdf cat 1-endodd output calendes-odd.pdf
+
+Printing took about 3h of continuous work.
 
 With more time, it might be possible to bring in paper from the
 United States or elsewhere. For example, [Xerox paper](https://www.xeroxpaperusa.com/en-us/where-to-buy/merchants)
@@ -662,17 +676,20 @@ Costs to date:
 
  * Papeterie du plateau paper: 21.78$CAD + tax = 25.04$ ([Sterling
   Premium Digital](https://www.versoco.com/wps/wcm/connect/797e51bb-30fd-435d-a635-5a19b01c49b4/VC15-006+Sterling+Premium+Digital+Sell+Sheet+112015+NPC.pdf?MOD=AJPERES) 100lb cover (271gsm) gloss, 200 sheets,
   11¢/sheet)
- * Subtotal: 30.00$
+ * Printing: 39$ (15 calendars * 13 sheets/calendar *
+   20¢/sheet)
+ * Time: 0$ (3h+ at 0$/hr)
+ * Subtotal: 69.00$
 
 New estimate, for 15 calendars:
 
- * Printing: 39$ (15 calendars * 13 sheets/calendar * 20¢/sheet)
  * Spiral binding: 22.50$ (1.50$/calendar)
  * Cutting and assembly: 7.80$
- * Subtotal: 69.30$
+ * Subtotal: 30.30$
 
-Planned grand total: 99.30$ (6.62$/calendar, for a run of 15
-calendars -- lower per-calendar costs for a larger run).
+Planned grand total: 99.30$ (7.09$/calendar, for a run of
+<del>15</del>14 calendars -- lower per-calendar costs for a
+larger run).
 
 # Task list
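
The cost arithmetic in the diff above can be sanity-checked in cents to avoid floating point (all figures are taken from the diff; the stated 69.00$ materials subtotal is used as given):

```shell
# Verify the printing cost and per-calendar total from the commit above.
printing=$((15 * 13 * 20))     # 15 calendars x 13 sheets x 20 cents = 3900 (39.00$)
binding=$((2250 + 780))        # spiral binding 22.50$ + cutting 7.80$ = 30.30$
grand=$((6900 + binding))      # stated subtotal 69.00$ + binding = 99.30$
per_calendar=$((grand / 14))   # 14 usable calendars -> 709 cents (7.09$)
echo "printing=$printing grand=$grand per_calendar=$per_calendar"
```

This confirms the revised 7.09$/calendar figure for a run of 14 calendars.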
 

removed
diff --git a/blog/2018-04-12-terminal-emulators-1/comment_6_81240d56b3227c328e18e2b225353c19._comment b/blog/2018-04-12-terminal-emulators-1/comment_6_81240d56b3227c328e18e2b225353c19._comment
deleted file mode 100644
index 2053ae3d..00000000
--- a/blog/2018-04-12-terminal-emulators-1/comment_6_81240d56b3227c328e18e2b225353c19._comment
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!comment format=mdwn
- ip="36.250.174.142"
- claimedauthor="lovedbyfew"
- url="http://www.lovedbyfew.com/"
- subject="lovedbyfew"
- date="2018-12-12T12:15:32Z"
- content="""
-<a href=\"http://www.hevikauppa.com/billig-nike-air-force-1-low-dam%C3%A4nner-skob\">billig nike air force 1 low dam盲nner</a> <a href=\"http://www.evelynreynoso.com/adidas-superstar-2-light-blau-shoesv\">adidas superstar 2 light blau</a> <a href=\"http://www.kangenkitchen.com/nike-lunar-one-shot-zalando-runninga\">nike lunar one shot zalando</a> <a href=\"http://www.parkcarlton.com/oakley-multicam-fuel-cell-sunglasses-sunglassesr\">oakley multicam fuel cell sunglasses</a> <a href=\"http://www.qtbymary.com/reebok-gl-6000-white-shoesv\">reebok gl 6000 white</a> <a href=\"http://www.otsutyazh.com/nike-lunarepic-low-flyknit-rainbow-skow\">nike lunarepic low flyknit rainbow</a>
- <a href=\"http://www.lovedbyfew.com/\" >lovedbyfew</a> [url=http://www.lovedbyfew.com/]lovedbyfew[/url]
-"""]]

removed
diff --git a/blog/2018-04-12-terminal-emulators-1/comment_5_f51ef06220af78e0ace98c6c7c1a97d6._comment b/blog/2018-04-12-terminal-emulators-1/comment_5_f51ef06220af78e0ace98c6c7c1a97d6._comment
deleted file mode 100644
index 62432327..00000000
--- a/blog/2018-04-12-terminal-emulators-1/comment_5_f51ef06220af78e0ace98c6c7c1a97d6._comment
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!comment format=mdwn
- ip="112.111.172.158"
- claimedauthor="newmadagasca"
- url="http://www.newmadagasca.com/"
- subject="newmadagasca"
- date="2018-12-12T12:06:35Z"
- content="""
-<a href=\"http://www.lbclubmitu.com/comprar-new-balance-1500-v1-skos\">comprar new balance 1500 v1</a> <a href=\"http://www.adanazuzu.com/fendi-logo-belt-beltr\">fendi logo belt</a> <a href=\"http://www.karenconti.com/mackage-jacket-toronto-east-mackagea\">mackage jacket toronto east</a> <a href=\"http://www.cliffsears.com/gold-nike-sandals-skow\">gold nike sandals</a> <a href=\"http://www.gfwholesalers.com/lightning-77-victor-hedman-yellow-2017-all-star-atlantic-division-stitched-nhl-jersey-nflg\">lightning 77 victor hedman yellow 2017 all star atlantic division stitched nhl jersey</a> <a href=\"http://www.rcnspecials.com/matt-ryan-authentic-jersey-nflr\">matt ryan authentic jersey</a>
- <a href=\"http://www.newmadagasca.com/\" >newmadagasca</a> [url=http://www.newmadagasca.com/]newmadagasca[/url]
-"""]]

Added a comment: lovedbyfew
diff --git a/blog/2018-04-12-terminal-emulators-1/comment_6_81240d56b3227c328e18e2b225353c19._comment b/blog/2018-04-12-terminal-emulators-1/comment_6_81240d56b3227c328e18e2b225353c19._comment
new file mode 100644
index 00000000..2053ae3d
--- /dev/null
+++ b/blog/2018-04-12-terminal-emulators-1/comment_6_81240d56b3227c328e18e2b225353c19._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="36.250.174.142"
+ claimedauthor="lovedbyfew"
+ url="http://www.lovedbyfew.com/"
+ subject="lovedbyfew"
+ date="2018-12-12T12:15:32Z"
+ content="""
+<a href=\"http://www.hevikauppa.com/billig-nike-air-force-1-low-dam%C3%A4nner-skob\">billig nike air force 1 low dam盲nner</a> <a href=\"http://www.evelynreynoso.com/adidas-superstar-2-light-blau-shoesv\">adidas superstar 2 light blau</a> <a href=\"http://www.kangenkitchen.com/nike-lunar-one-shot-zalando-runninga\">nike lunar one shot zalando</a> <a href=\"http://www.parkcarlton.com/oakley-multicam-fuel-cell-sunglasses-sunglassesr\">oakley multicam fuel cell sunglasses</a> <a href=\"http://www.qtbymary.com/reebok-gl-6000-white-shoesv\">reebok gl 6000 white</a> <a href=\"http://www.otsutyazh.com/nike-lunarepic-low-flyknit-rainbow-skow\">nike lunarepic low flyknit rainbow</a>
+ <a href=\"http://www.lovedbyfew.com/\" >lovedbyfew</a> [url=http://www.lovedbyfew.com/]lovedbyfew[/url]
+"""]]

Added a comment: newmadagasca
diff --git a/blog/2018-04-12-terminal-emulators-1/comment_5_f51ef06220af78e0ace98c6c7c1a97d6._comment b/blog/2018-04-12-terminal-emulators-1/comment_5_f51ef06220af78e0ace98c6c7c1a97d6._comment
new file mode 100644
index 00000000..62432327
--- /dev/null
+++ b/blog/2018-04-12-terminal-emulators-1/comment_5_f51ef06220af78e0ace98c6c7c1a97d6._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ ip="112.111.172.158"
+ claimedauthor="newmadagasca"
+ url="http://www.newmadagasca.com/"
+ subject="newmadagasca"
+ date="2018-12-12T12:06:35Z"
+ content="""
+<a href=\"http://www.lbclubmitu.com/comprar-new-balance-1500-v1-skos\">comprar new balance 1500 v1</a> <a href=\"http://www.adanazuzu.com/fendi-logo-belt-beltr\">fendi logo belt</a> <a href=\"http://www.karenconti.com/mackage-jacket-toronto-east-mackagea\">mackage jacket toronto east</a> <a href=\"http://www.cliffsears.com/gold-nike-sandals-skow\">gold nike sandals</a> <a href=\"http://www.gfwholesalers.com/lightning-77-victor-hedman-yellow-2017-all-star-atlantic-division-stitched-nhl-jersey-nflg\">lightning 77 victor hedman yellow 2017 all star atlantic division stitched nhl jersey</a> <a href=\"http://www.rcnspecials.com/matt-ryan-authentic-jersey-nflr\">matt ryan authentic jersey</a>
+ <a href=\"http://www.newmadagasca.com/\" >newmadagasca</a> [url=http://www.newmadagasca.com/]newmadagasca[/url]
+"""]]

fixed overflow
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 39619a5b..af73ca6c 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -733,6 +733,11 @@ calendars -- lower per-calendar costs for a larger run).
  * make a public gallery, only for the 14 photos of the 2019
    calendar, with explanations? would allow a nicer blog
    post... (fuck that)
+ * the last page overflows since the qrcode was added ([bug
+   report](https://github.com/profound-labs/wallcalendar/issues/15)) because the normal sizes had to be restored so that
+   it lines up properly. changing the margins does not work
+   because it messes up the previous pages. ouch. (fixed -
+   shortened descs)
 
 Upstream bugs (reported):
 
@@ -744,14 +749,10 @@ Upstream bugs (reported):
 
 ## Remaining
 
- * the last page overflows since the qrcode was added ([bug
-   report](https://github.com/profound-labs/wallcalendar/issues/15)) because the normal sizes had to be restored so that
-   it lines up properly. changing the margins does not work
-   because it messes up the previous pages. ouch.
  * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
  * print a test proof (did a few tests with pollo and
    BEG, not finished)
- * proofread a proof
+ * last proofreading pass
  * final printing
  * distribution
  * add to the blog

todo update
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 9f2556ec..39619a5b 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -222,8 +222,6 @@ We limit ourselves to 4 flagged days per month and/or one per week.
  * December 22: winter [Solstice][Solstice], shortest day
  * December 25: [Newton's birthday][Newtonmas] (instead of [Noël][])
 
-MISSING: astronomical events, see below.
-
 [420]: https://en.wikipedia.org/wiki/420_(cannabis_culture)
 [Abraham Maslow]: https://en.wikipedia.org/wiki/Abraham_Maslow
 [Action de grâce]: https://fr.wikipedia.org/wiki/Action_de_gr%C3%A2ce_(Canada)
@@ -727,6 +725,14 @@ calendars -- lower per-calendar costs for a larger run).
  * paper choice (see above, done, as far as possible)
  * binding choice (a priori: plastic spirals at Repro-UQAM or
    Katasoho, see above)
+ * make a home page for the project (here. the technical details
+   could go in a subpage if needed)
+ * point the colophon link (with qr-code, <del>in
+   [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode</del> - no half-tone, it is complicated
+   enough as it is!) to the home page
+ * make a public gallery, only for the 14 photos of the 2019
+   calendar, with explanations? would allow a nicer blog
+   post... (fuck that)
 
 Upstream bugs (reported):
 
@@ -738,13 +744,11 @@ Upstream bugs (reported):
 
 ## Remaining
 
+ * the last page overflows since the qrcode was added ([bug
+   report](https://github.com/profound-labs/wallcalendar/issues/15)) because the normal sizes had to be restored so that
+   it lines up properly. changing the margins does not work
+   because it messes up the previous pages. ouch.
  * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
- * make a home page for the project
- * make a public gallery, only for the 14 photos of the 2019
-   calendar, with explanations? would allow a nicer blog
-   post...
- * point the colophon link (with qr-code, in
-   [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode) to the home page
  * print a test proof (did a few tests with pollo and
    BEG, not finished)
  * proofread a proof

another idea: separate gallery for 2019
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 18417dcc..9f2556ec 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -724,7 +724,6 @@ calendars -- lower per-calendar costs for a larger run).
  * possibly [output the PDF in CMYK][latex-cmyk] - does not seem
    necessary for mardigrafe, in the end
  * tested Adobe RGB output: dull colors on screen, yuck.
- * make a home page for the project (done, here)
  * paper choice (see above, done, as far as possible)
  * binding choice (a priori: plastic spirals at Repro-UQAM or
    Katasoho, see above)
@@ -740,6 +739,10 @@ Upstream bugs (reported):
 ## Remaining
 
  * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
+ * make a home page for the project
+ * make a public gallery, only for the 14 photos of the 2019
+   calendar, with explanations? would allow a nicer blog
+   post...
  * point the colophon link (with qr-code, in
    [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode) to the home page
  * print a test proof (did a few tests with pollo and

update the wallcalendar docs
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 462af8b0..18417dcc 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -486,30 +486,11 @@ alternatives](https://alternativeto.net/software/wallcalendar/) on alternativeto
 
 I ended up using the [wallcalendar][] LaTeX template. The author
 provided fixes that do most of the work and I was able to produce
-a first draft!
+a fairly satisfying PDF rendering.
 
-Everything is built from [this repository](https://github.com/anarcat/wallcalendar/), on the
-`calendes` branch. The directories were then set up:
-
-    git clone -b calendes https://github.com/anarcat/wallcalendar/
-    cd wallcalendar
-    ln -s doc/examples/cal-photo-and-notes.tex .
-    ln -s doc/examples/fonts .
-    mkdir photos data
-    ( cd data ; ln -s ../doc/examples/data/* . ; mv anarcat.csv holidays.csv)
-
-The photos were copied into `photos/` with:
-
-    cp $(grep Thumbnail */index.md | sed 's/index.md:Thumbnail: //') ~/src/wallcalendar/photos/
-
-Update: the photos were re-cropped to 8.5x11 so this is
-obsolete.
-
-The content of `colophon.tex` was built by hand and is not
-currently in Git (but should be, in a private
-repository). Instructions for installing the calendar are in the
-git repository of the Sigal gallery (`~/Pictures/calendes/calendrier`)
-and a README there explains how to install the calendar.
+Instructions for installing the calendar are in the git
+repository of the Sigal gallery (`~/Pictures/calendes/calendrier`)
+and a README explains how to install the calendar.
 
 A note on fonts. The author of the original calendar chose the
 [Josefin Sans](https://www.fontsquirrel.com/fonts/Josefin-Sans) font for the calendar, which is very pretty, but the

update todo
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index ee8a462b..462af8b0 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -743,6 +743,10 @@ calendars -- lower per-calendar costs for a larger run).
  * possibly [output the PDF in CMYK][latex-cmyk] - does not seem
    necessary for mardigrafe, in the end
  * tested Adobe RGB output: dull colors on screen, yuck.
+ * make a home page for the project (done, here)
+ * paper choice (see above, done, as far as possible)
+ * binding choice (a priori: plastic spirals at Repro-UQAM or
+   Katasoho, see above)
 
 Upstream bugs (reported):
 
@@ -754,17 +758,15 @@ Upstream bugs (reported):
 
 ## Remaining
 
- * make a home page for the project
+ * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
  * point the colophon link (with qr-code, in
    [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode) to the home page
- * paper choice (glossy on both sides, according to Lozeau: 240gsm+)
- * mounting technique choice (a priori: spirals, Repro-UQAM,
-   see below)
  * print a test proof (did a few tests with pollo and
    BEG, not finished)
  * proofread a proof
  * final printing
- * start on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
+ * distribution
+ * add to the blog
 
 [latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
 

move the task list to the bottom
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index fef30fbd..ee8a462b 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -525,79 +525,6 @@ des recommendations de Google Fonts:
 Je suis resté avec la populaire fonte [Roboto](https://www.fontsquirrel.com/fonts/roboto) car elle est plus
 comprimée que Raleway et le LaTeX est bien formatté.
 
-# Liste de tâches
-
-## Faites
-
- * confirmer les dates (voir plus haut, fait)
- * vérifier dates: (fait)
-   * ... des changements d'heures (fait)
-   * ... de tous les autre? (on va dire que oui)
- * ajouter les évènements astronomiques (fait)
- * établir le contenu de la dernière page
-   * photo en exergue de l'auteur (fait)
-   * remerciements aux réviseurs-euses (fait)
-   * explications des dates (fait)
-   * sommaire du projet (fait)
-   * date, lieu (fait)
-   * explications astronomiques (dates UTC-4, fait)
-   * description des photos (fait)
- * choix final des photos:
-   * Cover: ok, DSCF2561.jpg (mur)
-   * January: ok, DSCF0879.jpg (bread and roses), with lightroom
-   * February: ok, DSCF1191.jpg (bird of prey)
-   * March: ok, DSCF1436.jpg (five roses), sharpness? hard to work on
-     the RAW, too far from the jpeg.
-   * April: not sure, DSCF2305.jpg (runners). was DSCF2175.jpg,
-     opitciwan, before; also consider DSCF2283.JPG (market)
-   * May: ok, DSCF4585.RAF (swallow)
-   * June: ok, DSCF4890.jpg (porcupine)
-   * July: ok, DSCF5762.jpg (lake), maybe put DSCF5746.jpg back
-     if it comes out well
-   * August: ok, DSCF6767.jpg (house), possibly a noise problem
-   * September: ok, DSCF7399.jpg (geese)
-   * October: ok, DSCF7648.jpg (st-gregoire)
-   * November: ok, brighten? snow contrast?
-   * December: ok, DSCF7823.jpg, woodpecker.
- * fix the print date in the colophon (done, generated
-   automatically at PDF render time)
- * recenter DSCF4890 (porcupine) - tried a crop
- * recenter the cover page (left cut line too close
-   to the photo), [reported upstream](https://github.com/profound-labs/wallcalendar/issues/14)
- * "de-smudge" DSCF6767 (house) - removed the noise reduction
- * possibly export the photos as TIFF - not possible, [LaTeX does
-   not support TIFF](https://tex.stackexchange.com/questions/89989/add-tif-image-to-latex), but it does support PNG; no visible
-   difference from JPG to the naked eye at 400% in evince
- * possibly make the [PDF uncompressed][latex-uncompressed], no difference
-   visible to the naked eye at 400% in evince
- * possibly [export the PDF in CMYK][latex-cmyk] - not needed
-   for mardigrafe after all
- * tested Adobe RGB output: dull colors on screen, yuck.
-
-Upstream bugs (reported):
-
- * fix the September page overflowing (fixed, put the notes back)
- * overflow on the first page (fixed, cropped to 8.5x11)
- * incorrect image crop (fixed, crop to 8.5x11 ratio)
- * different font in the colophon (fixed, picked roboto after
-   evaluating other fonts)
-
-## Remaining
-
- * make a homepage for the project
- * point the link in the colophon (with a qr-code, in
-   [halftone](https://jsfiddle.net/lachlan/r8qWV/) style) at the homepage
- * paper choice (glossy on both sides; per Lozeau: 240gsm+)
- * choice of binding technique (a priori: spirals, Repro-UQAM,
-   see below)
- * print a test proof (did a few tests with pollo and
-   BEG, not finished)
- * proofreading of a proof
- * final print
- * start weeks on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
-
-[latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
-
 # Printing #
 
 If I had chosen not to do the layout, I would have used the
@@ -768,6 +695,79 @@ New estimate, for 15 calendars:
 Planned grand total: $99.30 ($6.62/calendar, for a run of 15
 calendars -- lower per-calendar costs for a larger run).
 
+# Task list
+
+## Done
+
+ * confirm the dates (see above, done)
+ * check dates: (done)
+   * ... of the daylight saving changes (done)
+   * ... of all the others? (let's say yes)
+ * add the astronomical events (done)
+ * settle the content of the last page
+   * featured photo of the author (done)
+   * thanks to the reviewers (done)
+   * explanations of the dates (done)
+   * project summary (done)
+   * date, place (done)
+   * astronomical explanations (UTC-4 dates, done)
+   * description of the photos (done)
+ * final photo selection:
+   * Cover: ok, DSCF2561.jpg (wall)
+   * January: ok, DSCF0879.jpg (bread and roses), with lightroom
+   * February: ok, DSCF1191.jpg (bird of prey)
+   * March: ok, DSCF1436.jpg (five roses), sharpness? hard to work on
+     the RAW, too far from the jpeg.
+   * April: not sure, DSCF2305.jpg (runners). was DSCF2175.jpg,
+     opitciwan, before; also consider DSCF2283.JPG (market)
+   * May: ok, DSCF4585.RAF (swallow)
+   * June: ok, DSCF4890.jpg (porcupine)
+   * July: ok, DSCF5762.jpg (lake), maybe put DSCF5746.jpg back
+     if it comes out well
+   * August: ok, DSCF6767.jpg (house), possibly a noise problem
+   * September: ok, DSCF7399.jpg (geese)
+   * October: ok, DSCF7648.jpg (st-gregoire)
+   * November: ok, brighten? snow contrast?
+   * December: ok, DSCF7823.jpg, woodpecker.
+ * fix the print date in the colophon (done, generated
+   automatically at PDF render time)
+ * recenter DSCF4890 (porcupine) - tried a crop
+ * recenter the cover page (left cut line too close
+   to the photo), [reported upstream](https://github.com/profound-labs/wallcalendar/issues/14)
+ * "de-smudge" DSCF6767 (house) - removed the noise reduction
+ * possibly export the photos as TIFF - not possible, [LaTeX does
+   not support TIFF](https://tex.stackexchange.com/questions/89989/add-tif-image-to-latex), but it does support PNG; no visible
+   difference from JPG to the naked eye at 400% in evince
+ * possibly make the [PDF uncompressed][latex-uncompressed], no difference
+   visible to the naked eye at 400% in evince
+ * possibly [export the PDF in CMYK][latex-cmyk] - not needed
+   for mardigrafe after all
+ * tested Adobe RGB output: dull colors on screen, yuck.
+
+Upstream bugs (reported):
+
+ * fix the September page overflowing (fixed, put the notes back)
+ * overflow on the first page (fixed, cropped to 8.5x11)
+ * incorrect image crop (fixed, crop to 8.5x11 ratio)
+ * different font in the colophon (fixed, picked roboto after
+   evaluating other fonts)
+
+## Remaining
+
+ * make a homepage for the project
+ * point the link in the colophon (with a qr-code, in
+   [halftone](https://jsfiddle.net/lachlan/r8qWV/) style) at the homepage
+ * paper choice (glossy on both sides; per Lozeau: 240gsm+)
+ * choice of binding technique (a priori: spirals, Repro-UQAM,
+   see below)
+ * print a test proof (did a few tests with pollo and
+   BEG, not finished)
+ * proofreading of a proof
+ * final print
+ * start weeks on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
+
+[latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
+
 # Similar projects #
 
 This project was inspired by other [DIY](https://fr.wikipedia.org/wiki/Do_it_yourself) projects, in particular

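The per-calendar figure quoted in the estimate above is plain division; as a sanity check on the numbers (grand total $99.30, run of 15):

```shell
# per-calendar cost = grand total / number of calendars printed
awk -v total=99.30 -v run=15 'BEGIN { printf "%.2f$/calendar\n", total/run }'
```

A larger run dilutes the fixed costs (cutting, service fees), which is why the note says per-calendar cost drops with volume.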
update print status
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index e5549500..fef30fbd 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -642,6 +642,11 @@ We did tests at the CÉGEP with 148gsm (grams per square
 meter) matte paper, but it is clear it would come out better on
 glossy (double-sided) paper.
 
+For now I am going with printing at the CÉGEP, but I hope to do
+business with Katasoho in the long run. Costs at Mardigrafe are too
+high, despite the -- or maybe because of the -- exceptional
+service.
+
 ## Paper choice
 
 Need to figure out which paper weight to pick. It's not obvious because it's

split out the task list
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 6a8b1c36..e5549500 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -459,7 +459,7 @@ room for taking notes and relevant events.
 
 The following tools were considered for laying out the photos:
 
- * [wallcalendar](https://github.com/profound-labs/wallcalendar) - LaTeX template, superb but is not
+ * [wallcalendar][] - LaTeX template, superb but is not
   11x17/tabloid (A3) so too small, see [bug #4](https://github.com/profound-labs/wallcalendar/issues/4)
 * Inkscape...
   * ... has a "Calendar" plugin but it produces a whole year on a
@@ -477,16 +477,57 @@ The following tools were considered for laying out the photos:
 * Timeanddate.com have [PDF templates](https://www.timeanddate.com/calendar/create.html%3Fyear%3D2018%26country%3D29?typ=2&tpl=2&country=27&lang=fr&cpa=5&hol=4195103&wno=1)
 * Vertex42 have [ODT templates](https://www.vertex42.com/calendars/monthly-calendar.html) (plus Excel and Google)
 
+[wallcalendar]: https://github.com/profound-labs/wallcalendar
+
 I [asked the question on SE](https://softwarerecs.stackexchange.com/questions/52778/printing-a-monthly-calendar-with-custom-pictures-and-events) and documented the known [wallcalendar
 alternatives](https://alternativeto.net/software/wallcalendar/) on alternativeto.net.
 
 ## Wallcalendar ##
 
-I did more work on the LaTeX module. The author provided patches
-that do the bulk of the work and I was able to put together a first
-draft!
+I ended up using the [wallcalendar][] LaTeX template. The author
+provided patches that do the bulk of the work and I was able to put
+together a first draft!
+
+Everything is built from [this repository](https://github.com/anarcat/wallcalendar/), on the `calendes`
+branch. Then the directories were set up:
+
+    git clone -b calendes https://github.com/anarcat/wallcalendar/
+    cd wallcalendar
+    ln -s doc/examples/cal-photo-and-notes.tex .
+    ln -s doc/examples/fonts .
+    mkdir photos data
+    ( cd data ; ln -s ../doc/examples/data/* . ; mv anarcat.csv holidays.csv)
 
-Checklist:
+The photos were copied into `photos/` with:
+
+    cp $(grep Thumbnail */index.md | sed 's/index.md:Thumbnail: //') ~/src/wallcalendar/photos/
+
+Update: the photos have since been re-cropped to 8.5x11, so this is
+obsolete.
+
+The content of `colophon.tex` was built by hand and is not currently
+in Git (but should be, in a private repository). Instructions for
+installing the calendar are in the git repository of the Sigal
+gallery (`~/Pictures/calendes/calendrier`), where a README explains
+the installation.
+
+A note on fonts. The author of the original calendar chose the
+[Josefin Sans](https://www.fontsquirrel.com/fonts/Josefin-Sans) font for the calendar, which is very pretty, but the
+colophon fell back, by default, on the more classic [TeX Gyre
+Pagella](https://www.fontsquirrel.com/fonts/TeX-Gyre-Pagella). In my opinion this contrasts too much with the rest of the
+calendar, so I picked a sans font instead. I ran tests with some
+recommendations from Google Fonts:
+
+ * [Raleway](https://www.fontsquirrel.com/fonts/raleway): pretty, but the LaTeX did not come out right
+ * [Oswald](https://www.fontsquirrel.com/fonts/oswald): better because more condensed, but the LaTeX was
+   broken too
+
+I stayed with the popular [Roboto](https://www.fontsquirrel.com/fonts/roboto) font because it is more
+condensed than Raleway and the LaTeX formats well.
+
+# Task list
+
+## Done
 
  * confirm the dates (see above, done)
  * check dates: (done)
@@ -533,7 +574,15 @@ Checklist:
   for mardigrafe after all
 * tested Adobe RGB output: dull colors on screen, yuck.
 
-Remaining tasks:
+Upstream bugs (reported):
+
+ * fix the September page overflowing (fixed, put the notes back)
+ * overflow on the first page (fixed, cropped to 8.5x11)
+ * incorrect image crop (fixed, crop to 8.5x11 ratio)
+ * different font in the colophon (fixed, picked roboto after
+   evaluating other fonts)
+
+## Remaining
 
 * make a homepage for the project
 * point the link in the colophon (with a qr-code, in
@@ -545,53 +594,9 @@ Remaining tasks:
   BEG, not finished)
 * proofreading of a proof
 * final print
+ * start weeks on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))?
 
 [latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
-Remaining upstream bugs (reported):
-
- * fix the September page overflowing (fixed, put the notes back)
- * overflow on the first page (fixed, cropped to 8.5x11)
- * incorrect image crop (fixed, crop to 8.5x11 ratio)
- * different font in the colophon (fixed, picked roboto after
-   evaluating other fonts)
- * start weeks on Sunday (not fixed, [bug report](https://github.com/profound-labs/wallcalendar/issues/12))
-
-Everything is built from [this repository](https://github.com/anarcat/wallcalendar/), on the `calendes`
-branch. Then the directories were set up:
-
-    git clone -b calendes https://github.com/anarcat/wallcalendar/
-    cd wallcalendar
-    ln -s doc/examples/cal-photo-and-notes.tex .
-    ln -s doc/examples/fonts .
-    mkdir photos data
-    ( cd data ; ln -s ../doc/examples/data/* . ; mv anarcat.csv holidays.csv)
-
-The photos were copied into `photos/` with:
-
-    cp $(grep Thumbnail */index.md | sed 's/index.md:Thumbnail: //') ~/src/wallcalendar/photos/
-
-Update: the photos have since been re-cropped to 8.5x11, so this is
-obsolete.
-
-The content of `colophon.tex` was built by hand and is not currently
-in Git (but should be, in a private repository). Instructions for
-installing the calendar are in the git repository of the Sigal
-gallery (`~/Pictures/calendes/calendrier`), where a README explains
-the installation.
-
-A note on fonts. The author of the original calendar chose the
-[Josefin Sans](https://www.fontsquirrel.com/fonts/Josefin-Sans) font for the calendar, which is very pretty, but the
-colophon fell back, by default, on the more classic [TeX Gyre
-Pagella](https://www.fontsquirrel.com/fonts/TeX-Gyre-Pagella). In my opinion this contrasts too much with the rest of the
-calendar, so I picked a sans font instead. I ran tests with some
-recommendations from Google Fonts:
-
- * [Raleway](https://www.fontsquirrel.com/fonts/raleway): pretty, but the LaTeX did not come out right
- * [Oswald](https://www.fontsquirrel.com/fonts/oswald): better because more condensed, but the LaTeX was
-   broken too
-
-I stayed with the popular [Roboto](https://www.fontsquirrel.com/fonts/roboto) font because it is more
-condensed than Raleway and the LaTeX formats well.
 
 # Printing #
 

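An aside on the photo-copying one-liner in the diff above: `cp $(grep … | sed …)` relies on word splitting, so it breaks on photo paths containing spaces. A more defensive sketch, demonstrated in a throwaway directory (the layout is assumed from the diff: each album directory holds an `index.md` with a `Thumbnail:` line; `dest` here stands in for `~/src/wallcalendar/photos/`):

```shell
# Demo fixture: two albums, one thumbnail each (one with a space in its name)
cd "$(mktemp -d)"
mkdir -p album1 album2 dest
printf 'Thumbnail: pic 1.jpg\n' > album1/index.md
printf 'Thumbnail: pic2.jpg\n' > album2/index.md
touch 'album1/pic 1.jpg' album2/pic2.jpg

# Same pipeline as the wiki's one-liner, but one path per line through
# a read loop, so whitespace in filenames survives
grep 'Thumbnail' */index.md \
  | sed 's|index\.md:Thumbnail: ||' \
  | while IFS= read -r photo; do
      cp -- "$photo" dest/
    done
ls dest/
```

In the real tree this would be run at the top of the Sigal gallery, with `dest` pointing at the wallcalendar `photos/` directory.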
clarify the other paper options
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index fb9a0038..6a8b1c36 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -623,6 +623,7 @@ Possible printers:
 * [CEGEP](https://agebdeb.org/impressions/): $0.20/sheet
   * [Xerox C8045](https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html) (nice blues, a bit smudgy on some shots)
   * [Canon image RUNNER ADVANCE C5550i](https://www.usa.canon.com/internet/portal/us/home/products/details/copiers-mfps-fax-machines/multifunction-copiers/imagerunner-advance-c5550i)
+ * Bâtiment 7 have a somewhat informal digital print lab
 * Centre Japonais de la Photo: 450-688-6530
 * BEG Place Dupuis: 514-843-8647 2, 1, 1
 * Deserres marché central: 514-908-0505, [Epson P6000](https://epson.com/Support/Printers/Professional-Imaging-Printers/SureColor-Series/Epson-SureColor-P6000-Standard-Edition/s/SPT_SCP6000SE#manuals)
@@ -702,6 +703,18 @@ the results were actually mediocre (black "spots", "washed-out" bands)
 but according to the technician, that was due to the machine and I
 could see the paper's potential. So I decided to give it a try.
 
+With more time, it might be possible to bring in paper from the
+United States or elsewhere. For example, [Xerox paper](https://www.xeroxpaperusa.com/en-us/where-to-buy/merchants)
+comes from [Domtar](https://www.domtar.com/en/resources/paper-tools/where-buy) and, according to Réal from the B7 print shop,
+it should be possible to place an order. There are also [other paper
+suppliers](https://www.pagesjaunes.ca/search/si/1/papier/montr%C3%A9al/rca-00952600-Articles-de-papier%C2%B2rca-01238700-Papeterie%C2%B2rca-00952010-Papetieres-et-distributeurs-de-papiers) according to the yellow pages.
+
+Other sites:
+
+ * [Limited papers](https://www.limitedpapers.com/)
+ * [Digital Paper Supply](https://www.digitalpapersupply.com/) (expensive?)
+ * [Kelly Paper](https://kellypaper.com/) (US only?)
+
 ## Binding
 
 Repro-UQAM do binding and cutting. On a batch of 20 calendars, $7.80

printing section cleanup, first paper chosen
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 245759bc..fb9a0038 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -595,103 +595,128 @@ condensed than Raleway and the LaTeX formats well.
 
 # Printing #
 
-If I don't edit it myself, I can just use the Jean-Coutu template or
-whatever.
+If I had chosen not to do the layout, I would have used the
+Jean-Coutu or Lozeau templates. But since I put a lot of time into
+choosing the events and into the layout, I end up needing a custom
+print. According to [MagicFab](https://social.weho.st/@anarcat/100916925996701022), in short:
+PDF + Bureau en Gros or a local printer, with a test beforehand.
 
-If I do edit it myself, I have no idea how to print it just yet.
-
-Some [advice from magicfab](https://social.weho.st/@anarcat/100916925996701022), in short: PDF + Bureau en Gros or
-a local printer, with a test copy first.
+## Printers
 
 Possible printers:
 
 * [Clic Imprimerie](https://www.yelp.com/biz/clickimprimerie-montr%C3%A9al-3): cheap, apparently; merged and moved, to
   follow up
 * [Katasoho](http://katasoho.com/2/): comrades, green, 5000 Iberville, on the south side of
-   the tracks. no response.
+   the tracks, no final price yet
 * [Lozeau](https://lozeau.com/): $20/calendar 8.5x11, 5 to 7 business days
 * [Jean-Coutu](https://iphoto.jeancoutu.com/fr/Products/Calendars/classic): $20/calendar + 30% discount, identical to Lozeau, 5
-   to 7 business days, probably the same lab
+   to 7 business days, probably the same lab as Lozeau
 * Copie Express (St-Denis/Jean-Talon): $26/calendar ($1.10/sheet,
   $0.85/print)
 * [Mardigrafe](http://mardigrafe.com/): in contact, they ask for CMYK, [not available in
-   Darktable](https://discuss.pixls.us/t/print-shop-asks-for-cmyk-any-options/10176/5) but maybe possible in [LaTeX][latex-cmyk]
+   Darktable](https://discuss.pixls.us/t/print-shop-asks-for-cmyk-any-options/10176/5) but maybe possible in [LaTeX][latex-cmyk].
+   $14.07/calendar. [Konica-Minolta BizHub Press C1070](https://www.biz.konicaminolta.com/production/c1070_c1060/index.html). They
+   provided a (free!) correction by email of one of the first PDF
+   proofs, pointing out a few typographic, spelling and layout
+   flaws. Thanks!
 * [CEGEP](https://agebdeb.org/impressions/): $0.20/sheet
+   * [Xerox C8045](https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html) (nice blues, a bit smudgy on some shots)
+   * [Canon image RUNNER ADVANCE C5550i](https://www.usa.canon.com/internet/portal/us/home/products/details/copiers-mfps-fax-machines/multifunction-copiers/imagerunner-advance-c5550i)
 * Centre Japonais de la Photo: 450-688-6530
 * BEG Place Dupuis: 514-843-8647 2, 1, 1
- * Deserres marché central: 514-908-0505
+ * Deserres marché central: 514-908-0505, [Epson P6000](https://epson.com/Support/Printers/Professional-Imaging-Printers/SureColor-Series/Epson-SureColor-P6000-Standard-Edition/s/SPT_SCP6000SE#manuals)
 * Lozeau: 514-274-6577
+ * Papeterie du plateau
+ * CDN impression (Rosemont + St-Denis)
 
[latex-cmyk]: https://tex.stackexchange.com/a/9973/33322
 
-We did tests with 148gsm (grams per square meter) matte paper, but
-it is clear it would come out better on glossy paper (and vice
-versa). The two printers used:
-
- * [Xerox C8045](https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html) (nice blues, a bit smudgy on some shots)
- * [Canon image RUNNER ADVANCE C5550i](https://www.usa.canon.com/internet/portal/us/home/products/details/copiers-mfps-fax-machines/multifunction-copiers/imagerunner-advance-c5550i)
-
-Possible papers:
+We did tests at the CÉGEP with 148gsm (grams per square
+meter) matte paper, but it is clear it would come out better on
+glossy (double-sided) paper.
 
- * [HP color laser brochure paper](https://www.staples.ca/fr/HP-Papier-laser-couleur-pour-d%C3%A9pliants-8-1-2-po-x-11-po-lustr%C3%A9/product_608153_1-CA_2_20001): $30/150 sheets
-   (20¢/sheet), 150gsm / 40 lbs, brightness 97
- * [Staples brochure / flyer paper](https://www.staples.ca/fr/staples-papier-%C3%A0-brochure-et-circulaire-mat-8-x-11-po/product_SS2006024_1-CA_2_20001#/id='dropdown_610489'): $38/150 sheets
-   (25¢/sheet), 48lbs (120gsm or 170gsm?)
- * [Verso - Sterling laser paper](https://www.staples.ca/fr/verso-papier-laser-sterling-num%C3%A9rique-lustr%C3%A9-premium-80-lb-8-5-x-11-po-blanc-bte-3000-feuilles-283618/product_2856893_1-CA_2_20001): $153/3000 sheets
-   (5¢/sheet), 118gsm, 16mil, brightness 94
- * [Carolina Cover 10 pts C2C](https://www.westrock.com/en/products/paperboard/carolina-digital-c1s-and-c2s), brightness 92
- * [Xerox Bold Super Gloss Cover 12 pts 3R11686](https://www.staples.com/Xerox-Bold-Super-Gloss-Cover-12-Point-8-1-2-x-11-Case/product_194862), 247gsm C1S
- * [Xerox Bold Coated Gloss 3R11462](https://www.staples.com/Xerox-Bold-Coated-Gloss-Digital-Printing-Paper-100-lb-Cover-8-1-2-x-11-Case/product_194865) $111/100lbs, 1500 sheets, 7¢/sheet, unclear if C1S or C2S
- * [Hammermill Color Copy Cover 100lb](https://www.hamster.ca/en/hammermill-color-copy-cover-790162) $30.79/250 sheets
-   (12¢/sheet), brightness 100
+## Paper choice
 
 Need to figure out which paper weight to pick. It's not obvious because it's
 sometimes in "lbs" and sometimes in grams. The "naive" conversion seems
-to be [1.48gsm/lbs](https://bollorethinpapers.com/en/basic-calculators/gsm-basis-weight-calculator), but according to [this table](http://coastalprint.com/convert_gsm_to_pounds/) the imperial system varies
-with the kind of paper (WTF bond, text, cover, bristol, index,
-tag??).
-
-Update: did a test at BEG, on their papers, good success on "80lbs
-cover" (216gsm) glossy on both sides. But that paper is available
-only "behind the counter", if you order prints from them, not in
-store. The test print cost $5.92 for 5 photos, i.e. 53¢/print and
-0.10¢/sheet ("lttr text C2S"), with a $2 service fee.
-
-I also called Omer Deserres to see if they carry paper. Same as BEG:
-only behind the counter. Their supplier is Fuji but they cannot give
-out their contact. They use an [Epson
-P6000](https://epson.com/Support/Printers/Professional-Imaging-Printers/SureColor-Series/Epson-SureColor-P6000-Standard-Edition/s/SPT_SCP6000SE#manuals).
-
-All these papers are a bit too light. According to Lozeau, you need
-240gsm minimum. That is much harder to find - even BEG's 216gsm is
-not for sale ... at BEG, and hard to find elsewhere (nothing at
-Deserres either). After deeper research, pollo found the [Xerox
-manual](https://www.xerox.com/downloads/usa/en/supplies/rml/AltaLink_C8030_C8035_C8045_C8055_RML_April2017.pdf) which lists which papers are "compatible" (read: made by
-Xerox). The [3R11686](https://www.staples.com/Xerox-Bold-Super-Gloss-Cover-12-Point-8-1-2-x-11-Case/product_194862) is interesting (247gsm) but one side only
-(C1S). That leaves only the 3R11462 (280gsm), which only feeds
-manually, but which is (surprise!) [available at BEG](https://www.staples.com/Xerox-Bold-Coated-Gloss-Digital-Printing-Paper-100-lb-Cover-8-1-2-x-11-Case/product_194865) ($111 for
-100lbs!, 1500 sheets, 7¢/sheet). There seems to be no gloss that
-does double-sided on this machine. (!)
-
-Louis Desjardins, from Mardigrafe, quoted the project at
-$14/calendar and did a (free!) correction of a PDF proof, pointing
-out a few typographic, spelling and layout flaws. Mardigrafe use a
-[Konica-Minolta BizHub Press C1070](https://www.biz.konicaminolta.com/production/c1070_c1060/index.html).
+to be [1.48gsm/lbs](https://bollorethinpapers.com/en/basic-calculators/gsm-basis-weight-calculator), but according to [this table](http://coastalprint.com/convert_gsm_to_pounds/) the imperial system
+varies with the kind of paper (bond, text, cover, bristol, index,
+tag, WTF?).
 
 As a reference, the paper used by Jean-Coutu for their prints seems
 closer to 280gsm. A sample taken from a 2018 calendar weighed 16g
 per sheet (±1g) and the sheets are trimmed
 US Letter (277x214mm², ±1mm²), which gives between 250
-and 289gsm, hence the 280gsm.
+and 289gsm, hence the 280gsm. On the phone, Lozeau recommend 240gsm
+or more for a calendar.
+
+I did a test at BEG, with good results on "80lbs cover" (216gsm)
+glossy on both sides. But that paper is only available "behind the
+counter", that is, if they do the printing, not in store. The test
+print cost $5.92 for 5 photos, i.e. 53¢/print and 0.10¢/sheet
+("lttr text C2S"), with a $2 service fee. This would make the
+calendar costs prohibitive ($15/calendar, for the printing
+alone).
+
+After deeper research, pollo found the [Xerox manual](https://www.xerox.com/downloads/usa/en/supplies/rml/AltaLink_C8030_C8035_C8045_C8055_RML_April2017.pdf) which
+lists which papers are "compatible" (read: made by Xerox). The
+[3R11686](https://www.staples.com/Xerox-Bold-Super-Gloss-Cover-12-Point-8-1-2-x-11-Case/product_194862) is interesting (247gsm) but one side only (C1S). That
+leaves only the 3R11462 (280gsm), which only feeds manually, but
+which is [available at staples.com](https://www.staples.com/Xerox-Bold-Coated-Gloss-Digital-Printing-Paper-100-lb-Cover-8-1-2-x-11-Case/product_194865) but *NOT* at Bureau En Gros
+(staples.ca). There seems to be no gloss that does double-sided on
+this machine. (!) It seems impossible to find this paper at retail
+in Canada.
+
+I also called Omer Deserres to see if they carry paper. Same as
+BEG: only behind the counter. Their supplier is Fuji but they
+cannot give out their contact. Mardigrafe suggest using Carolina
+Cover 10 pts C2C, which is likewise impossible to find at
+retail.
+
+So here is a recap of the papers considered:
+
+ * Bureau En Gros:
+   * [HP color laser brochure paper](https://www.staples.ca/fr/HP-Papier-laser-couleur-pour-d%C3%A9pliants-8-1-2-po-x-11-po-lustr%C3%A9/product_608153_1-CA_2_20001): $30/150 sheets
+     (20¢/sheet), 150gsm / 40 lbs, brightness 97
+   * [Staples brochure / flyer paper](https://www.staples.ca/fr/staples-papier-%C3%A0-brochure-et-circulaire-mat-8-x-11-po/product_SS2006024_1-CA_2_20001#/id='dropdown_610489'): $38/150 sheets
+     (25¢/sheet), 48lbs (120gsm or 170gsm?)
+   * [Verso - Sterling laser paper](https://www.staples.ca/fr/verso-papier-laser-sterling-num%C3%A9rique-lustr%C3%A9-premium-80-lb-8-5-x-11-po-blanc-bte-3000-feuilles-283618/product_2856893_1-CA_2_20001): $153/3000 sheets
+     (5¢/sheet), 118gsm, 16mil, brightness 94
+ * Mardigrafe:
+   * [Carolina Cover 10 pts C2C](https://www.westrock.com/en/products/paperboard/carolina-digital-c1s-and-c2s), brightness 92
+ * Xerox / Domtar (special orders):
+   * [Xerox Bold Super Gloss Cover 12 pts 3R11686](https://www.staples.com/Xerox-Bold-Super-Gloss-Cover-12-Point-8-1-2-x-11-Case/product_194862), 247gsm C1S
+   * [Xerox Bold Coated Gloss 3R11462](https://www.staples.com/Xerox-Bold-Coated-Gloss-Digital-Printing-Paper-100-lb-Cover-8-1-2-x-11-Case/product_194865) $111/100lbs, 1500 sheets,
+     7¢/sheet, unclear if C1S or C2S
+ * Papeterie du Plateau (Beaubien / Chateaubriand) AKA UPS Store AKA
+   [Buroplus.ca](http://buroplus.ca/) AKA [Hamster.ca](https://www.hamster.ca/):
+   * [Hammermill Color Copy Cover 100lb](https://www.hamster.ca/en/hammermill-color-copy-cover-790162) $30.79/250 sheets
+     (12¢/sheet), brightness 100, too matte
+   * Verso [Sterling Premium Digital](https://www.versoco.com/wps/wcm/connect/797e51bb-30fd-435d-a635-5a19b01c49b4/VC15-006+Sterling+Premium+Digital+Sell+Sheet+112015+NPC.pdf?MOD=AJPERES) 100lb cover (271gsm) gloss,
+     $21.78CAD + tax ($25.04) for 200 sheets (11¢/sheet)
+
+I ended up choosing this last paper, out of desperation, given the
+low price and the (acceptable) in-store test results. The results
+were actually mediocre (black "spots", "washed-out" bands) but
+according to the technician, that was due to the machine and I
+could see the paper's potential. So I decided to give it a try.
+
+## Binding
 
 Repro-UQAM do binding and cutting. On a batch of 20 calendars, $7.80
 for the cut, $1.50 per continuous spiral binding ($2.20 for 10). To
 be done before December 21st, 24h turnaround, possibly 2 days. I
-did a first binding with "Proclick".
+did a first binding with "Proclick" but it is too cheap-looking;
+their plastic binding is better.
+
+Katasoho could also do the binding for a similar price
+($2/calendar). It would be interesting to work with them to start a
+longer-term relationship.
 
-Total costs:
+## Costs
+
+Preliminary cost estimate:
 
 * Paper: $0.65-3.25/calendar (5-25¢/sheet)
 * Printing: $2.60/calendar (20¢/sheet)
@@ -699,7 +724,26 @@ Total costs:
 * Subtotal: $4.75-7.35/calendar
 * 20 calendars: $87-147

(Diff truncated)
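The paper-weight arithmetic in the diffs above can be checked directly. A quick sketch, assuming the usual US *cover* basis (500 sheets of 20x26 in, i.e. roughly 2.70 gsm per pound, which is why the "naive" 1.48 text-weight factor misleads), plus the weighed Jean-Coutu sample (16g over a trimmed 277x214mm sheet):

```shell
# lbs (cover basis) to gsm: ream basis assumed to be 500 sheets of 20x26 in
awk -v lbs=80 'BEGIN { printf "80 lb cover = %.0f gsm\n", lbs * 2.7048 }'

# gsm from a weighed sample: 16 g sheet, trimmed US Letter 277 x 214 mm
awk 'BEGIN { printf "sample = %.0f gsm\n", 16 / (0.277 * 0.214) }'
```

This lands the BEG "80lbs cover" at ~216gsm and the Jean-Coutu sample around 270gsm, consistent with the 250-289gsm range quoted in the notes.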
other paper options
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 1539eb19..245759bc 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -631,7 +631,7 @@ versa). The two printers used:
 * [Xerox C8045](https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html) (nice blues, a bit smudgy on some shots)
 * [Canon image RUNNER ADVANCE C5550i](https://www.usa.canon.com/internet/portal/us/home/products/details/copiers-mfps-fax-machines/multifunction-copiers/imagerunner-advance-c5550i)
 
-Possible papers at BEG:
+Possible papers:
 
 * [HP color laser brochure paper](https://www.staples.ca/fr/HP-Papier-laser-couleur-pour-d%C3%A9pliants-8-1-2-po-x-11-po-lustr%C3%A9/product_608153_1-CA_2_20001): $30/150 sheets
   (20¢/sheet), 150gsm / 40 lbs, brightness 97
@@ -639,6 +639,11 @@ Possible papers at BEG:
   (25¢/sheet), 48lbs (120gsm or 170gsm?)
 * [Verso - Sterling laser paper](https://www.staples.ca/fr/verso-papier-laser-sterling-num%C3%A9rique-lustr%C3%A9-premium-80-lb-8-5-x-11-po-blanc-bte-3000-feuilles-283618/product_2856893_1-CA_2_20001): $153/3000 sheets
   (5¢/sheet), 118gsm, 16mil, brightness 94
+ * [Carolina Cover 10 pts C2C](https://www.westrock.com/en/products/paperboard/carolina-digital-c1s-and-c2s), brightness 92
+ * [Xerox Bold Super Gloss Cover 12 pts 3R11686](https://www.staples.com/Xerox-Bold-Super-Gloss-Cover-12-Point-8-1-2-x-11-Case/product_194862), 247gsm C1S
+ * [Xerox Bold Coated Gloss 3R11462](https://www.staples.com/Xerox-Bold-Coated-Gloss-Digital-Printing-Paper-100-lb-Cover-8-1-2-x-11-Case/product_194865) $111/100lbs, 1500 sheets, 7¢/sheet, unclear if C1S or C2S
+ * [Hammermill Color Copy Cover 100lb](https://www.hamster.ca/en/hammermill-color-copy-cover-790162) $30.79/250 sheets
+   (12¢/sheet), brightness 100
 
 Need to figure out which paper weight to pick. It's not obvious because it's
 sometimes in "lbs" and sometimes in grams. The "naive" conversion seems
@@ -675,6 +680,12 @@ and did a (free!) correction of a PDF proof, pointing out
 a few typographic, spelling and layout flaws.
 Mardigrafe use a [Konica-Minolta BizHub Press C1070](https://www.biz.konicaminolta.com/production/c1070_c1060/index.html).
 
+As a reference, the paper used by Jean-Coutu for their prints seems
+closer to 280gsm. A sample taken from a 2018 calendar weighed 16g
+per sheet (±1g) and the sheets are trimmed
+US Letter (277x214mm², ±1mm²), which gives between 250
+and 289gsm, hence the 280gsm.
+
 Repro-UQAM do binding and cutting. On a batch of 20 calendars, $7.80
 for the cut, $1.50 per continuous spiral binding ($2.20 for 10). To
 be done before December 21st, 24h turnaround, possibly 2 days. I

got the mee audio, mention the maestro
diff --git a/hardware/audio.mdwn b/hardware/audio.mdwn
index 3997e5a1..7ef7ef6d 100644
--- a/hardware/audio.mdwn
+++ b/hardware/audio.mdwn
@@ -13,7 +13,11 @@ Recommended by a friend:
  * [Blue designs snowball](https://www.bluedesigns.com/products/snowball/#): 70$, omni mike, USB, [50$USD B&H](https://www.bhphotovideo.com/c/product/836611-REG/Blue_SNOWBALL_ICE_Snowball_USB_Condenser_Microphone.html)
  * [Mee audio m6 pro](https://www.meeaudio.com/EP-M6PROG2/): 50$, ear bud, detachable, two cables:
   with/without mike, with two [comply foam](https://www.complyfoam.com/) tips that isolate from
-   outside sound, [50$ B&H](https://www.bhphotovideo.com/c/product/1412274-REG/mee_audio_ep_m6prog2_bk_mee_51_m6_pro_universal_fit.html)
+   outside sound, [50$ B&H](https://www.bhphotovideo.com/c/product/1412274-REG/mee_audio_ep_m6prog2_bk_mee_51_m6_pro_universal_fit.html) (update: bought those)
+
+ * [German Maestro](https://www.german-maestro.de/Englisch/Products/Logic/Headphones/) ([review](https://www.audiophileon.com/news/2014821german-maestro-gmp-835d-jfb-review), [another](https://djworx.com/review-germanmaestro-gmp-8-35-d-jfb-dj-headphones/), [street
+   stress-testing](https://www.youtube.com/watch?v=t5SPvZWgw7M)), apparently indestructible, pads, no mike,
+   200EUR (!)
 
 Inventory
 ---------
@@ -38,6 +42,11 @@ crap:
    volume](https://paste.anarc.at/mike-check/ipod-neck-bis.flac), [even in face](https://paste.anarc.at/mike-check/ipod-face-bis.flac)
  * nokia 2: [okay sound](https://paste.anarc.at/mike-check/nokia2-face.flac) but small hiss, moderately
    comfortable. clunky mike.
+ * Mee Audio M6 Pro earbuds, comfortable, detachable wires (but
+   non-standard), nice sound-isolating pads useful for listening in
+   noisy env (metro, jams), [good mike](https://paste.anarc.at/mike-check/meeaudiom6pro-face.flac), [echo-y in neck](https://paste.anarc.at/mike-check/meeaudiom6pro-neck.flac) but
+   still good because of nice neck clip, which also keeps cables away
+   from keyboard
 
 I also have many headphones-only gizmos:
 

remove stuff that is documented elsewhere
diff --git a/wishlist.mdwn b/wishlist.mdwn
index dca8ac28..b50dd23b 100644
--- a/wishlist.mdwn
+++ b/wishlist.mdwn
@@ -58,15 +58,16 @@ Voici des choses que vous pouvez m'acheter si vous êtes le Père Nowel (yeah ri
      * [La théorie du drone](http://www.worldcat.org/oclc/847564093)
      * [The ARRL Operating Manual](http://www.arrl.org/shop/The-ARRL-Operating-Manual/)
      * [Les idées noires](https://en.wikipedia.org/wiki/Id%C3%A9es_noires) de Franquin, [l'intégrale](http://www.worldcat.org/oclc/493932411)
- * une liseuse 13" comme le [Sony DPT-S1](https://www.sony.com/electronics/digital-paper-notepads/dpts1#product_details_default) ou le [Onyx BOOX Max](https://onyxboox.com/boox_max),
-   ou encore une tablette rootable qui roule le plus de logiciel libre possible
+ * <del>une liseuse 13" comme le [Sony DPT-S1](https://www.sony.com/electronics/digital-paper-notepads/dpts1#product_details_default) ou le [Onyx BOOX Max](https://onyxboox.com/boox_max),
+   ou encore une tablette rootable qui roule le plus de logiciel libre
+   possible</del> - voir [[hardware/tablet]]
  * des longues vacances au costa rica, dans le charlevoix ou à une autre place pas rapport
  * un [[hardware/radio/FmTransmitter]]
  * un "portable image scanner" comme le [SVP 4500](http://www.svp-tech.com/ps4400/ps4400.html) ou le Wolverine
    Data pass
  * un transceiver générique, e.g. le [hack RF](https://greatscottgadgets.com/hackrf/), esp. avec le [portapack](https://sharebrained.myshopify.com/products/portapack-for-hackrf-one)
  * un [cours de premier de cordée](http://www.passemontagne.com/fr/cours.html)
- * un appareil photo digital reflex de qualité... voir [[hardware/camera]]
+ * <del>un appareil photo digital reflex de qualité...</del> voir [[hardware/camera]]
  * une autre liste de [wishlist](https://lib3.net/bookie/anarcat/recent/wishlist)
 
 Voir aussi [[hardware]] pour le matériel que j'ai déjà...

word wrap
diff --git a/wishlist.mdwn b/wishlist.mdwn
index 824ca9f3..dca8ac28 100644
--- a/wishlist.mdwn
+++ b/wishlist.mdwn
@@ -62,9 +62,8 @@ Voici des choses que vous pouvez m'acheter si vous êtes le Père Nowel (yeah ri
    ou encore une tablette rootable qui roule le plus de logiciel libre possible
  * des longues vacances au costa rica, dans le charlevoix ou à une autre place pas rapport
  * un [[hardware/radio/FmTransmitter]]
- * un "portable image scanner" comme
-   le [SVP 4500](http://www.svp-tech.com/ps4400/ps4400.html) ou le
-   Wolverine Data pass
+ * un "portable image scanner" comme le [SVP 4500](http://www.svp-tech.com/ps4400/ps4400.html) ou le Wolverine
+   Data pass
  * un transceiver générique, e.g. le [hack RF](https://greatscottgadgets.com/hackrf/), esp. avec le [portapack](https://sharebrained.myshopify.com/products/portapack-for-hackrf-one)
  * un [cours de premier de cordée](http://www.passemontagne.com/fr/cours.html)
  * un appareil photo digital reflex de qualité... voir [[hardware/camera]]

move the freewrite in with the tablets
diff --git a/hardware/tablet.mdwn b/hardware/tablet.mdwn
index 795b0854..449ee87e 100644
--- a/hardware/tablet.mdwn
+++ b/hardware/tablet.mdwn
@@ -204,6 +204,17 @@ Downsides:
    RAM and used a i.MX508 SOC with a ARM Cortex-A8 CPU 1GHz
  * no backlight?
 
+Freewrite
+---------
+
+Le [freewrite](https://astrohaus.com/) pourrait être une façon intéressante de me forcer à
+écrire. Amener seulement ça dans un chalet dans le bois pour une
+semaine. Mais ça coûte *vraiment* cher, probablement à cause de
+l'écran "E ink" (550$USD).
+
+C'est aussi une machine beaucoup plus restreinte (délibérément) qu'une
+tablette générique.
+
 Tablets
 =======
 
diff --git a/wishlist.mdwn b/wishlist.mdwn
index ece6bf85..824ca9f3 100644
--- a/wishlist.mdwn
+++ b/wishlist.mdwn
@@ -68,7 +68,6 @@ Voici des choses que vous pouvez m'acheter si vous êtes le Père Nowel (yeah ri
  * un transceiver générique, e.g. le [hack RF](https://greatscottgadgets.com/hackrf/), esp. avec le [portapack](https://sharebrained.myshopify.com/products/portapack-for-hackrf-one)
  * un [cours de premier de cordée](http://www.passemontagne.com/fr/cours.html)
  * un appareil photo digital reflex de qualité... voir [[hardware/camera]]
- * le [freewrite](https://astrohaus.com/)
  * une autre liste de [wishlist](https://lib3.net/bookie/anarcat/recent/wishlist)
 
 Voir aussi [[hardware]] pour le matériel que j'ai déjà...

abuse the main title
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index fbfc6543..1539eb19 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -2,8 +2,6 @@
 
 [[!toc levels=2]]
 
-# Introduction #
-
 Le projet "Calendes" est un projet pour développer mes talents de
 photographe mais aussi une façon de me familiariser avec ma première
 caméra digitale à objectifs interchangeables.

headings
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index fb002151..5d57b888 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -26,8 +26,11 @@ jour seulement jusqu'à 2012.
 La page [[hardware/camera]] a également de l'information sur le
 matériel utilisé et le système de stockage basé sur Git-annex.
 
-Projet de calendrier
-====================
+Projets
+=======
+
+Calendrier
+----------
 
 J'ai fait un projet élaboré de calendrier regroupant mes meilleures
 photo de l'année, incluant montage et impression, voir

fix link
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index 88b72eae..fb002151 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -29,4 +29,6 @@ matériel utilisé et le système de stockage basé sur Git-annex.
 Projet de calendrier
 ====================
 
-Voir [[calendrier2019]].
+J'ai fait un projet élaboré de calendrier regroupant mes meilleures
+photo de l'année, incluant montage et impression, voir
+[[calendrier-2019]].

make headings visible in collapse mode in emacs, fix typo
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 13258fa7..fbfc6543 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -1,9 +1,8 @@
 [[!meta title="Projet Calendes 2019"]]
 
-[[!toc]]
+[[!toc levels=2]]
 
-Introduction
-============
+# Introduction #
 
 Le projet "Calendes" est un projet pour développer mes talents de
 photographe mais aussi une façon de me familiariser avec ma première
@@ -23,8 +22,7 @@ sur demande.)
 
 Les détails sur mes outils de travail sont dans la page [camera](/hardware/camera/).
 
-Note sur le nom
----------------
+## Note sur le nom ##
 
 Les *calendes* (en latin archaïque : *kǎlendāī*, *-āsōm* ; en latin
 classique : *cǎlendae*, *-ārum*) étaient le premier jour de chaque
@@ -37,8 +35,7 @@ mois suivant et les débiteurs devaient payer leurs dettes inscrites
 dans les calendaria, les livres de comptes, à l'origine du mot
 calendrier. [...]
 
-Héritage linguistique
----------------------
+### Héritage linguistique ###
 
 Ce mot est à l'origine de plusieurs termes et expressions utilisés
 en français.
@@ -60,8 +57,7 @@ semble fixée mais qui en fin de compte n'aura jamais lieu.
 
 > *— [Wikipedia](https://fr.wikipedia.org/wiki/Calendes)*
 
-Évènements
-==========
+# Évènements #
 
 Un calendrier, c'est des petites boîtes en colonnes avec des chiffres
 dedans qui montre l'arrangement des jours dans les semaines et mois de
@@ -72,8 +68,7 @@ traditionnels. Les gens savent généralement qu'ils sont là et de toute
 façon cela varie selon le milieu de travail ou d'éducation. À la
 place, on célèbre différents évènements importants ou farfelus.
 
-Fêtes officielles, selon Koumbit
---------------------------------
+## Fêtes officielles, selon Koumbit ##
 
  * 1er janvier: [jour de l'an][Jour de l'an]
  * 8 mars: [Journée internationale des femmes][Fête des femmes]
@@ -87,8 +82,7 @@ Fêtes officielles, selon Koumbit
  * 14 octobre: [Action de grâce][]
  * 25 décembre: [Noël][]
 
-Alternatives aux fêtes traditionnelles
---------------------------------------
+## Alternatives aux fêtes traditionnelles ##
 
 Pour sortir du carcan des fêtes traditionnelles et célébrer plutôt
 l'absence de dieu et d'autres valeurs, on cherche des alternatives.
@@ -105,8 +99,7 @@ l'absence de dieu et d'autres valeurs, on cherche des alternatives.
  * [Action de grâce][action de grâce]: voir [Columbus day][], plus bas
  * [Fête des patriotes][Journée nationale des Patriotes]: [Towel day][] (25 mai), plus bas
 
-Autres fêtes intéressantes
---------------------------
+## Autres fêtes intéressantes ##
 
  * 1er janvier: [Indépendance d'Haïti][]
  * 21 janvier: [MLK day][MLK] (troisième lundi de janvier)
@@ -143,8 +136,7 @@ Autres fêtes intéressantes
  * 26 décembre - 1er janvier: [Kwanzaa][] (Héritage, unité et culture
    africaine)
 
-Autres idées
-------------
+## Autres idées ##
 
  * autres fêtes religieuses, selon le rapport annuel de [Projet
    Genèse](http://genese.qc.ca/):
@@ -178,8 +170,7 @@ Autres idées
 [changement d'heure]: https://en.wikipedia.org/wiki/Daylight_saving_time_by_country
 [premier avril]: https://fr.wikipedia.org/wiki/Poisson_d%27avril
 
-Fêtes exclues
--------------
+## Fêtes exclues ##
 
 Ces fêtes sont exclues d'offices parce que nationalistes ou célébrant
 des choses qu'on ne veut pas célébrer.
@@ -196,8 +187,7 @@ des choses qu'on ne veut pas célébrer.
    d'octobre: [Indigenous Peoples' Day][], 9 aout: [International
    day of the world's indigenous people][])
 
-Journées choisies
------------------
+## Journées choisies ##
 
 On se limite à 4 jours identifiés par mois et/ou un par semaine.
 
@@ -328,8 +318,7 @@ MANQUANT: évènements astronomiques, voir ci-bas.
 [Yule]: https://en.wikipedia.org/wiki/Yule
 [Équinoxe]: https://fr.wikipedia.org/wiki/%C3%89quinoxe
 
-Astronomie
-----------
+## Astronomie ##
 
 Obtenir des informations significatives pour l'observation céleste est
 plus difficile qu'on peut le croire. Même pour obtenir les phases de
@@ -463,8 +452,7 @@ Sources:
 
 [seasky-list]: http://www.seasky.org/astronomy/astronomy-calendar-2019.html
 
-Montage
-=======
+# Montage #
 
 Le format de base est basé sur les calendriers qu'on peut produire à
 la pharmacie du coin. Deux pages "US légal" (8.5x11") reliée par une
@@ -494,8 +482,7 @@ Les outils suivant ont été considérés pour monter les photos:
 I [asked the question on SE](https://softwarerecs.stackexchange.com/questions/52778/printing-a-monthly-calendar-with-custom-pictures-and-events) and documented the known [wallcalendar
 alternatives](https://alternativeto.net/software/wallcalendar/) on alternativeto.net.
 
-Wallcalendar
-------------
+## Wallcalendar ##
 
 J'ai fait plus de travail sur le module LaTeX. L'auteur a fourni des
 correctifs qui font le gros du travail et j'ai pu établir un premier
@@ -608,8 +595,7 @@ des recommendations de Google Fonts:
 Je suis resté avec la populaire fonte [Roboto](https://www.fontsquirrel.com/fonts/roboto) car elle est plus
 comprimée que Raleway et le LaTeX est bien formatté.
 
-Impresion
-=========
+# Impression #
 
 If I don't edit it myself, I can just use the Jean-Coutu template or
 whatever.
@@ -706,8 +692,7 @@ Total des coûts:
  * Coupe et montage: 7.80$ total
  * Grand total: ~95-155$
 
-Projets similaires
-==================
+# Projets similaires #
 
 Ce projet a été inspiré par d'autres projets [DIY](https://fr.wikipedia.org/wiki/Do_it_yourself), en particulier
 une tradition de faire des calendriers de photos de nature ou de

fix broken links
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 326e2866..13258fa7 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -269,7 +269,7 @@ MANQUANT: évènements astronomiques, voir ci-bas.
 [Guy Fawkes Night]: https://en.wikipedia.org/wiki/Guy_Fawkes_Night
 [Halloween]: https://fr.wikipedia.org/wiki/Halloween
 [Human Rights Day]: https://en.wikipedia.org/wiki/Human_Rights_Day
-[Independence day]: https://en.wikipedia.org/wiki/Independence_Day_(United_States
+[Independence day]: https://en.wikipedia.org/wiki/Independence_Day_(United_States)
 [Indigenous Peoples' Day]: https://en.wikipedia.org/wiki/Indigenous_Peoples%27_Day
 [Indépendance d'Haïti]: https://fr.wikipedia.org/wiki/Acte_de_l%27Ind%C3%A9pendance_de_la_R%C3%A9publique_d%27Ha%C3%AFti
 [International Day of Peace]: https://en.wikipedia.org/wiki/International_Day_of_Peace
@@ -317,7 +317,7 @@ MANQUANT: évènements astronomiques, voir ci-bas.
 [Sergei Rachmaninoff]: https://en.wikipedia.org/wiki/Sergei_Rachmaninoff
 [Solstice]: https://fr.wikipedia.org/wiki/Solstice
 [St-Jean-Baptiste]: https://fr.wikipedia.org/wiki/F%C3%AAte_nationale_du_Qu%C3%A9bec
-[Thanksgiving]: https://en.wikipedia.org/wiki/Thanksgiving_(United_States
+[Thanksgiving]: https://en.wikipedia.org/wiki/Thanksgiving_(United_States)
 [Towel Day]: https://en.wikipedia.org/wiki/Towel_Day
 [Vendredi saint]: https://fr.wikipedia.org/wiki/Vendredi_saint
 [Veterans day]: https://en.wikipedia.org/wiki/Veterans_Day
@@ -342,7 +342,7 @@ toutes les phases pertinentes pour une année:
 
 [kstars]: https://edu.kde.org/kstars/
 
-Alors j'ai écrit un programme ([moonphases](https://gitlab.com/anarcat/undertime/blob/master/moonphases.py)) avec [PyEphem][] pour
+Alors j'ai écrit un programme ([moonphases](https://gitlab.com/anarcat/undertime/blob/master/moonphases)) avec [PyEphem][] pour
 sortir les dates précises des phases lunaires. Ça m'a pris une soirée,
 ce qui montre comment il est facile d'utiliser cette librairie, mais
 aussi comment chaque aspect est complexe. Par exemple, PyEphem ne m'a
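To make concrete why an ephemeris library like PyEphem is worth the trouble, here is what the *naive* alternative looks like: extrapolating phases from the mean synodic month (29.530589 days) and a known reference new moon. This is an illustrative approximation only — it is not how the moonphases script works, and the mean-cycle estimate can drift by several hours from true phase times, which is exactly the precision problem PyEphem solves:

```python
from datetime import datetime, timedelta

# Mean synodic month (new moon to new moon), in days.
SYNODIC_DAYS = 29.530588853
# A commonly used reference new moon: 2000-01-06 18:14 UTC.
EPOCH = datetime(2000, 1, 6, 18, 14)

def moon_phase_fraction(when: datetime) -> float:
    """Phase as a fraction of the mean synodic cycle:
    0.0 = new moon, 0.5 = full moon (approximation only)."""
    days = (when - EPOCH).total_seconds() / 86400.0
    return (days / SYNODIC_DAYS) % 1.0

def next_full_moon(after: datetime) -> datetime:
    """Approximate the next full moon after `after` by jumping
    to the next 0.5 point of the mean cycle."""
    frac = moon_phase_fraction(after)
    delta = ((0.5 - frac) % 1.0) * SYNODIC_DAYS
    return after + timedelta(days=delta)

print(next_full_moon(datetime(2019, 1, 1)))
```

For January 2019 this lands on the morning of the 21st, within hours of the true full moon (2019-01-21 05:16 UTC), but real lunar orbits are irregular enough that individual phases can be off by more than half a day with this method.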

credit inspiration
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 0cbd69aa..326e2866 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -705,3 +705,14 @@ Total des coûts:
  * 20 calendriers: 87-147$
  * Coupe et montage: 7.80$ total
  * Grand total: ~95-155$
+
+Projets similaires
+==================
+
+Ce projet a été inspiré par d'autres projets [DIY](https://fr.wikipedia.org/wiki/Do_it_yourself), en particulier
+une tradition de faire des calendriers de photos de nature ou de
+famille dans deux parties différentes de ma famille (chapeau à vous)
+ainsi que ces groupes plus militants:
+
+ * [Certain days: freedom for political prisoners](https://www.certaindays.org/)
+ * Agenda du [DIRA](https://bibliothequedira.wordpress.com/)

hopefully final tweaks by jake
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 5163f89d..4257c6eb 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T15:11:39-0500"]]
+[[!meta updated="2018-12-07T15:33:10-0500"]]
 
 [[!toc levels=2]]
 
@@ -19,18 +19,16 @@ should help readers pick the right solution for their needs.
 The problem with large files
 ----------------------------
 
-As readers probably know, Linus Torvalds wrote Git to manage the
-history of the kernel source code, which is a large collection of
-small files.  Every file is a "blob" in Git's object store, addressed
-by its cryptographic hash. A new version of that file will store a new
-blob in Git's history, with no deduplication between the two
-versions. The pack file format can store binary deltas between similar
-objects, but only over a certain window of objects: if many similar
+As readers probably know, Linus Torvalds wrote Git to manage the history
+of the kernel source code, which is a large collection of small files.
+Every file is a "blob" in Git's object store, addressed by its
+cryptographic hash. A new version of that file will store a new blob in
+Git's history, with no deduplication between the two versions. The pack
+file format can store binary deltas between similar objects, but if many
 objects of similar size change in a repository, that algorithm might
-fail to properly deduplicate. In practice, large binary files (say JPG
-images) have this irritating tendency of changing completely when even
-the smallest change is made, which makes delta compression useless
-anyways.
+fail to properly deduplicate. In practice, large binary files (say JPEG
+images) have an irritating tendency of changing completely when even the
+smallest change is made, which makes delta compression useless.
 
 There have been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
@@ -179,8 +177,8 @@ committing the symbolic links into the Git repository. This design
 turned out to be a little confusing to users, including myself; I have
 managed to shoot myself in the foot more than once using this system.
 
-Since then, git-annex has adopted a different v7 mode that is also based on
-smudge/clean filters, which it called "[unlocked files][]". Like Git
+Since then, git-annex has adopted a different v7 mode that is also based
+on smudge/clean filters, which it called "[unlocked files][]". Like Git
 LFS, unlocked files will double disk space usage by default. However it
 *is* possible to reduce disk space usage by using "thin mode" which uses
 hard links between the internal git-annex disk storage and the work
@@ -253,15 +251,14 @@ ultimately determine which solution is the right one for you.
 
 Ironically, after thorough evaluation of large-file solutions for the
 Debian security tracker, I ended up proposing to rewrite history and
-[split the file by year][] which improved all performance markers by
-at least an order of magnitude. As it turns out, keeping history is
+[split the file by year][] which improved all performance markers by at
+least an order of magnitude. As it turns out, keeping history is
 critical for the security team so any solution that moves large files
 outside of the Git repository is not acceptable to them. Therefore,
 before adding large files into Git, you might want to think about
 organizing your content correctly first. But if large files are
 unavoidable, the Git LFS and git-annex projects allow users to keep
-using their favorite versionning software without breaking the bank on
-hardware.
+using most of their current workflow.
 
   [commit graph work]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt
   [Git LFS]: https://git-lfs.github.com/
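The content-addressing described in that paragraph — every file version is a blob named by its cryptographic hash — can be demonstrated in a few lines. For SHA-1 repositories, Git hashes a `blob <size>\0` header followed by the raw content, so a one-byte change yields a completely unrelated object id and therefore a whole new blob (unless pack-file delta compression later catches it):

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Compute the object id Git assigns to a blob (SHA-1 repos):
    SHA-1 over a "blob <size>\\0" header plus the raw content."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# The well-known id of a blob containing "hello\n",
# same as `echo hello | git hash-object --stdin`:
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a

# Flip one byte: the id shares nothing with the original,
# which is why small edits to large binaries store full new blobs.
print(git_blob_id(b"Hello\n"))
```

This is also why JPEG-style files hurt: recompression changes nearly every byte, so even the pack format's binary deltas find nothing to share between versions.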

reformat as a standalone page, include the gallery introduction
diff --git a/communication/photo/calendrier-2019.mdwn b/communication/photo/calendrier-2019.mdwn
index 193f1376..0cbd69aa 100644
--- a/communication/photo/calendrier-2019.mdwn
+++ b/communication/photo/calendrier-2019.mdwn
@@ -1,16 +1,67 @@
-J'imprime un calendrier pour 2019. Deux pages "US légal" (8.5x11")
-reliée par une spirale, avec une photo en haut et un mois de
-calendrier en bas, avec de l'espace pour prendre des notes et des
-évènements pertinents.
+[[!meta title="Projet Calendes 2019"]]
 
-La photo de chaque mois est prise parmi mes meilleurs clichés de
-l'année. C'est une façon de me forcer à améliorer mes photos et
-développer mes compétences. C'est aussi une excuse pour jouer avec ma
-nouvelle caméra et plein d'autres gadgets. La sélection des photos se
-fait dans une galerie privée (demandez-moi les accès si vous voulez).
+[[!toc]]
+
+Introduction
+============
+
+Le projet "Calendes" est un projet pour développer mes talents de
+photographe mais aussi une façon de me familiariser avec ma première
+caméra digitale à objectifs interchangeables.
+
+Le but est d'imprimer un calendrier de photos pour l'année 2019, avec
+une photo par mois. Chaque mois de 2018, j'ai sélectionné mes photos
+les plus notables prises pendant le mois et je les publie sur une
+galerie de photos privée. À la fin de l'année, les meilleures photos
+(originalement: les plus populaires, mais je n'ai pas eu beaucoup de
+feedback) sont réunies dans un calendrier.
+
+Certaines photos n'ont pas été développées ("[straight out of
+camera](https://www.flickr.com/groups/sooc/)") et pourront l'être ultérieurement. (D'ailleurs, pour les
+fanatiques du développement, les "négatifs" ("raw") sont disponibles
+sur demande.)
+
+Les détails sur mes outils de travail sont dans la page [camera](/hardware/camera/).
+
+Note sur le nom
+---------------
+
+Les *calendes* (en latin archaïque : *kǎlendāī*, *-āsōm* ; en latin
+classique : *cǎlendae*, *-ārum*) étaient le premier jour de chaque
+mois dans le calendrier romain, celui de la nouvelle lune quand le
+calendrier suivait un cycle lunaire (années de Romulus et de Numa
+Pompilius).
+
+Ce jour-là, les pontifes annonçaient la date des fêtes mobiles du
+mois suivant et les débiteurs devaient payer leurs dettes inscrites
+dans les calendaria, les livres de comptes, à l'origine du mot
+calendrier. [...]
+
+Héritage linguistique
+---------------------
+
+Ce mot est à l'origine de plusieurs termes et expressions utilisés
+en français.
+
+Le calendrier dérive de l'adjectif calendarium (« calendaire »), qui
+désignait un registre de comptes (que l'on apurait le premier du
+mois ; le calendarium était proprement le « registre des échéances
+») et, partant, le calendrier est, originellement, le registre sur
+lequel l'on note les événements liés à une date précise du mois. Le
+mot français provient directement de l'adjectif latin, avec un sens
+plus général.
+
+« Renvoyer aux calendes grecques » (Ad kalendas graecas) signifie «
+repousser indéfiniment la réalisation d'une action ». En effet, les
+Grecs n'ayant jamais eu de calendes, l'expression fait référence à
+une date inconnue. Les calendes grecques, tout comme la
+[Saint-Glinglin](https://fr.wikipedia.org/wiki/Saint-Glinglin), évoquent de manière ironique une date qui
+semble fixée mais qui en fin de compte n'aura jamais lieu.
+
+> *— [Wikipedia](https://fr.wikipedia.org/wiki/Calendes)*
 
 Évènements
-----------
+==========
 
 Un calendrier, c'est des petites boîtes en colonnes avec des chiffres
 dedans qui montre l'arrangement des jours dans les semaines et mois de
@@ -21,7 +72,8 @@ traditionnels. Les gens savent généralement qu'ils sont là et de toute
 façon cela varie selon le milieu de travail ou d'éducation. À la
 place, on célèbre différents évènements importants ou farfelus.
 
-### Fêtes officielles, selon Koumbit
+Fêtes officielles, selon Koumbit
+--------------------------------
 
  * 1er janvier: [jour de l'an][Jour de l'an]
  * 8 mars: [Journée internationale des femmes][Fête des femmes]
@@ -35,7 +87,8 @@ place, on célèbre différents évènements importants ou farfelus.
  * 14 octobre: [Action de grâce][]
  * 25 décembre: [Noël][]
 
-### Alternatives aux fêtes traditionnelles
+Alternatives aux fêtes traditionnelles
+--------------------------------------
 
 Pour sortir du carcan des fêtes traditionnelles et célébrer plutôt
 l'absence de dieu et d'autres valeurs, on cherche des alternatives.
@@ -52,7 +105,8 @@ l'absence de dieu et d'autres valeurs, on cherche des alternatives.
  * [Action de grâce][action de grâce]: voir [Columbus day][], plus bas
  * [Fête des patriotes][Journée nationale des Patriotes]: [Towel day][] (25 mai), plus bas
 
-### Autres fêtes intéressantes
+Autres fêtes intéressantes
+--------------------------
 
  * 1er janvier: [Indépendance d'Haïti][]
  * 21 janvier: [MLK day][MLK] (troisième lundi de janvier)
@@ -89,7 +143,8 @@ l'absence de dieu et d'autres valeurs, on cherche des alternatives.
  * 26 décembre - 1er janvier: [Kwanzaa][] (Héritage, unité et culture
    africaine)
 
-### Autres idées
+Autres idées
+------------
 
  * autres fêtes religieuses, selon le rapport annuel de [Projet
    Genèse](http://genese.qc.ca/):
@@ -123,7 +178,8 @@ l'absence de dieu et d'autres valeurs, on cherche des alternatives.
 [changement d'heure]: https://en.wikipedia.org/wiki/Daylight_saving_time_by_country
 [premier avril]: https://fr.wikipedia.org/wiki/Poisson_d%27avril
 
-### Fêtes exclues
+Fêtes exclues
+-------------
 
 Ces fêtes sont exclues d'offices parce que nationalistes ou célébrant
 des choses qu'on ne veut pas célébrer.
@@ -407,8 +463,15 @@ Sources:
 
 [seasky-list]: http://www.seasky.org/astronomy/astronomy-calendar-2019.html
 
-How to edit
------------
+Montage
+=======
+
+Le format de base s'inspire des calendriers qu'on peut produire à
+la pharmacie du coin. Deux pages "US légal" (8.5x11") reliées par une
+spirale, avec une photo en haut et un mois de calendrier en bas, avec
+de l'espace pour prendre des notes et des évènements pertinents.
+
+Les outils suivants ont été considérés pour monter les photos:
 
  * [wallcalendar](https://github.com/profound-labs/wallcalendar) - patron Latex, superbe mais n'est pas
    11x17/tabloid (A3) donc trop petit, voir [bogue #4](https://github.com/profound-labs/wallcalendar/issues/4)
@@ -431,7 +494,8 @@ How to edit
 I [asked the question on SE](https://softwarerecs.stackexchange.com/questions/52778/printing-a-monthly-calendar-with-custom-pictures-and-events) and documented the known [wallcalendar
 alternatives](https://alternativeto.net/software/wallcalendar/) on alternativeto.net.
 
-### Wallcalendar
+Wallcalendar
+------------
 
 J'ai fait plus de travail sur le module LaTeX. L'auteur a fourni des
 correctifs qui font le gros du travail et j'ai pu établir un premier
@@ -544,8 +608,8 @@ des recommendations de Google Fonts:
 Je suis resté avec la populaire fonte [Roboto](https://www.fontsquirrel.com/fonts/roboto) car elle est plus
 comprimée que Raleway et le LaTeX est bien formatté.
 
-How to print
-------------
+Impresion
+=========
 
 If I don't edit it myself, I can just use the Jean-Coutu template or
 whatever.

move calendar to a subpage
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index c4427efa..88b72eae 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -29,646 +29,4 @@ matériel utilisé et le système de stockage basé sur Git-annex.
 Projet de calendrier
 ====================
 
-J'imprime un calendrier pour 2019. Deux pages "US légal" (8.5x11")
-reliée par une spirale, avec une photo en haut et un mois de
-calendrier en bas, avec de l'espace pour prendre des notes et des
-évènements pertinents.
-
-La photo de chaque mois est prise parmi mes meilleurs clichés de
-l'année. C'est une façon de me forcer à améliorer mes photos et
-développer mes compétences. C'est aussi une excuse pour jouer avec ma
-nouvelle caméra et plein d'autres gadgets. La sélection des photos se
-fait dans une galerie privée (demandez-moi les accès si vous voulez).
-
-Évènements
-----------
-
-Un calendrier, c'est des petites boîtes en colonnes avec des chiffres
-dedans qui montre l'arrangement des jours dans les semaines et mois de
-l'année. Mais c'est aussi des évènements ponctuels.
-
-J'ai fait le choix de ne pas refléter les congés fériés et religieux
-traditionnels. Les gens savent généralement qu'ils sont là et de toute
-façon cela varie selon le milieu de travail ou d'éducation. À la
-place, on célèbre différents évènements importants ou farfelus.
-
-### Fêtes officielles, selon Koumbit
-
- * 1er janvier: [jour de l'an][Jour de l'an]
- * 8 mars: [Journée internationale des femmes][Fête des femmes]
- * 19 avril: [Vendredi saint][]
- * 1er mai: [fête des travailleurs][Fête des travailleurs]
- * 20 mai: [Journée nationale des Patriotes][] (le lundi qui précède
-   le 25 mai)
- * 24 juin: [St-Jean-Baptiste][]
- * 1er juillet: [Confédération][]
- * 2 septembre: [fête du travail][Fête du travail]
- * 14 octobre: [Action de grâce][]
- * 25 décembre: [Noël][]
-
-### Alternatives aux fêtes traditionnelles
-
-Pour sortir du carcan des fêtes traditionnelles et célébrer plutôt
-l'absence de dieu et d'autres valeurs, on cherche des alternatives.
-
- * Noël:
-   * 22 décembre: [Solstice][Solstice]/[Yule][]
-   * 23 décembre: [Festivus][]
-   * 26 décembre: [Boxing day][]
-   * 26 décembre: [Kwanzaa][]
- * [Vendredi saint][] (19 avril 2019) / [Pâques][] (21 avril 2019):
-   * [420][] (20 avril)
-   * [Record store day][] (20 avril)
-   * [Jour de la terre][Jour de la terre] (22 april)
- * [Action de grâce][action de grâce]: voir [Columbus day][], plus bas
- * [Fête des patriotes][Journée nationale des Patriotes]: [Towel day][] (25 mai), plus bas
-
-### Autres fêtes intéressantes
-
- * 1er janvier: [Indépendance d'Haïti][]
- * 21 janvier: [MLK day][MLK] (troisième lundi de janvier)
- * 21 janvier: [National Hugging Day][]
- * 25 janvier: [Opposite Day][] ("Day where you do everything opposite")
- * 31 janvier: [National Gorilla Suit Day][]
- * 12 février: [Darwin day][]
- * 14 février: [Saint-Valentin][]
- * 14 mars: [Pi Day][]
- * 15 mars: [journée internationale contre la brutalité policière][JICBP]
- * 17 mars: [Saint-patrick][]
- * 20 mars: [Équinoxe][] (nuit = jour)
- * 4 avril: [420][]
- * 20 avril: [Record store day][] (3e samedi d'avril)
- * 22 avril: [Jour de la terre][] (voir aussi 5 juin)
- * 1er vendredi de mai: [No Pants Day][]
- * 2 mai: [national day of reason][]
- * 12 mai: [Fête des mères][]
- * 25 May: [Towel Day][] (en référence à feu Douglas Adams)
- * 5 juin: [jour de l'environnement][] (voir aussi 22 avril)
- * 16 juin: [Fête des pères][]
- * 21 juin: [Solstice][Solstice] d'été (jour le plus long), [Wold Humanist day][]
- * 22 juillet: [Pi Approximation Day][] or Pi day (14 mars, ci-haut)
- * 13 août: [International Lefthanders Day][]
- * 19 septembre: [International Talk Like a Pirate Day][]
- * 21 septembre: [International Day of Peace][]
- * 23 septembre: [Équinoxe][], [Human Rights Day][]
- * 31 octobre: [Halloween][]
- * 5 novembre: [Guy Fawkes Night][]
- * 28 novembre: [Thanksgiving][]
- * 29 novembre: [Buy Nothing Day][]
- * 14 décembre: [Monkey Day][]
- * 22 décembre: [Solstice][] d'hiver (jour le plus court)
- * 26 décembre - 1er janvier: [Kwanzaa][] (Héritage, unité et culture
-   africaine)
-
-### Autres idées
-
- * autres fêtes religieuses, selon le rapport annuel de [Projet
-   Genèse](http://genese.qc.ca/):
-   * islam
-     * [Ramadan][]: variable, du 6 mai au 4 juin 2019 ([Eid al-Fitr][])
-     * [Eid al-Adha][]: variable, 12 août 2019
-   * judaïsme:
-     * [Rosh Hashanah][]: variable, 1er octobre 2019
-     * [Yom Kippur][]: variable, 9 octobre 2019
-   * hindouisme, bouddhisme:
-     * [Diwali][]: variable, 27 octobre 2019
- * évènements astronomiques majeurs (voir plus bas)
- * [Friendship Day][]
- * [Nanomonestotse][]: préparé le troisième lundi d'octobre, célébré
-   le vendredi suivant
- * 31 octobre - 2 novembre: [day of the dead][]
- * [Poisson d'avril][April fool's day]... autres choses amusante le
-   [premier avril][]:
-   * 1868 – [Edmond Rostand][]
-   * 1873 – [Sergei Rachmaninoff][]
-   * 1908 – Naissance de [Abraham Maslow][]
-   * 1924 – [Royal Canadian Air Force][] formée
-   * 1976 – [Apple Inc.][] fondée
-   * 1999 – Création du [Nunavut][]
-   * 2004 – [Google][] lance [Gmail][]
-   * [Edible book day][]
-   * [Fossil fools day][]
- * [Autres évènements][], spécifiquement sur l'[Anarchisme][]
- * [Sysadmin/IT calendar](https://old.reddit.com/r/sysadmin/comments/9u43lt/a_calendar_of_sysadmin_it_related_events/)
-
-[changement d'heure]: https://en.wikipedia.org/wiki/Daylight_saving_time_by_country
-[premier avril]: https://fr.wikipedia.org/wiki/Poisson_d%27avril
-
-### Fêtes exclues
-
-Ces fêtes sont exclues d'office parce que nationalistes ou célébrant
-des choses qu'on ne veut pas célébrer.
-
- * 18 février: [Washington's birthday][]
- * 4 juillet: [Independence day][]
- * 14 juillet: [Jour de la bastille][]
- * 15 septembre: [Independence day (mexico)][Mexico]
- * 11 novembre: [Veterans day][] / [Jour du souvenir][]
- * 27 mai: [Memorial day][]
- * 20 mai 2019: [Victoria day][]
- * 14 octobre: [Columbus day][] (note: October 12, 1992 was "International
-   Day of Solidarity with Indigenous People"), deuxième lundi
-   d'octobre: [Indigenous Peoples' Day][], 9 août: [International
-   day of the world's indigenous people][])
-
-Journées choisies
------------------
-
-On se limite à 4 jours identifiés par mois et/ou un par semaine.
-
- * 1er janvier: [Jour de l'an][]
- * 21 janvier: [MLK day][MLK]
- * 14 février: [Saint-Valentin][]
- * 8 mars: [Fête des femmes][]
- * 11 mars: Début de l'heure avancée (on avance l'heure, deuxième
-   dimanche de mars)
- * 14 mars: [Journée de Pi][]
- * 15 mars: [Journée contre la brutalité policière][JICBP]
- * 20 mars: [Équinoxe][]
- * 1er avril: Naissance d'[Edmond Rostand][]
- * 20 avril: [Fête du pot][420] (au lieu du [Vendredi saint][])
- * 22 avril: [Jour de la terre][jour de la terre] (au lieu de [Pâques][])
- * 1er mai: [Fête des travailleurs][]
- * 12 mai: [Fête des mères][]
- * 25 mai: [Jour de la serviette][Towel Day] (au lieu de la fête des patriotes le 20 mai)
- * 16 juin: [Fête des pères][]
- * 21 juin: [Solstice][] d'été, jour le plus long (au lieu de la St-Jean)
- * 1er juillet: [Jour du déménagement][] (au lieu de la fête nationale)
- * 13 août: [Jour des gauchers][]
- * 2 septembre: [Fête du travail][]
- * 19 septembre: [Jour des pirates][]
- * 21 septembre: [Journée de la paix][]
- * 23 septembre: [Équinoxe][]
- * 14 octobre: [Jour des peuples autochtones][], au lieu de l'[action
-   de grâce][Action de grâce] ou [Colombus day][Columbus day]
- * 31 octobre: [Halloween][]
- * 3 novembre: Début de l'heure normale (on recule l'heure, premier
-   dimanche de novembre, 2:00)
- * 29 novembre: [Journée sans achat][] (au lieu de [Thanksgiving][])
- * 14 décembre: [Fête des singes][]
- * 22 décembre: [Solstice][Solstice] d'hiver, jour le plus court
- * 25 décembre: [Naissance de Newton][Newtonmas] (au lieu de [Noël][])
-
-MANQUANT: évènements astronomiques, voir ci-bas.
-
-[420]: https://en.wikipedia.org/wiki/420_(cannabis_culture)
-[Abraham Maslow]: https://en.wikipedia.org/wiki/Abraham_Maslow
-[Action de grâce]: https://fr.wikipedia.org/wiki/Action_de_gr%C3%A2ce_(Canada)
-[Anarchisme]: https://en.wikipedia.org/wiki/Portal:Anarchism/Anniversaries
-[Apple Inc.]: https://fr.wikipedia.org/wiki/Apple
-[April 1st]: https://en.wikipedia.org/wiki/April_1
-[April fool's day]: https://en.wikipedia.org/wiki/April_Fools%27_Day
-[Autres évènements]: https://en.wikipedia.org/wiki/Lists_of_holidays
-[Boxing day]: https://en.wikipedia.org/wiki/Boxing_Day
-[Buy Nothing Day]: https://en.wikipedia.org/wiki/Buy_Nothing_Day

(Diff truncated)
tone down conclusion
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 3e3f9028..5163f89d 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -253,12 +253,15 @@ ultimately determine which solution is the right one for you.
 
 Ironically, after thorough evaluation of large-file solutions for the
 Debian security tracker, I ended up proposing to rewrite history and
-[split the file by year][] which improved all performance markers by at
-least an order of magnitude. As it turns out, keeping history is
+[split the file by year][] which improved all performance markers by
+at least an order of magnitude. As it turns out, keeping history is
 critical for the security team so any solution that moves large files
 outside of the Git repository is not acceptable to them. Therefore,
 before adding large files into Git, you might want to think about
-organizing your content correctly first.
+organizing your content correctly first. But if large files are
+unavoidable, the Git LFS and git-annex projects allow users to keep
+using their favorite versioning software without breaking the bank on
+hardware.
 
   [commit graph work]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt
   [Git LFS]: https://git-lfs.github.com/

close to the end, tweaks from jake
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index a0d751a2..3e3f9028 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T15:00:00-0500"]]
+[[!meta updated="2018-12-07T15:11:39-0500"]]
 
 [[!toc levels=2]]
 
@@ -32,7 +32,7 @@ images) have this irritating tendency of changing completely when even
 the smallest change is made, which makes delta compression useless
 anyways.
 
-There has been different attempts at fixing this in the past. In 2006,
+There have been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
 duplication between the index and the pack files. Those changes were
 eventually reverted because, as Nicolas Pitre [put it][]: "*that extra
@@ -179,16 +179,14 @@ committing the symbolic links into the Git repository. This design
 turned out to be a little confusing to users, including myself; I have
 managed to shoot myself in the foot more than once using this system.
 
-Since then, git-annex has adopted a different v7 mode that is also
-based on smudge/clean filters, which it called "[unlocked files][]". Like
-Git LFS, unlocked files will double disk space usage by default. However it *is* possible
-to reduce disk space usage by using "thin mode" which uses hard links
-between the internal git-annex disk storage and the work tree. The
-downside is, of course, that changes are immediately performed on files,
-which means previous file versions are automatically discarded. This can
-lead to data loss if users are not careful.
-
-[unlocked files]: https://git-annex.branchable.com/tips/unlocked_files/
+Since then, git-annex has adopted a different v7 mode that is also based on
+smudge/clean filters, which it called "[unlocked files][]". Like Git
+LFS, unlocked files will double disk space usage by default. However it
+*is* possible to reduce disk space usage by using "thin mode" which uses
+hard links between the internal git-annex disk storage and the work
+tree. The downside is, of course, that changes are immediately performed
+on files, which means previous file versions are automatically
+discarded. This can lead to data loss if users are not careful.
 
 Furthermore, git-annex in v7 mode suffers from some of the performance
 problems affecting Git LFS, because both use the smudge/clean filters.
@@ -255,12 +253,12 @@ ultimately determine which solution is the right one for you.
 
 Ironically, after thorough evaluation of large-file solutions for the
 Debian security tracker, I ended up proposing to rewrite history and
-[split the file by year][] which improved all performance markers by
-at least an order of magnitude. As it turns out, keeping history is
+[split the file by year][] which improved all performance markers by at
+least an order of magnitude. As it turns out, keeping history is
 critical for the security team so any solution that moves large files
-outside of the Git repository is not acceptable for that
-team. Therefore, before adding large files into Git, you might want to
-think about organizing your content correctly first.
+outside of the Git repository is not acceptable to them. Therefore,
+before adding large files into Git, you might want to think about
+organizing your content correctly first.
 
   [commit graph work]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt
   [Git LFS]: https://git-lfs.github.com/
@@ -288,6 +286,7 @@ think about organizing your content correctly first.
   [custom transfer protocols]: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md
   [covered the project]: https://lwn.net/Articles/419241/
   [direct mode]: http://git-annex.branchable.com/direct_mode/
+  [unlocked files]: https://git-annex.branchable.com/tips/unlocked_files/
   [ideas]: http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/
   [large number]: http://git-annex.branchable.com/special_remotes/
   [special remote protocol]: http://git-annex.branchable.com/special_remotes/external/

yolo
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 7b0bf080..a0d751a2 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -255,12 +255,12 @@ ultimately determine which solution is the right one for you.
 
 Ironically, after thorough evaluation of large-file solutions for the
 Debian security tracker, I ended up proposing to rewrite history and
-[split the file by year][] which improved all performance markers by at
-least an order of magnitude. As it turns out, keeping history is
+[split the file by year][] which improved all performance markers by
+at least an order of magnitude. As it turns out, keeping history is
 critical for the security team so any solution that moves large files
-outside of the Git repository is not acceptable. Therefore, before
-adding large files into Git, you might want to think about organizing
-your content correctly first.
+outside of the Git repository is not acceptable for that
+team. Therefore, before adding large files into Git, you might want to
+think about organizing your content correctly first.
 
   [commit graph work]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt
   [Git LFS]: https://git-lfs.github.com/

expand size problems following discussion with jon
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 390c3314..7b0bf080 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -19,13 +19,18 @@ should help readers pick the right solution for their needs.
 The problem with large files
 ----------------------------
 
-As readers probably know, Linus Torvalds wrote Git to manage the history
-of the kernel source code, which is a large collection of small files.
-Every file is a "blob" in Git's object store, addressed by its
-cryptographic hash. A new version of that file will store a new blob in
-Git's history, with no deduplication between the two versions. The pack
-file format does offer some binary compression over the repository, but
-in practice this does not offer much help for large {binary} files.
+As readers probably know, Linus Torvalds wrote Git to manage the
+history of the kernel source code, which is a large collection of
+small files.  Every file is a "blob" in Git's object store, addressed
+by its cryptographic hash. A new version of that file will store a new
+blob in Git's history, with no deduplication between the two
+versions. The pack file format can store binary deltas between similar
+objects, but only over a certain window of objects: if many similar
+objects of similar size change in a repository, that algorithm might
+fail to properly deduplicate. In practice, large binary files (say JPG
+images) have this irritating tendency of changing completely when even
+the smallest change is made, which makes delta compression useless
+anyways.
 
 There has been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object

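The hunk above explains why delta compression rarely rescues binaries: every version of a file is stored as a whole new blob, addressed by the hash of its content. A blob id is just the SHA-1 of a `blob <size>` header, a NUL byte, and the content, so the effect can be reproduced with coreutils alone; the two inputs below differ by a single byte:

```shell
# A git blob id is sha1("blob <size-in-bytes>\0" + content).
# "test content\n" is 13 bytes; this matches `echo 'test content' |
# git hash-object --stdin`. One changed byte yields an unrelated id,
# so the new version becomes an entirely new object.
printf 'blob 13\000test content\n' | sha1sum  # d670460b4b4aece5915caf5c68d12f560a9fe3e4
printf 'blob 13\000Test content\n' | sha1sum  # completely different hash
```

Delta compression inside pack files can still deduplicate *similar* blobs after the fact, but only when the packing window happens to compare them, which is the limitation the hunk describes.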
more fixes from jake
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index d4847734..390c3314 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T14:45:17-0500"]]
+[[!meta updated="2018-12-07T15:00:00-0500"]]
 
 [[!toc levels=2]]
 
@@ -25,7 +25,7 @@ Every file is a "blob" in Git's object store, addressed by its
 cryptographic hash. A new version of that file will store a new blob in
 Git's history, with no deduplication between the two versions. The pack
 file format does offer some binary compression over the repository, but
-in practice this does not offer much help for {binary} large files.
+in practice this does not offer much help for large {binary} files.
 
 There has been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
@@ -134,19 +134,18 @@ were previously hidden in GitHub's cost structure.
 While the actual server-side implementation used by GitHub is closed
 source, there is a [test server][] provided as an example
 implementation. Other Git hosting platforms have also [implemented][]
-support for the LFS [API][], including GitLab, Gitea, and BitBucket,
-while git-fat and GitMedia never got that privilege. LFS does support hosting
-large files on a server other than the central one — a project could run
-its own LFS server, for example — but this will involve a different set
-of credentials, bringing back the difficult user onboarding that
-affected git-fat and GitMedia.
+support for the LFS [API][], including GitLab, Gitea, and BitBucket;
+that level of adoption is something that git-fat and GitMedia never
+achieved. LFS does support hosting large files on a server other than
+the central one — a project could run its own LFS server, for example —
+but this will involve a different set of credentials, bringing back the
+difficult user onboarding that affected git-fat and GitMedia.
 
 Another limitation is that LFS only supports pushing and pulling files
 over HTTP(S) — no SSH transfers. LFS uses some [tricks][] to bypass HTTP
-basic authentication, fortunately. This might also change
-in the future as there are proposals to add [SSH support][], resumable
-uploads through the [tus.io protocol][], and other [custom transfer
-protocols][].
+basic authentication, fortunately. This also might change in the future
+as there are proposals to add [SSH support][], resumable uploads through
+the [tus.io protocol][], and other [custom transfer protocols][].
 
 Finally, LFS can be slow. Every file added to LFS takes up double the
 space on the local filesystem as it is copied to the `.git/lfs/objects`

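For reference, the small text file these hunks keep mentioning is an LFS "pointer file", which is what actually gets committed in place of the large file. Per the Git LFS pointer specification it is a three-line file of this shape (the oid and size below are placeholders, not real values):

```
version https://git-lfs.github.com/spec/v1
oid sha256:<64-hex-digit SHA-256 of the real file content>
size <file size in bytes>
```

Git versions this tiny file cheaply; the content it points at lives in `.git/lfs/objects` (or on the LFS server) and is swapped in and out by the smudge/clean filters.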
clarify which files for jon, introduce unlocked files
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 71d7684c..d4847734 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -175,15 +175,17 @@ committing the symbolic links into the Git repository. This design
 turned out to be a little confusing to users, including myself; I have
 managed to shoot myself in the foot more than once using this system.
 
-Since then, git-annex has adopted a different mode that is also based on
-smudge/clean filters, which it called the "v7 mode". Like Git LFS, those
-files will double disk space usage by default. However it *is* possible
+Since then, git-annex has adopted a different v7 mode that is also
+based on smudge/clean filters, which it called "[unlocked files][]". Like
+Git LFS, unlocked files will double disk space usage by default. However it *is* possible
 to reduce disk space usage by using "thin mode" which uses hard links
 between the internal git-annex disk storage and the work tree. The
 downside is, of course, that changes are immediately performed on files,
 which means previous file versions are automatically discarded. This can
 lead to data loss if users are not careful.
 
+[unlocked files]: https://git-annex.branchable.com/tips/unlocked_files/
+
 Furthermore, git-annex in v7 mode suffers from some of the performance
 problems affecting Git LFS, because both use the smudge/clean filters.
 Hess actually has [ideas][] on how the smudge/clean interface could be

restore my work
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 46d15db8..71d7684c 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -25,7 +25,7 @@ Every file is a "blob" in Git's object store, addressed by its
 cryptographic hash. A new version of that file will store a new blob in
 Git's history, with no deduplication between the two versions. The pack
 file format does offer some binary compression over the repository, but
-in practice this does not offer much help for large files.
+in practice this does not offer much help for {binary} large files.
 
 There has been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
@@ -135,7 +135,7 @@ While the actual server-side implementation used by GitHub is closed
 source, there is a [test server][] provided as an example
 implementation. Other Git hosting platforms have also [implemented][]
 support for the LFS [API][], including GitLab, Gitea, and BitBucket,
-something git-fat and GitMedia never achieved. LFS does support hosting
+while git-fat and GitMedia never got that privilege. LFS does support hosting
 large files on a server other than the central one — a project could run
 its own LFS server, for example — but this will involve a different set
 of credentials, bringing back the difficult user onboarding that
@@ -143,8 +143,7 @@ affected git-fat and GitMedia.
 
 Another limitation is that LFS only supports pushing and pulling files
 over HTTP(S) — no SSH transfers. LFS uses some [tricks][] to bypass HTTP
-basic authentication, fortunately, but this means that files cannot be
-stored locally, which can make offline work awkward. This might change
+basic authentication, fortunately. This might also change
 in the future as there are proposals to add [SSH support][], resumable
 uploads through the [tus.io protocol][], and other [custom transfer
 protocols][].

work from jon
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 1dab1f3f..46d15db8 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T14:32:05-0500"]]
+[[!meta updated="2018-12-07T14:45:17-0500"]]
 
 [[!toc levels=2]]
 
@@ -25,7 +25,7 @@ Every file is a "blob" in Git's object store, addressed by its
 cryptographic hash. A new version of that file will store a new blob in
 Git's history, with no deduplication between the two versions. The pack
 file format does offer some binary compression over the repository, but
-in practice this does not offer much help for {binary} large files.
+in practice this does not offer much help for large files.
 
 There has been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
@@ -104,10 +104,10 @@ hash (currently SHA-256) of the file. This brings the extra feature that
 multiple copies of the same file in the same repository are
 automatically deduplicated, although in practice this rarely occurs.
 
-Git LFS will copy large files to that internal repository on `git add`.
+Git LFS will copy large files to that internal storage on `git add`.
 When a file is modified in the repository, Git notices, the new version
-is copied to the internal repository, and the pointer file is updated.
-The old version is left dangling until the repository is pruned.
+is copied to the internal storage, and the pointer file is updated. The
+old version is left dangling until the repository is pruned.
 
 This process only works for new files you are importing into Git,
 however. If a Git repository already has large files in its history, LFS
@@ -135,7 +135,7 @@ While the actual server-side implementation used by GitHub is closed
 source, there is a [test server][] provided as an example
 implementation. Other Git hosting platforms have also [implemented][]
 support for the LFS [API][], including GitLab, Gitea, and BitBucket,
-while git-fat and GitMedia never got that privilege. LFS does support hosting
+something git-fat and GitMedia never achieved. LFS does support hosting
 large files on a server other than the central one — a project could run
 its own LFS server, for example — but this will involve a different set
 of credentials, bringing back the difficult user onboarding that
@@ -143,7 +143,8 @@ affected git-fat and GitMedia.
 
 Another limitation is that LFS only supports pushing and pulling files
 over HTTP(S) — no SSH transfers. LFS uses some [tricks][] to bypass HTTP
-basic authentication, fortunately. This might also change
+basic authentication, fortunately, but this means that files cannot be
+stored locally, which can make offline work awkward. This might change
 in the future as there are proposals to add [SSH support][], resumable
 uploads through the [tus.io protocol][], and other [custom transfer
 protocols][].
@@ -159,22 +160,21 @@ git-annex
 
 The other main player in large file support for Git is git-annex. We
 [covered the project][] back in 2010, shortly after its first release,
-but it's certainly worth discussing what changed in the eight years
+but it's certainly worth discussing what has changed in the eight years
 since Joey Hess launched the project.
 
 Like Git LFS, git-annex takes large files out of Git's history. The way
 it handles this is by storing a symbolic link to the file in
 `.git/annex`. We should probably credit Hess for this innovation, since
-the Git LFS storage layout is obviously inspired by git-annex's earlier
-design. Git-annex's original design introduced all sorts of problems
-however, especially on filesystems lacking symbolic-link support. So
-Hess has implemented different solutions to this problem. Originally,
-when git-annex detected such a "crippled" filesystem, it switched to
-[direct mode][], which kept files directly in the work tree, while
-internally committing the symbolic links into the Git repository. This
-design turned out to be a little confusing to users, including myself; I
-have managed to shoot myself in the foot more than once using this
-system.
+the Git LFS storage layout is obviously inspired by git-annex. The
+original design of git-annex introduced all sorts of problems however,
+especially on filesystems lacking symbolic-link support. So Hess has
+implemented different solutions to this problem. Originally, when
+git-annex detected such a "crippled" filesystem, it switched to [direct
+mode][], which kept files directly in the work tree, while internally
+committing the symbolic links into the Git repository. This design
+turned out to be a little confusing to users, including myself; I have
+managed to shoot myself in the foot more than once using this system.
 
 Since then, git-annex has adopted a different mode that is also based on
 smudge/clean filters, which it called the "v7 mode". Like Git LFS, those
@@ -186,8 +186,8 @@ which means previous file versions are automatically discarded. This can
 lead to data loss if users are not careful.
 
 Furthermore, git-annex in v7 mode suffers from some of the performance
-problems affecting Git LFS, because of their use of filters. Hess
-actually has [ideas][] on how the smudge/clean interface could be
+problems affecting Git LFS, because both use the smudge/clean filters.
+Hess actually has [ideas][] on how the smudge/clean interface could be
 improved. He proposes changing Git so that it stops buffering entire
 files into memory, allows filters to access the work tree directly, and
 adds the hooks he found missing (for `stash`, `reset`, and
@@ -202,7 +202,7 @@ be editable, which might be counter-intuitive to new users. In general,
 git-annex has some of those unusual quirks and interfaces that often
 come with more powerful software.
 
-And git-annex is much more powerful: it does not only address the
+And git-annex is much more powerful: it not only addresses the
 "large-files problem" but goes much further. For example, it supports
 "partial checkouts" — downloading only some of the large files. I find
 that especially useful to manage my video, music, and photo collections,
@@ -251,11 +251,11 @@ ultimately determine which solution is the right one for you.
 Ironically, after thorough evaluation of large-file solutions for the
 Debian security tracker, I ended up proposing to rewrite history and
 [split the file by year][] which improved all performance markers by at
-least an order of magnitude, bringing performance back to normal. As it
-turns out, keeping history is critical for the security team so all
-solutions that move large files outside of the Git repository are not
-acceptable. Therefore, before adding large files into Git, you might
-want to think about organizing your content correctly first.
+least an order of magnitude. As it turns out, keeping history is
+critical for the security team so any solution that moves large files
+outside of the Git repository is not acceptable. Therefore, before
+adding large files into Git, you might want to think about organizing
+your content correctly first.
 
   [commit graph work]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt
   [Git LFS]: https://git-lfs.github.com/

tweak/remove some awkward stuff jon found and tag a todo
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 2bc78b0a..1dab1f3f 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -25,7 +25,7 @@ Every file is a "blob" in Git's object store, addressed by its
 cryptographic hash. A new version of that file will store a new blob in
 Git's history, with no deduplication between the two versions. The pack
 file format does offer some binary compression over the repository, but
-in practice this does not offer much help for large files.
+in practice this does not offer much help for {binary} large files.
 
 There has been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
@@ -135,7 +135,7 @@ While the actual server-side implementation used by GitHub is closed
 source, there is a [test server][] provided as an example
 implementation. Other Git hosting platforms have also [implemented][]
 support for the LFS [API][], including GitLab, Gitea, and BitBucket,
-something git-fat and GitMedia never achieved. LFS does support hosting
+while git-fat and GitMedia never got that privilege. LFS does support hosting
 large files on a server other than the central one — a project could run
 its own LFS server, for example — but this will involve a different set
 of credentials, bringing back the difficult user onboarding that
@@ -143,8 +143,7 @@ affected git-fat and GitMedia.
 
 Another limitation is that LFS only supports pushing and pulling files
 over HTTP(S) — no SSH transfers. LFS uses some [tricks][] to bypass HTTP
-basic authentication, fortunately, but this means that files cannot be
-stored locally, which can make offline work awkward. This might change
+basic authentication, fortunately. This might also change
 in the future as there are proposals to add [SSH support][], resumable
 uploads through the [tus.io protocol][], and other [custom transfer
 protocols][].

jake corrections
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 4a8405e4..2bc78b0a 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,26 +3,25 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T12:56:59-0500"]]
+[[!meta updated="2018-12-07T14:32:05-0500"]]
 
 [[!toc levels=2]]
 
-By design, Git does not handle large files very well. While there is
-work underway to handle large *repositories* through the [commit graph
-work][], Git's internal design has remained surprisingly constant
-throughout its history, which means that storing large files into Git
-comes with a significant and, ultimately, prohibitive performance cost.
-Thankfully, other projects are helping Git address this challenge. This
-article compares how [Git LFS][] and [git-annex][] address this problem
-and should help readers pick the right solution for their needs.
+Git does not handle large files very well. While there is work underway
+to handle large *repositories* through the [commit graph work][], Git's
+internal design has remained surprisingly constant throughout its
+history, which means that storing large files into Git comes with a
+significant and, ultimately, prohibitive performance cost. Thankfully,
+other projects are helping Git address this challenge. This article
+compares how [Git LFS][] and [git-annex][] address this problem and
+should help readers pick the right solution for their needs.
 
 The problem with large files
 ----------------------------
 
-Git's problem with large files comes from its design. As readers
-probably know, Linus Torvalds wrote Git to manage the history of the
-kernel source code, which is a large collection of small files. Every
-file is a "blob" in Git's internal storage, addressed by its
+As readers probably know, Linus Torvalds wrote Git to manage the history
+of the kernel source code, which is a large collection of small files.
+Every file is a "blob" in Git's object store, addressed by its
 cryptographic hash. A new version of that file will store a new blob in
 Git's history, with no deduplication between the two versions. The pack
 file format does offer some binary compression over the repository, but
@@ -38,8 +37,8 @@ Then in 2009, [Caca Labs][] worked on improving the `fast-import` and
 `pack-objects` Git commands to do special handling for big files, in an
 effort called [git-bigfiles][]. Some of those changes eventually made it
 into Git: for example, since [1.7.6][], Git will stream large files
-directly to a pack file instead of holding it all in memory. But files
-are still kept forever in history.
+directly to a pack file instead of holding them all in memory. But files
+are still kept forever in the history.
 
 An example of trouble I had to deal with is for the Debian security
 tracker, which follows all security issues in the entire Debian history
@@ -53,17 +52,17 @@ ten minutes. So even though that is a simple text file, it's grown large
 enough to cause significant problems for Git, which is otherwise known
 for stellar performance.
 
-Intuitively, the problem is that Git needs to copy files into its
-internal storage to track them. Third-party projects therefore typically
-solve the large-files problem by taking files out of Git. In 2009, Git
-evangelist Scott Chacon released [GitMedia][], which is a Git filter
-that simply takes large files out of Git. Unfortunately, there hasn't
-been an official release since then and it's [unclear][] if the project
-is still maintained. The next effort to come up was [git-fat][], first
-released in 2012 and still maintained. But neither tool has seen massive
-adoption yet. If I would have to venture a guess, it might be because
-both require manual configuration. Both also require a custom server
-(rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits
+Intuitively, the problem is that Git needs to copy files into its object
+store to track them. Third-party projects therefore typically solve the
+large-files problem by taking files out of Git. In 2009, Git evangelist
+Scott Chacon released [GitMedia][], which is a Git filter that simply
+takes large files out of Git. Unfortunately, there hasn't been an
+official release since then and it's [unclear][] if the project is still
+maintained. The next effort to come up was [git-fat][], first released
+in 2012 and still maintained. But neither tool has seen massive adoption
+yet. If I would have to venture a guess, it might be because both
+require manual configuration. Both also require a custom server (rsync
+for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits
 collaboration since users need access to another service.
 
 Git LFS
@@ -84,13 +83,12 @@ checkout. Git only stores that small text file and does so efficiently.
 The downside, of course, is that large files are not version controlled:
 only the latest version of a file is kept in the repository.
 
-After installing LFS, it can be used in any repository by installing the
-right hooks with `git lfs install` then asking LFS to track any given
-file with `git lfs track`. This will add the file to the
-`.gitattributes` file which will make Git run the proper LFS filters.
-It's also possible to add patterns to the `.gitattributes` file, of
-course. For example, this will make sure Git LFS will track MP3 and ZIP
-files:
+Git LFS can be used in any repository by installing the right hooks with
+`git lfs install` then asking LFS to track any given file with
+`git lfs track`. This will add the file to the `.gitattributes` file
+which will make Git run the proper LFS filters. It's also possible to
+add patterns to the `.gitattributes` file, of course. For example, this
+will make sure Git LFS will track MP3 and ZIP files:
 
         $ cat .gitattributes
         *.mp3 filter=lfs -text
@@ -126,28 +124,29 @@ remove other user's locks by using the `--force` flag. LFS can also
 [prune][] old or unreferenced files.
 
 The main [limitation][] of LFS is that it's bound to a single upstream:
-large files are usually stored in the same location as the main Git
+large files are usually stored in the same location as the central Git
 repository. If it is hosted on GitHub, this means a default quota of 1GB
 storage and bandwidth, but you can purchase additional "packs" to expand
-those quotas at 5$/50GB (storage and bandwidth). GitHub also limits the
-size individual files to 2GB. This [upset][] some users surprised by the
-bandwidth fees, which were previously hidden in GitHub's cost structure.
+both of those quotas. GitHub also limits the size of individual files to
+2GB. This [upset][] some users surprised by the bandwidth fees, which
+were previously hidden in GitHub's cost structure.
 
 While the actual server-side implementation used by GitHub is closed
 source, there is a [test server][] provided as an example
 implementation. Other Git hosting platforms have also [implemented][]
 support for the LFS [API][], including GitLab, Gitea, and BitBucket,
 something git-fat and GitMedia never achieved. LFS does support hosting
-on a *different* server — a project could run its own LFS server, for
-example — but this will involve a different set of credentials, bringing
-back the difficult user onboarding that affected git-fat and GitMedia.
+large files on a server other than the central one — a project could run
+its own LFS server, for example — but this will involve a different set
+of credentials, bringing back the difficult user onboarding that
+affected git-fat and GitMedia.
 
 Another limitation is that LFS only supports pushing and pulling files
-over HTTP(S) — no SSH transfers. LFS uses some tricks to bypass HTTPS
-authentication, fortunately, but this means that files cannot be stored
-locally, which can make offline work awkward. This might change in the
-future as there are proposals to add [SSH support][], resumable uploads
-through the [tus.io protocol][], and other [custom transfer
+over HTTP(S) — no SSH transfers. LFS uses some [tricks][] to bypass HTTP
+basic authentication, fortunately, but this means that files cannot be
+stored locally, which can make offline work awkward. This might change
+in the future as there are proposals to add [SSH support][], resumable
+uploads through the [tus.io protocol][], and other [custom transfer
 protocols][].
 
 Finally, LFS can be slow. Every file added to LFS takes up double the
@@ -279,6 +278,7 @@ want to think about organizing your content correctly first.
   [test server]: https://github.com/git-lfs/lfs-test-server
  [implemented]: https://github.com/git-lfs/git-lfs/wiki/Implementations
   [API]: https://github.com/git-lfs/git-lfs/tree/master/docs/api
+  [tricks]: https://github.com/git-lfs/git-lfs/blob/master/docs/api/authentication.md
   [SSH support]: https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md
   [tus.io protocol]: https://tus.io/
   [custom transfer protocols]: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md
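The `.gitattributes` rules shown in the hunk above can be inspected without git-lfs installed; a minimal sketch (throwaway repository, hypothetical file names) of how Git resolves the `filter` attribute:

```shell
# Sketch: how the .gitattributes patterns route paths to the LFS filter.
# Only git itself is needed: `git check-attr` consults the rules without
# running any filter.
dir=$(mktemp -d)
cd "$dir"
git init -q .
printf '*.mp3 filter=lfs -text\n*.zip filter=lfs -text\n' > .gitattributes
git check-attr filter song.mp3 notes.txt
# song.mp3 matches the *.mp3 pattern; notes.txt matches nothing
```

When git-lfs is actually installed, the `git lfs install` step is what maps the `lfs` filter name to the git-lfs clean/smudge commands in the Git configuration.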

another tweak from lwn
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 5b93e7fd..4a8405e4 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T12:33:16-0500"]]
+[[!meta updated="2018-12-07T12:56:59-0500"]]
 
 [[!toc levels=2]]
 
@@ -107,10 +107,9 @@ multiple copies of the same file in the same repository are
 automatically deduplicated, although in practice this rarely occurs.
 
 Git LFS will copy large files to that internal repository on `git add`.
-When a file is modified in the repository, Git will notice thanks to the
-filters, the new version is copied to the internal repository, and the
-pointer file is updated. The old version is left dangling until the
-repository is pruned.
+When a file is modified in the repository, Git notices, the new version
+is copied to the internal repository, and the pointer file is updated.
+The old version is left dangling until the repository is pruned.
 
 This process only works for new files you are importing into Git,
 however. If a Git repository already has large files in its history, LFS

remerge with lwn
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index eee2f832..5b93e7fd 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@
 -------------------------------
 
 [[!meta date="2018-12-06T00:00:00+0000"]]
-[[!meta updated="2018-12-07T09:42:04-0500"]]
+[[!meta updated="2018-12-07T12:33:16-0500"]]
 
 [[!toc levels=2]]
 
@@ -31,8 +31,8 @@ in practice this does not offer much help for large files.
 There have been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
 duplication between the index and the pack files. Those changes were
-eventually [reverted by Nicolas Pitre][] as the "*extra object format
-didn't appear to be worth it anymore*".
+eventually reverted because, as Nicolas Pitre [put it][]: "*that extra
+loose object format doesn't appear to be worth it anymore*".
 
 Then in 2009, [Caca Labs][] worked on improving the `fast-import` and
 `pack-objects` Git commands to do special handling for big files, in an
@@ -106,16 +106,16 @@ hash (currently SHA-256) of the file. This brings the extra feature that
 multiple copies of the same file in the same repository are
 automatically deduplicated, although in practice this rarely occurs.
 
-LFS will copy large files in that internal repository on `git add` or
-other operations that similarly modify the index. When a file is
-modified in the repository, that new version is copied to the LFS
-storage and the pointer is updated. The old version is left dangling
-until `git lfs prune` is run to remove older copies.
+Git LFS will copy large files to that internal repository on `git add`.
+When a file is modified in the repository, Git will notice thanks to the
+filters, the new version is copied to the internal repository, and the
+pointer file is updated. The old version is left dangling until the
+repository is pruned.
 
-LFS only works on new files imported into Git, however. If a
-Git repository already has large files in its history, LFS can
-fortunately "fix" repositories by retroactively rewriting history with
-[git lfs migrate][]. This has all the normal downsides of rewriting
+This process only works for new files you are importing into Git,
+however. If a Git repository already has large files in its history, LFS
+can fortunately "fix" repositories by retroactively rewriting history
+with [git lfs migrate][]. This has all the normal downsides of rewriting
 history, however — existing clones will have to be reset to benefit from
 the cleanup.
 
@@ -153,9 +153,9 @@ protocols][].
 
 Finally, LFS can be slow. Every file added to LFS takes up double the
 space on the local filesystem as it is copied to the `.git/lfs/objects`
-storage. The smudge/clean interface is also slow: it works as a pipe and
-will need to read and write the entire file in memory each time which
-can be prohibitive with files larger than available memory.
+storage. The smudge/clean interface is also slow: it works as a pipe,
+but buffers the file contents in memory each time, which can be
+prohibitive with files larger than available memory.
 
 git-annex
 ---------
@@ -191,19 +191,19 @@ lead to data loss if users are not careful.
 Furthermore, git-annex in v7 mode suffers from some of the performance
 problems affecting Git LFS, because of their use of filters. Hess
 actually has [ideas][] on how the smudge/clean interface could be
-improved. He proposes fixing Git so it stops buffering entire files
-into memory, allowing filters to access the work tree directly, and
-adding hooks he found missing, on `stash`, `reset` and
-`cherry-pick`. Git-annex already implements some tricks to work around
-those problems itself but it would be better for those to be
-implemented in Git natively.
+improved. He proposes changing Git so that it stops buffering entire
+files into memory, allows filters to access the work tree directly, and
+adds the hooks he found missing (for `stash`, `reset`, and
+`cherry-pick`). Git-annex already implements some tricks to work around
+those problems itself but it would be better for those to be implemented
+in Git natively.
 
 Being more distributed by design, git-annex does not have the same
 "locking" semantics as LFS. Locking a file in git-annex means protecting
-it from changes, so files need to actually be "unlocked" to be editable,
-which might be counter-intuitive to new users. In general, git-annex has
-some of those unusual quirks and interfaces that often come with more
-powerful software.
+it from changes, so files need to actually be in the "unlocked" state to
+be editable, which might be counter-intuitive to new users. In general,
+git-annex has some of those unusual quirks and interfaces that often
+come with more powerful software.
 
 And git-annex is much more powerful: it does not only address the
 "large-files problem" but goes much further. For example, it supports
@@ -264,7 +264,7 @@ want to think about organizing your content correctly first.
   [Git LFS]: https://git-lfs.github.com/
   [git-annex]: https://git-annex.branchable.com/
   [improving the pack-file format]: https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/
-  [reverted]: https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/
+  [put it]: https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/
   [Caca Labs]: http://caca.zoy.org/
   [git-bigfiles]: http://caca.zoy.org/wiki/git-bigfiles
   [1.7.6]: https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/
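The smudge/clean pipe model criticized in the diff above is easy to observe with a stand-in filter; this sketch uses a hypothetical `demo` filter built on `tr` (rot13), not anything LFS or git-annex actually ship:

```shell
# Sketch of Git's clean/smudge filter pipe. Git streams the whole file
# through the filter's stdin/stdout, which is why large files end up
# buffered in memory.
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config filter.demo.clean  "tr 'a-zA-Z' 'n-za-mN-ZA-M'"
git config filter.demo.smudge "tr 'a-zA-Z' 'n-za-mN-ZA-M'"
printf '*.txt filter=demo\n' > .gitattributes
printf 'hello' > f.txt
git add f.txt            # the clean filter runs here
git cat-file -p :f.txt   # what Git stored: the rot13'd content, "uryyb"
```

LFS's clean filter replaces the content with the small pointer file in the same way, and the smudge filter swaps the real file back in on checkout.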

respond to jake's comments
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 04374020..eee2f832 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -30,9 +30,9 @@ in practice this does not offer much help for large files.
 
 There have been different attempts at fixing this in the past. In 2006,
 Torvalds worked on [improving the pack-file format][] to reduce object
-duplication between the index and the pack files, but those changes were
-eventually [reverted][] as the "*extra object format didn't appear to be
-worth it anymore*".
+duplication between the index and the pack files. Those changes were
+eventually [reverted by Nicolas Pitre][] as the "*extra object format
+didn't appear to be worth it anymore*".
 
 Then in 2009, [Caca Labs][] worked on improving the `fast-import` and
 `pack-objects` Git commands to do special handling for big files, in an
@@ -106,12 +106,13 @@ hash (currently SHA-256) of the file. This brings the extra feature that
 multiple copies of the same file in the same repository are
 automatically deduplicated, although in practice this rarely occurs.
 
-LFS will copy large files in that internal repository on `git add`. When
-that file changes, Git will notice thanks to the filters and a new
-`git add` will add the new version of the file in the internal storage
-and change the pointer file to record the new hash.
+LFS will copy large files in that internal repository on `git add` or
+other operations that similarly modify the index. When a file is
+modified in the repository, that new version is copied to the LFS
+storage and the pointer is updated. The old version is left dangling
+until `git lfs prune` is run to remove older copies.
 
-This only works for new files you are importing into Git, however. If a
+LFS only works on new files imported into Git, however. If a
 Git repository already has large files in its history, LFS can
 fortunately "fix" repositories by retroactively rewriting history with
 [git lfs migrate][]. This has all the normal downsides of rewriting
@@ -190,10 +191,12 @@ lead to data loss if users are not careful.
 Furthermore, git-annex in v7 mode suffers from some of the performance
 problems affecting Git LFS, because of their use of filters. Hess
 actually has [ideas][] on how the smudge/clean interface could be
-improved. This will require changes in Git to avoid buffering the entire
-file in memory, allowing the filter to access the work tree directly,
-and adding extra hooks. Some of those things are already done in
-git-annex by hacking around the filter interface.
+improved. He proposes fixing Git so it stops buffering entire files
+into memory, allowing filters to access the work tree directly, and
+adding hooks he found missing, on `stash`, `reset` and
+`cherry-pick`. Git-annex already implements some tricks to work around
+those problems itself but it would be better for those to be
+implemented in Git natively.
 
 Being more distributed by design, git-annex does not have the same
 "locking" semantics as LFS. Locking a file in git-annex means protecting
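The pointer files discussed in the hunks above are plain text and easy to build by hand; a sketch (hypothetical file name — the real git-lfs does this for you) following the version/oid/size format quoted in the article:

```shell
# Sketch: hand-building an LFS-style pointer for a blob. The oid is the
# SHA-256 of the content, and size is its length in bytes.
dir=$(mktemp -d)
cd "$dir"
printf 'hello large file' > big.bin
oid=$(sha256sum big.bin | cut -d' ' -f1)
size=$(wc -c < big.bin | tr -d ' ')
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:%s\nsize %s\n' "$oid" "$size"
```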

import from LWN
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 04b18922..04374020 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -1,264 +1,303 @@
+[[!meta title="Large files with Git: LFS and git-annex"]]
+\[LWN subscriber-only content\]
+-------------------------------
 
-Large files with Git: LFS and git-annex
-=======================================
+[[!meta date="2018-12-06T00:00:00+0000"]]
+[[!meta updated="2018-12-07T09:42:04-0500"]]
+
+[[!toc levels=2]]
 
 By design, Git does not handle large files very well. While there is
 work underway to handle large *repositories* through the [commit graph
-work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), Git's internal design has remained surprisingly constant
-throughout its history which means storing large files into Git comes
-with a significant and, ultimately, prohibitive performance
-cost. Thankfully, other projects are helping Git address this
-challenge. This article compares how [Git LFS](https://git-lfs.github.com/) and [git-annex](https://git-annex.branchable.com/)
-address this problem and should help readers pick the right solution
-for their workflow.
+work][], Git's internal design has remained surprisingly constant
+throughout its history, which means that storing large files into Git
+comes with a significant and, ultimately, prohibitive performance cost.
+Thankfully, other projects are helping Git address this challenge. This
+article compares how [Git LFS][] and [git-annex][] address this problem
+and should help readers pick the right solution for their needs.
 
 The problem with large files
-============================
+----------------------------
 
-Git's problem with large files comes from its design. As our readers
+Git's problem with large files comes from its design. As readers
 probably know, Linus Torvalds wrote Git to manage the history of the
-kernel source code, a large collection of small files. Every file is a
-"blob" in Git's internal storage, addressed by its cryptographic hash. A new
-version of that file will store a new blob in Git's history, with no
-deduplication between the two versions. The pack file format does
-offer some binary compression over the repository, but in practice
-this does not offer much help for large files as the contents are not
-contiguous which limits the compressor's performance.
+kernel source code, which is a large collection of small files. Every
+file is a "blob" in Git's internal storage, addressed by its
+cryptographic hash. A new version of that file will store a new blob in
+Git's history, with no deduplication between the two versions. The pack
+file format does offer some binary compression over the repository, but
+in practice this does not offer much help for large files.
 
 There have been different attempts at fixing this in the past. In 2006,
-Torvalds worked on improving the [pack file format](https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/) to reduce
-object duplication between the index and the pack files, but those
-changes were eventually [reverted](https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/) as the "extra object format 
-didn't appear to be worth it anymore".
-
-Then in 2009, [Caca labs](http://caca.zoy.org/) worked on improving `fast-import` and
-`pack-objects` to handle big files specially, an effort that's called
-[git-bigfiles](http://caca.zoy.org/wiki/git-bigfiles). Some of those changes eventually made it into Git:
-for example, since [1.7.6](https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/) Git will stream large files directly to a
-packfile instead of holding it all in memory. But files are still kept
-forever in history. An example of trouble I had to deal with is the
-Debian security tracker, which follows all security issues in the
-entire Debian history in a single file, which is around
-360,000 lines for a whopping 18MB. The resulting repository takes
-1.6GB of disk space and a *local* clone takes 21 minutes to perform,
-mostly taken by Git resolving deltas. Commit, push, and pull are
-noticably slower than a regular repository, taking anywhere from a few
-seconds to a minute depending on how old the local copy is. And
-running annotate on that large file can take up to ten minutes. So
-even though that is a simple text file, it's grown large enough to
-cause significant problems in Git, which is otherwise known for
-stellar performance.
-
-Intuitively, the problem is Git needs to copy files into its
+Torvalds worked on [improving the pack-file format][] to reduce object
+duplication between the index and the pack files, but those changes were
+eventually [reverted][] as the "*extra object format didn't appear to be
+worth it anymore*".
+
+Then in 2009, [Caca Labs][] worked on improving the `fast-import` and
+`pack-objects` Git commands to do special handling for big files, in an
+effort called [git-bigfiles][]. Some of those changes eventually made it
+into Git: for example, since [1.7.6][], Git will stream large files
+directly to a pack file instead of holding it all in memory. But files
+are still kept forever in history.
+
+An example of trouble I had to deal with is for the Debian security
+tracker, which follows all security issues in the entire Debian history
+in a single file. That file is around 360,000 lines for a whopping 18MB.
+The resulting repository takes 1.6GB of disk space and a *local* clone
+takes 21 minutes to perform, mostly taken up by Git resolving deltas.
+Commit, push, and pull are noticeably slower than a regular repository,
+taking anywhere from a few seconds to a minute depending on how old the
+local copy is. And running annotate on that large file can take up to
+ten minutes. So even though that is a simple text file, it's grown large
+enough to cause significant problems for Git, which is otherwise known
+for stellar performance.
+
+Intuitively, the problem is that Git needs to copy files into its
 internal storage to track them. Third-party projects therefore typically
-solve the large files problem by taking files out of Git. In 2009, Git
-evangelist Scott Chacon released [GitMedia](https://github.com/schacon/git-media), a Git filter which
-simply takes large files out of Git. Unfortunately, there hasn't been
-an official release since then and it's [unclear](https://github.com/alebedev/git-media/issues/15) if the project is
-still maintained. The next effort to come up was [git-fat](https://github.com/jedbrown/git-fat), first
-released in 2012 and still maintained. But neither tool has seen
-massive adoption yet. If I had to venture a guess, it might be
-because both require manual configuration. Both also require a custom
-server (rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia)
-which limits collaboration as users need access to another service.
+solve the large-files problem by taking files out of Git. In 2009, Git
+evangelist Scott Chacon released [GitMedia][], which is a Git filter
+that simply takes large files out of Git. Unfortunately, there hasn't
+been an official release since then and it's [unclear][] if the project
+is still maintained. The next effort to come up was [git-fat][], first
+released in 2012 and still maintained. But neither tool has seen massive
+adoption yet. If I had to venture a guess, it might be because
+both require manual configuration. Both also require a custom server
+(rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits
+collaboration since users need access to another service.
 
 Git LFS
-=======
+-------
 
-That was before GitHub released Git LFS in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/). Like all
-software taking files out of Git, LFS tracks file *hashes* instead
-file *contents*. So instead of adding large files into Git directly,
-LFS adds a point file to the Git repository, which looks like this:
+That was before GitHub [released][] Git Large File Storage (LFS) in
+August 2015. Like all software taking files out of Git, LFS tracks file
+*hashes* instead of file *contents*. So instead of adding large files
+into Git directly, LFS adds a pointer file to the Git repository, which
+looks like this:
 
-    version https://git-lfs.github.com/spec/v1
-    oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
-    size 12345
+        version https://git-lfs.github.com/spec/v1
+        oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
+        size 12345
 
 LFS then uses Git's smudge and clean filters to show the real file on
-checkout. Git only stores that small text file and does so
-effficiently. The downside, of course, is that large files are not
-version controlled: only the latest version of a file is kept in the
-repository.
-
-After installing LFS, it can be used in any repository by installing
-the right hooks with `git lfs install` then asking LFS to track any
-given file, with `git lfs track`. This will add the file to
-the `.gitattributes` file which will make Git fire the proper LFS
-filters. It's also possible to add patterns to the `.gitattributes`
-file, of course. For example, this will make sure Git LFS will track
-MP3 and ZIP files:
-
-    $ cat .gitattributes
-    *.mp3 filter=lfs -text
-    *.zip filter=lfs -text
-
-After this configuration, we use Git normally: `git add`, `git commit`,
+checkout. Git only stores that small text file and does so efficiently.
+The downside, of course, is that large files are not version controlled:
+only the latest version of a file is kept in the repository.
+
+After installing LFS, it can be used in any repository by installing the
+right hooks with `git lfs install` then asking LFS to track any given
+file with `git lfs track`. This will add the file to the
+`.gitattributes` file which will make Git run the proper LFS filters.
+It's also possible to add patterns to the `.gitattributes` file, of
+course. For example, this will make sure Git LFS will track MP3 and ZIP
+files:
+
+        $ cat .gitattributes
+        *.mp3 filter=lfs -text
+        *.zip filter=lfs -text
+
+After this configuration, we use Git normally: `git add`, `git commit`,
 and so on will talk to Git LFS transparently.
 
 The actual files tracked by LFS are copied to a path like
-`.git/lfs/objects/{OID-PATH}`, where `{OID-PATH}` is a sharded
-filepath of the form `OID[0:2]/OID[2:4]/OID` and where `OID` is the
-content's hash (currently SHA256) of the file. This brings the extra
-feature that multiple copies of the same file in the same repository
-are automatically deduplicated, although in practice this rarely
-occurs.
-
-How this works in practice is that LFS will copy large files in that
-internal repository on `git add`. When that file changes, git will
-notice thanks to the filters and a new `git add` will add the new
-version of the file in the internal storage and change the pointer
-file to record the new hash.
-
-This only works for new files you are importing into Git, however. If
-a Git repository already has large files in its history, LFS can

(Diff truncated)
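The sharded storage layout described in the truncated diff above (`OID[0:2]/OID[2:4]/OID`) is simple to compute; a bash sketch using the example oid from the article:

```shell
# Sketch: the local path Git LFS derives from a pointer's oid, sharding
# on the first two byte pairs (bash substring expansion).
oid=4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
path=".git/lfs/objects/${oid:0:2}/${oid:2:2}/${oid}"
echo "$path"   # .git/lfs/objects/4d/7a/4d7a2146...
```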
progress update
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index 746623d9..c4427efa 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -502,24 +502,27 @@ Checklist:
    * December: ok, DSCF7823.jpg, woodpecker.
  * fix the print date in the colophon (done, generated
    automatically at PDF render time)
+ * re-center DSCF4890 (porcupine) - tried a crop
+ * re-center the cover page (left crop line too close
+   to the photo), [reported upstream](https://github.com/profound-labs/wallcalendar/issues/14)
+ * "de-smudge" DSCF6767 (house) - removed the noise reduction
+ * possibly output the photos as TIFF - not possible, [LaTeX
+   does not support TIF](https://tex.stackexchange.com/questions/89989/add-tif-image-to-latex), but it does support PNG, and no
+   difference from JPG visible to the naked eye at 400% in evince
+ * possibly produce the [uncompressed PDF][latex-uncompressed], no difference
+   visible to the naked eye at 400% in evince
+ * possibly [output the PDF in CMYK][latex-cmyk] - does not seem necessary
+   for mardigrafe, after all
+ * tested Adobe RGB output: dull colors on screen, yuck.
 
 Remaining tasks:
 
  * make a home page for the project
  * point the link in the colophon (with a qr-code, in
    [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode) to the home page
- * standardize the crop lines? (not aligned on the
-   pollo/xerox proof)
  * paper choice (glossy on both sides, per Lozeau: 240gsm+)
  * binding technique choice (a priori: spiral, Repro-UQAM,
    see below)
- * re-center DSCF4890 (porcupine)
- * re-center the cover page (left crop line too close
-   to the photo)
- * "de-smudge" DSCF6767 (house)
- * possibly output the photos as TIFF
- * possibly produce the [uncompressed PDF][latex-uncompressed]
- * possibly [output the PDF in CMYK][latex-cmyk]
  * printing of a test proof (did a few tests with pollo and
    BEG, not finished)
  * proof correction
@@ -650,6 +653,11 @@ manual, but which is (surprise!) [available at BEG](https://www.staples.com/X
 100lbs!, 1500 sheets, 7¢/sheet). There does not seem to be any gloss
 that does double-sided on this machine. (!)
 
+Louis Desjardins, of Mardigrafe, quoted the project at $14/calendar
+and did a (free!) correction pass on a PDF proof, flagging a few
+typographic, spelling, and layout flaws. Mardigrafe uses a
+[Konica-Minolta BizHub Press C1070](https://www.biz.konicaminolta.com/production/c1070_c1060/index.html).
+
 Repro-UQAM does binding and cutting. On a batch of 20 calendars, $7.80
 for the cut, $1.50 per continuous-spiral binding ($2.20 for 10). To be
 done before December 21, 24h deadline, possibly 2 days. I have

more details on paper
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index a6b146ef..746623d9 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -624,7 +624,7 @@ Need to see which "grammage" to choose. It's not obvious because it's
 sometimes in "lbs" and sometimes in grams. The "naive" conversion seems
 to be [1.48gsm/lbs](https://bollorethinpapers.com/en/basic-calculators/gsm-basis-weight-calculator), but according to [this table](http://coastalprint.com/convert_gsm_to_pounds/) the imperial system varies
 by paper type (WTF bond, text, cover, bristol, index,
-tag??). To be determined.
+tag??).
 
 Update: did a test at BEG, on their papers, good success with
 "80lbs cover" (216gsm) glossy on both sides. But this paper is
 in service fees.
 
 I also called Omer Deserres to see if they carry paper. It's
 like BEG: only behind the counter. Their supplier is Fuji
-but they cannot share the contact. They use an Epson P6000.
+but they cannot share the contact. They use an [Epson
+P6000](https://epson.com/Support/Printers/Professional-Imaging-Printers/SureColor-Series/Epson-SureColor-P6000-Standard-Edition/s/SPT_SCP6000SE#manuals).
+
+All these papers are a bit too light. According to Lozeau, you need
+240gsm minimum. But that is much harder to find - even
+BEG's 216gsm is not for sale ... at BEG, and hard to
+find elsewhere (nothing at Deserres either). After more thorough
+research, pollo found the [Xerox manual](https://www.xerox.com/downloads/usa/en/supplies/rml/AltaLink_C8030_C8035_C8045_C8055_RML_April2017.pdf) which
+indicates which papers are "compatible" (read: made by Xerox). The
+[3R11686](https://www.staples.com/Xerox-Bold-Super-Gloss-Cover-12-Point-8-1-2-x-11-Case/product_194862) is interesting (247gsm) but one-sided only (C1S). That
+leaves us only the 3R11462 (280gsm), which only feeds
+manually, but which is (surprise!) [available at BEG](https://www.staples.com/Xerox-Bold-Coated-Gloss-Digital-Printing-Paper-100-lb-Cover-8-1-2-x-11-Case/product_194865) ($111 for
+100lbs!, 1500 sheets, 7¢/sheet). There does not seem to be any gloss
+that does double-sided on this machine. (!)
 
 Repro-UQAM does binding and cutting. On a batch of 20 calendars, $7.80
 for the cut, $1.50 per continuous-spiral binding ($2.20 for 10). To be

update after my print tests and printing research
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index 8760081c..a6b146ef 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -492,7 +492,7 @@ Checklist:
    * Avril: pas sûr, DSCF2305.jpg (runners). était DSCF2175.jpg,
      opitciwan, avant, considérer aussi DSCF2283.JPG (marché)
    * Mai: ok, DSCF4585.RAF (hirondelle)
-   * Juin: ok, DSCF4890.jpg (herisson)
+   * Juin: ok, DSCF4890.jpg (porc-épic)
    * Juillet: ok, DSCF5762.jpg (lac) peut-être remettre DSCF5746.jpg
      si elle sort bien
    * Août: ok, DSCF6767.jpg (maison), peut-être un problème de bruit
@@ -500,18 +500,32 @@ Checklist:
    * Octobre: ok, DSCF7648.jpg (st-gregoire)
    * Novembre: ok, éclaircir? contraste neige?
    * Décembre: ok, DSCF7823.jpg, pic-bois.
+ * corriger date d'impression dans le colophon (fait, générée
+   automatiquement au rendu PDF)
 
 Remaining tasks:
 
  * make a landing page for the project
  * point the link in the colophon (with a QR code, in
    [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode) to the landing page
- * fix the print date in the colophon
- * paper choice
- * choice of binding technique (spirals at UQAM?)
- * print a test proof
+ * make the cut lines uniform? (not aligned on the pollo/xerox
+   proof)
+ * paper choice (glossy on both sides; per Lozeau: 240gsm+)
+ * choice of binding technique (a priori: spirals, Repro-UQAM,
+   see below)
+ * recenter DSCF4890 (porcupine)
+ * recenter the cover page (left cut line too close to
+   the photo)
+ * "de-smudge" DSCF6767 (house)
+ * possibly export the photos as TIFF
+ * possibly produce the [uncompressed PDF][latex-uncompressed]
+ * possibly [output the PDF in CMYK][latex-cmyk]
+ * print a test proof (did a few tests with pollo and
+   BEG, not finished)
  * correct a proof
+ * final print
 
+[latex-uncompressed]: https://tex.stackexchange.com/a/13081/33322
 Remaining upstream bugs (reported):
 
  * fix the September page overflowing (fixed, put the notes back)
@@ -580,19 +594,29 @@ Possible printers:
    to 7 business days, probably the same lab
  * Copie Express (St-Denis/Jean-Talon): $26/calendar ($1.10/sheet,
    $0.85/print)
- * [Mardigrafe](http://mardigrafe.com/): contacted
+ * [Mardigrafe](http://mardigrafe.com/): in contact; they ask for CMYK, [not in
+   Darktable](https://discuss.pixls.us/t/print-shop-asks-for-cmyk-any-options/10176/5) but perhaps possible in [LaTeX][latex-cmyk]
  * [CEGEP](https://agebdeb.org/impressions/): $0.20/sheet
+ * Centre Japonais de la Photo: 450-688-6530
+ * BEG Place Dupuis: 514-843-8647 2, 1, 1
+ * Deserres Marché Central: 514-908-0505
+ * Lozeau: 514-274-6577
+
+[latex-cmyk]: https://tex.stackexchange.com/a/9973/33322
 
 We did tests with 148gsm (grams per square meter) matte paper, but
 it is clear it would come out better on glossy paper (vice
-versa).
+versa). The two printers used:
+
+ * [Xerox C8045](https://www.office.xerox.com/multifunction-printer/color-multifunction/altalink-c8000-series/enus.html) (nice blues, a bit smudgy on some shots)
+ * [Canon imageRUNNER ADVANCE C5550i](https://www.usa.canon.com/internet/portal/us/home/products/details/copiers-mfps-fax-machines/multifunction-copiers/imagerunner-advance-c5550i)
 
 Possible papers at BEG:
 
  * [HP color laser brochure paper](https://www.staples.ca/fr/HP-Papier-laser-couleur-pour-d%C3%A9pliants-8-1-2-po-x-11-po-lustr%C3%A9/product_608153_1-CA_2_20001): $30/150 sheets
    (20¢/sheet), 150gsm / 40 lbs, brightness 97
- * [Staples brochure / flyer paper](https://www.staples.ca/fr/staples-papier-%C3%A0-brochure-et-circulaire-mat-8-x-11-po/product_SS2006024_1-CA_2_20001#/id='dropdown_610489'): $38/150 sheets (25¢/sheet), 48lbs
-   (120gsm or 170gsm?)
+ * [Staples brochure / flyer paper](https://www.staples.ca/fr/staples-papier-%C3%A0-brochure-et-circulaire-mat-8-x-11-po/product_SS2006024_1-CA_2_20001#/id='dropdown_610489'): $38/150 sheets
+   (25¢/sheet), 48lbs (120gsm or 170gsm?)
  * [Verso - Sterling Premium Digital gloss laser paper](https://www.staples.ca/fr/verso-papier-laser-sterling-num%C3%A9rique-lustr%C3%A9-premium-80-lb-8-5-x-11-po-blanc-bte-3000-feuilles-283618/product_2856893_1-CA_2_20001): $153/3000 sheets
    (5¢/sheet), 118gsm, 16mil, brightness 94
 
@@ -601,3 +625,29 @@ sometimes in "lbs" and sometimes in grams. The "naive" conversion seems
 to be [1.48gsm/lbs](https://bollorethinpapers.com/en/basic-calculators/gsm-basis-weight-calculator), but according to [this table](http://coastalprint.com/convert_gsm_to_pounds/) the imperial system
 varies by paper type (WTF bond, text, cover, bristol, index,
 tag??). To be seen.
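A quick way to sanity-check these conversions is a small script. This is a sketch using the commonly published basis-weight factors; the factor table is my assumption, not something stated in these notes:

```python
# Convert US paper basis weights (lbs) to gsm. The pound rating
# depends on the basis sheet size, which differs per paper *type*,
# which is why a single factor like 1.48 only works for "text" stock.
# Factor values below are the commonly published ones (assumption).
GSM_PER_LB = {
    "bond": 3.76,   # basis 17x22 in
    "text": 1.48,   # basis 25x38 in, the "naive" factor
    "cover": 2.70,  # basis 20x26 in
    "index": 1.81,  # basis 25.5x30.5 in
}

def lbs_to_gsm(lbs: float, kind: str = "text") -> float:
    return lbs * GSM_PER_LB[kind]

# "80lbs cover" comes out around 216gsm:
print(round(lbs_to_gsm(80, "cover")))  # 216
```

Reassuringly, 80lbs cover lands on 216gsm, the same figure quoted for the BEG test paper below.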
+
+Update: did a test at BEG, on their papers, with good success on
+"80lbs cover" (216gsm) glossy on both sides. But this paper is only
+available "behind the counter", when ordering prints from them, not
+in store. The test print cost $5.92 for 5 photos, i.e. 53¢/print and
+10¢/sheet ("lttr text C2S"), with a $2 service fee.
+
+I also called Omer Deserres to see if they carry paper. Same as BEG:
+only behind the counter. Their supplier is Fuji, but they cannot give
+out their contact. They use an Epson P6000.
+
+Repro-UQAM do binding and cutting. For a batch of 20 calendars, $7.80
+for cutting, $1.50 per continuous-spiral binding ($2.20 for 10). To be
+done before December 21; 24h turnaround, possibly 2 days. I made a
+first binding with "Proclick".
+
+Total costs:
+
+ * Paper: $0.65-3.25/calendar (5-25¢/sheet)
+ * Printing: $2.60/calendar (20¢/sheet)
+ * Binding: $1.50/calendar
+ * Subtotal: $4.75-7.35/calendar
+ * 20 calendars: $95-147
+ * Cutting and assembly: $7.80 total
+ * Grand total: ~$103-155
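Those totals can be double-checked with a short script. A sketch, assuming 13 sheets per calendar (12 months plus a cover, inferred from the per-sheet figures) and ~15% Québec sales tax for the BEG receipt; neither assumption is stated explicitly in the notes:

```python
# Rough cost model for the calendar print run.
SHEETS = 13       # 12 months + cover (assumption)
QC_TAX = 1.14975  # GST 5% + QST 9.975% (assumption)

def per_calendar(paper_cents: float, print_cents: float = 20,
                 binding: float = 1.50) -> float:
    return SHEETS * (paper_cents + print_cents) / 100 + binding

low, high = per_calendar(5), per_calendar(25)
print(low, high)                          # 4.75 7.35
print(20 * low + 7.80, 20 * high + 7.80)  # ~103 to ~155 grand total

# The BEG test-print receipt also adds up once tax is included:
beg = 5 * (0.53 + 0.10) + 2.00  # 5 photos + $2 service fee
print(round(beg * QC_TAX, 2))   # 5.92
```

The per-calendar range and the $5.92 receipt both reproduce, which suggests the 13-sheet and tax assumptions are about right.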

address jake comments, resent
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index dddc81e3..04b18922 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -3,7 +3,7 @@ Large files with Git: LFS and git-annex
 =======================================
 
 By design, Git does not handle large files very well. While there is
-work underway to handle large repositories through the [commit graph
+work underway to handle large *repositories* through the [commit graph
 work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), Git's internal design has remained surprisingly constant
 throughout its history which means storing large files into Git comes
 with a significant and, ultimately, prohibitive performance
@@ -18,7 +18,7 @@ The problem with large files
 Git's problem with large files comes from its design. As our readers
 probably know, Linus Torvalds wrote Git to manage the history of the
 kernel source code, a large collection of small files. Every file is a
-"blob" in Git's internal storage, addressed by its checksum. A new
+"blob" in Git's internal storage, addressed by its cryptographic hash. A new
 version of that file will store a new blob in Git's history, with no
 deduplication between the two versions. The pack file format does
 offer some binary compression over the repository, but in practice
@@ -50,7 +50,7 @@ cause significant problems in Git, which is otherwise known for
 stellar performance.
 
 Intuitively, the problem is Git needs to copy files into its
-internal storage to track them. Third projects therefore typically
+internal storage to track them. Third-party projects therefore typically
 solve the large files problem by taking files out of Git. In 2009, Git
 evangelist Scott Chacon released [GitMedia](https://github.com/schacon/git-media), a Git filter which
 simply takes large files out of Git. Unfortunately, there hasn't been
@@ -68,16 +68,16 @@ Git LFS
 That was before GitHub released Git LFS in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/). Like all
 software taking files out of Git, LFS tracks file *hashes* instead of
 file *contents*. So instead of adding large files into Git directly,
-LFS adds text files to the Git repository, which look like this:
+LFS adds a pointer file to the Git repository, which looks like this:
 
     version https://git-lfs.github.com/spec/v1
     oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
     size 12345
 
-LFS then uses Git's smudge and clean filters to show the checkout
-correct file. Git only stores that small text file which it can do
-very efficiently. The downside, of course, is that large files are not
-revisionned: only the latest version of a file is kept in the
+LFS then uses Git's smudge and clean filters to show the real file on
+checkout. Git only stores that small text file and does so
+efficiently. The downside, of course, is that large files are not
+version controlled: only the latest version of a file is kept in the
 repository.
 
 After installing LFS, it can be used in any repository by installing
@@ -98,9 +98,16 @@ and so on will talk to Git LFS transparently.
 The actual files tracked by LFS are copied to a path like
 `.git/lfs/objects/{OID-PATH}`, where `{OID-PATH}` is a sharded
 filepath of the form `OID[0:2]/OID[2:4]/OID` and where `OID` is the
-checksum (currently SHA256) of the file. This brings the extra feature
-that multiple copies of the same file in the same repository are
-automatically deduplicated, although in practice this rarely occurs.
+hash (currently SHA256) of the file's content. This brings the extra
+feature that multiple copies of the same file in the same repository
+are automatically deduplicated, although in practice this rarely
+occurs.
+
+In practice, LFS copies large files into that internal storage on
+`git add`. When the file changes, Git notices thanks to the filters,
+and a new `git add` adds the new version of the file to the internal
+storage and updates the pointer file to record the new hash.
 
 This only works for new files you are importing into Git, however. If
 a Git repository already has large files in its history, LFS can
@@ -113,9 +120,8 @@ LFS also supports [file locking](https://github.com/git-lfs/git-lfs/wiki/File-Lo
 lock on a file, making it readonly everywhere except on the locking
 repository. This allows users to signal others that they are working
 on an LFS file. Those locks are purely advisory, however, as users can
-remove other users' locks with the `git lfs unlock --force`. LFS also
-expires large files after a delay (3 days) and will only fetch recent
-large files (7 days).
+remove other users' locks with `git lfs unlock --force`. LFS can
+also [prune](https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-prune.1.ronn) old or unreferenced files.
 
 The main [limitation](https://github.com/git-lfs/git-lfs/wiki/Limitations) of LFS is that it's bound to a single
 upstream: large files are usually stored in the same location as the
@@ -139,7 +145,7 @@ git-fat and GitMedia.
 Another limitation is that LFS only supports pushing/pulling files
 over HTTP(S), no SSH transfers. LFS uses some tricks to bypass HTTPS
 authentication, fortunately, but this means files cannot be stored
-locally which can make offline work ackward. This might change in the
+locally which can make offline work awkward. This might change in the
 future as there are proposals to add [SSH support](https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md), resumable
 uploads through the [tus.io protocol](https://tus.io/) and other [custom transfer
 protocols](https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md).
@@ -164,13 +170,13 @@ handles this is by storing a symbolic link to the file, stored in
 `.git/annex`. We should probably credit Hess for this innovation,
 however, since LFS's storage layout is obviously inspired by
 git-annex's earlier design. Git-annex's original design introduced all
-sorts of problems however, especially on filesystems lacking symlink
+sorts of problems however, especially on filesystems lacking symbolic link
 support. So Hess has implemented different solutions to this
 problem. Originally, when git-annex detected such a "crippled"
 filesystem, it switched to [direct mode](http://git-annex.branchable.com/direct_mode/) which kept files directly
-in the worktree, while internall committing the symlinks into the Git
+in the worktree, while internally committing the symbolic links into the Git
 repository. This design turned out to be a little confusing to users,
-including myself, who managed to shoot myself in the foot more than
+including myself; I have managed to shoot myself in the foot more than
 once using this system.
 
 Since then, git-annex has adopted a different mode which is, like LFS,
@@ -188,14 +194,14 @@ problems affecting LFS, because of their use of filters. Hess actually
 has [ideas](http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/) on how the smudge interface could be improved. This
 will require changes in Git to avoid buffering the entire file in
 memory, allowing the filter to access the work tree directly, and
-adding extra hooks. A few of those hacks are already performed by
+adding extra hooks. Some of those hacks are already performed by
 git-annex by hacking around the filter interface.
 
 Being more distributed by design, git-annex does not have the same
 "locking" semantics as LFS. Locking a file in git-annex means
 protecting it from changes, so files need to actually be "unlocked" to be
 editable, which might be counter-intuitive to new users. In general,
-git-annex has a few of those unusual quirks and interfaces, that often
+git-annex has some of those unusual quirks and interfaces that often
 come with more powerful software.
 
 And git-annex is much more powerful: it does not only address the
@@ -203,12 +209,12 @@ And git-annex is much more powerful: it does not only address the
 "partial checkouts", i.e. downloading only some of the large files,
 something I find especially useful to manage my video, music, and
 photo collections, as those are too large to fit on my mobile
-devices. Git-annex also has support for location tracking, where
-it knows how many copies of a file exist and where, which is very
-useful for archival purposes. And while LFS is only starting to look
-at other transfer protocols than HTTP, git-annex already supports a
-[large number](http://git-annex.branchable.com/special_remotes/) of services through [special remote protocol](http://git-annex.branchable.com/special_remotes/external/)
-which is fairly easy to implement.
+devices. Git-annex also has support for location tracking, where it
+knows how many copies of a file exist and where, which is useful for
+archival purposes. And while LFS is only starting to look at other
+transfer protocols than HTTP, git-annex already supports a [large
+number](http://git-annex.branchable.com/special_remotes/) of services through [special remote protocol](http://git-annex.branchable.com/special_remotes/external/) which is
+fairly easy to implement.
 
 "Large files" is therefore only scratching the surface of what
 git-annex can do: I have used it to build an [archival system
@@ -222,9 +228,9 @@ USB drives".
 
 Unfortunately, git-annex is not well supported by hosting
 providers. GitLab [used to support it](https://docs.gitlab.com/ee/workflow/git_annex.html), but since it implemented
-LFS, it [dropped support for git-annex completely](https://gitlab.com/gitlab-org/gitlab-ee/issues/1648), citing it was a
-"burden to maintain". Fortunately, thanks to git-annex's flexibility,
-it would eventually be possible to treat [LFS servers as just another
+LFS, it [dropped support for git-annex](https://gitlab.com/gitlab-org/gitlab-ee/issues/1648), citing it was a "burden to
+maintain". Fortunately, thanks to git-annex's flexibility, it would
+eventually be possible to treat [LFS servers as just another
 remote](https://git-annex.branchable.com/todo/LFS_API_support/) which would make git-annex capable of storing files on
 those servers again.
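As an aside, the sharded `OID[0:2]/OID[2:4]/OID` storage layout the article describes is easy to reproduce. A sketch; the helper name is mine, not part of git-lfs:

```python
import hashlib
import os

def lfs_object_path(content: bytes, git_dir: str = ".git") -> str:
    """Sharded path where LFS would store this content:
    OID[0:2]/OID[2:4]/OID, with OID the SHA256 of the content."""
    oid = hashlib.sha256(content).hexdigest()
    return os.path.join(git_dir, "lfs", "objects", oid[:2], oid[2:4], oid)

print(lfs_object_path(b"hello world"))
# .git/lfs/objects/b9/4d/b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
```

Because identical content always yields the same OID, the deduplication mentioned above falls out of this layout for free.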
 

links for projects, missing cap
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 9e193060..dddc81e3 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -8,9 +8,9 @@ work](https://github.com/git/git/blob/master/Documentation/technical/commit-grap
 throughout its history which means storing large files into Git comes
 with a significant and, ultimately, prohibitive performance
 cost. Thankfully, other projects are helping Git address this
-challenge. This article compares how Git LFS and git-annex address
-this problem and should help readers pick the right solution for their
-workflow.
+challenge. This article compares how [Git LFS](https://git-lfs.github.com/) and [git-annex](https://git-annex.branchable.com/)
+address this problem and should help readers pick the right solution
+for their workflow.
 
 The problem with large files
 ============================
@@ -92,7 +92,7 @@ MP3 and ZIP files:
     *.mp3 filter=lfs -text
     *.zip filter=lfs -text
 
-After this configuration, we use git normally: `git add`, `git commit`,
+After this configuration, we use Git normally: `git add`, `git commit`,
 and so on will talk to Git LFS transparently.
 
 The actual files tracked by LFS are copied to a path like

Merge branch 'backlog/git-annex' of anarc.at:repos-private/anarc.at into backlog/git-annex
capitalize git
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 00598348..6f0ae27e 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -1,15 +1,15 @@
 
-Large files with git: LFS and git-annex
+Large files with Git: LFS and git-annex
 =======================================
 
 By design, Git does not handle large, changing files very well. While
 there is work underway to handle large repositories in the form of the
-[commit graph work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), git's internal design has remained
+[commit graph work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), Git's internal design has remained
 surprisingly constant in its history which means storing large files
-into git comes with a significant, and ultimately prohibitive,
-performance cost. Thankfully, other projects are helping git address
+into Git comes with a significant, and ultimately prohibitive,
+performance cost. Thankfully, other projects are helping Git address
 this challenge, although in different ways. This article compares how
-git LFS and git-annex handle this problem and should help readers
+Git LFS and git-annex handle this problem and should help readers
 confronted with this problem in picking the right solution for their
 workflow.
 
@@ -19,13 +19,13 @@ The problem with large files
 Git's problem with large files comes from its design. As our readers
 probably know, Linus Torvalds wrote Git to manage the history of the
 kernel source code, which consists of a large collection of small
-files. Every file is a "blob" in git's internal storage, addressed by
+files. Every file is a "blob" in Git's internal storage, addressed by
 its checksum. A new version of that file will store an entire new
-blob in git's history, with no deduplication between the two
+blob in Git's history, with no deduplication between the two
 contents. The pack file format does offer some binary compression over
 the contents, but in practice this does not offer much help for large
 files as the contents are not contiguous which means that,
-effectively, git is slow at handling large files.
+effectively, Git is slow at handling large files.
 
 There have been different attempts at fixing this in the past. In 2006,
 Linus Torvalds worked on improving the [pack file format](https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/) to reduce
@@ -35,34 +35,34 @@ didn't appear to be worth it anymore".
 
 Then in 2009, [Caca labs](http://caca.zoy.org/) worked on improving `fast-import` and
 `pack-objects` to handle big files specially, an effort that's called
-[git-bigfiles](http://caca.zoy.org/wiki/git-bigfiles). Some of those changes eventually made it into git:
-for example, since [1.7.6](https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/) git will write large files directly to a
+[git-bigfiles](http://caca.zoy.org/wiki/git-bigfiles). Some of those changes eventually made it into Git:
+for example, since [1.7.6](https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/) Git will write large files directly to a
 packfile instead of holding it in memory. But files are still kept
 forever in history. An example of trouble I had to deal with is the
 Debian security tracker, which follows all security issues in the
 entire Debian history in a `data/CVE/list` files, which is around
 360,000 lines for a whopping 18MB. The resulting repository takes
 1.6GB of disk space and a *local* clone takes 21 minutes to perform,
-mostly taken by git resolving deltas. Commit, push, and pull are
+mostly taken by Git resolving deltas. Commit, push, and pull are
 noticeably slower than a regular repository, taking anywhere from a few
 seconds to a minute depending on how old the local copy is. And
 running annotate on that large file can take up to ten minutes. So
 even though that is a simple text file, it's grown large enough to
-cause significant performance in git, which is otherwise known for
+cause significant performance problems in Git, which is otherwise known for
 stellar performance.
 
-Intuitively, the problem here is git needs to copy files into its
+Intuitively, the problem here is Git needs to copy files into its
 internal storage to track them. Third projects therefore typically
-solve the large files problem by taking files out of git. In 2009, git
-evangelist Scott Chacon released [git-media](https://github.com/schacon/git-media), a git filter which
-simply takes large files out of git. Unfortunately, there hasn't been
+solve the large files problem by taking files out of Git. In 2009, Git
+evangelist Scott Chacon released [GitMedia](https://github.com/schacon/git-media), a Git filter which
+simply takes large files out of Git. Unfortunately, there hasn't been
 an official release since then and it's [unclear](https://github.com/alebedev/git-media/issues/15) if the project is
 still maintained. The next effort to come up was [git-fat](https://github.com/jedbrown/git-fat), first
 released in 2012 and still maintained. But neither tool has seen
 massive adoption yet. If I would have to venture a guess, I would say
 it is partly because both require manual, per repository,
 configuration. Both also require a custom server (S3, SCP, Atmos or
-WebDAV for git-media, rsync for git-fat) which limits collaboration as
+WebDAV for GitMedia, rsync for git-fat) which limits collaboration as
 users suddenly need access to another service.
 
 Git LFS
@@ -71,36 +71,36 @@ Git LFS
 That was before GitHub released Git LFS in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/). While the
 actual server-side implementation used by GitHub is closed source,
 they do publish a [test server](https://github.com/git-lfs/lfs-test-server) as an example implementation. Other
-git hosting platforms have also [implemented](https://github.com/git-lfs/git-lfs/wiki/Implementations ) support for the LFS
+Git hosting platforms have also [implemented](https://github.com/git-lfs/git-lfs/wiki/Implementations) support for the LFS
 [API](https://github.com/git-lfs/git-lfs/tree/master/docs/api), including GitLab, Gitea, and BitBucket, something git-fat
-and git-media never achieved.
+and GitMedia never achieved.
 
 After installing LFS, it can be used in any repository by installing
 the right hooks with `git lfs install` then asking LFS to track any
 given file, with `git lfs track large.iso`. This will add the file to
-the `.gitattributes` file which will make git fire the proper LFS
+the `.gitattributes` file which will make Git fire the proper LFS
 filters. It's also possible to add patterns to the `.gitattributes`
-file, naturally. For example, this will make sure git LFS will track
+file, naturally. For example, this will make sure Git LFS will track
 MP3 and ZIP files:
 
     $ cat .gitattributes
     *.mp3 filter=lfs -text
     *.zip filter=lfs -text
 
-After this configuration, we use git normally: `git add`, `git commit`
-and so on will talk to git LFS transparently.
+After this configuration, we use Git normally: `git add`, `git commit`
+and so on will talk to Git LFS transparently.
 
 The design of LFS is simple. Instead of recording the large file
-history, LFS makes git record a "pointer" to the file. A pointer is a
+history, LFS makes Git record a "pointer" to the file. A pointer is a
 text file that looks like this:
 
     version https://git-lfs.github.com/spec/v1
     oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
     size 12345
 
-LFS then uses git's smudge and clean filters to show the correct file
+LFS then uses Git's smudge and clean filters to show the correct file
 to the user. So the worktree actually has the large file checked out,
-but git stores only that small text file which it is able to store
+but Git stores only that small text file which it is able to store
 much more efficiently. The downside, of course, is that large files
 are not revisionned: only the latest version of a file is kept in the
 repository.
@@ -112,8 +112,8 @@ checksum (currently SHA256) of the file. This brings the extra feature
 that multiple copies of the same file in the same repository are
 automatically deduplicated, although in practice this rarely occurs.
 
-This only works for new files you are importing into git, however. If
-a git repository already has large files in its history, LFS can
+This only works for new files you are importing into Git, however. If
+a Git repository already has large files in its history, LFS can
 fortunately "fix" repositories by retroactively rewriting history with
 the [git-lfs-migrate](https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn) command. This has all the normal downsides of
 rewriting history, however - existing clones of such a repository will
@@ -131,7 +131,7 @@ only fetch recent large files (7 days).
 
 The main [limitation](https://github.com/git-lfs/git-lfs/wiki/Limitations) of LFS is that it's bound to a single
 upstream: large files are usually stored in the same location as the
-main git repository. If it is hosted on GitHub, this means a default
+main Git repository. If it is hosted on GitHub, this means a default
 quota of 1GiB storage and bandwidth. GitHub also limits the size of
 individual files to 2GiB but you can purchase additional "packs" to
 expand those quotas at 5$/50GB (storage and bandwidth). This
@@ -141,7 +141,7 @@ structure.
 
 LFS does support hosting on a *different* server - a project could run
 its own LFS server, for example - but this will mean different
-credentials than the main git repository which will complicate user
+credentials than the main Git repository which will complicate user
 onboarding.
 
 Another limitation is that LFS only supports pushing/pulling files
@@ -162,13 +162,13 @@ can be prohibitive on some large datasets.
 git-annex
 =========
 
-The other main player in large file support for git is
+The other main player in large file support for Git is
 [git-annex](https://git-annex.branchable.com/). We [covered the project](http://lwn.net/Articles/418337/) back in 2010, shortly
 after the first release, but it's certainly worth discussing what
 changed in the eight years since Joey Hess launched this ambitious
 project.
 
-Like LFS, git-annex takes large files out of git's history. The way it
+Like LFS, git-annex takes large files out of Git's history. The way it
 handles this is by storing a symbolic link to the file, stored in
 `.git/annex`. We should probably credit Hess for this innovation,
 however, since LFS's storage layout is obviously inspired by this
@@ -177,7 +177,7 @@ sorts of problems however, especially on filesystems lacking symlink
 support. So Hess has implemented different solutions to this
 problem. Originally, when git-annex detected such a "crippled"
 filesystem, it switched to [direct mode](http://git-annex.branchable.com/direct_mode/) which kept files directly
-in the worktree, while internall committing the symlinks into the git
+in the worktree, while internally committing the symlinks into the Git
 repository. This design turned out to be a little confusing to users,
 including myself, who managed to shoot myself in the foot more than
 once using this system.
@@ -241,8 +241,8 @@ Conclusion
 ==========
 
 Git LFS and git-annex are both mature and well maintained programs
-which deal efficiently with large files in git. LFS is especially well
-supported by most git hosting providers and is easier to use, but is
+which deal efficiently with large files in Git. LFS is especially well
+supported by most Git hosting providers and is easier to use, but is
 less flexible than git-annex: you will most likely have to pay
 whatever your hosting provider decides to host your content
 there. It's possible to host your content elsewhere, but that
@@ -250,8 +250,8 @@ basically means running your own server right now.
 
 Git-annex, in comparison, allows you to store your content basically

(Diff truncated)
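The pointer-file format quoted in the diff above is simple enough to generate by hand. A minimal sketch, not the actual git-lfs implementation:

```python
import hashlib

def make_pointer(content: bytes) -> str:
    """Build a spec-v1 LFS pointer for the given bytes, matching the
    three-line format shown in the article."""
    oid = hashlib.sha256(content).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(content)}\n"
    )

print(make_pointer(b"some large binary payload"), end="")
```

In LFS's clean filter this pointer is what actually enters Git's object store, while the real bytes are copied aside; the smudge filter reverses the substitution on checkout.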
finalize first draft, sent to jake
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 7369d17f..d45e56c1 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -2,15 +2,14 @@
 Large files with git: LFS and git-annex
 =======================================
 
-By design, Git does not handle large files very well. While
-there is work underway to handle large repositories through the
-[commit graph work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), git's internal design has remained
-surprisingly constant in its history which means storing large files
-into git comes with a significant, and ultimately prohibitive,
-performance cost. Thankfully, other projects are helping git address
-this challenge, although in different ways. This article compares how
-git LFS and git-annex handle this problem and should help readers
-confronted with this problem in picking the right solution for their
+By design, Git does not handle large files very well. While there is
+work underway to handle large repositories through the [commit graph
+work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), git's internal design has remained surprisingly constant
+throughout its history which means storing large files into git comes
+with a significant and, ultimately, prohibitive performance
+cost. Thankfully, other projects are helping git address this
+challenge. This article compares how git LFS and git-annex address
+this problem and should help readers pick the right solution for their
 workflow.
 
 The problem with large files
@@ -18,14 +17,13 @@ The problem with large files
 
 Git's problem with large files comes from its design. As our readers
 probably know, Linus Torvalds wrote Git to manage the history of the
-kernel source code, which consists of a large collection of small
-files. Every file is a "blob" in git's internal storage, addressed by
-its checksum. A new version of that file will store a new
-blob in git's history, with no deduplication between the two.
-The pack file format does offer some binary compression over
-the repository, but in practice this does not offer much help for large
-files as the contents are not contiguous which limits the compressor's
-performance.
+kernel source code, a large collection of small files. Every file is a
+"blob" in git's internal storage, addressed by its checksum. A new
+version of that file will store a new blob in git's history, with no
+deduplication between the two versions. The pack file format does
+offer some binary compression over the repository, but in practice
+this does not offer much help for large files as the contents are not
+contiguous which limits the compressor's performance.
 
 There have been different attempts at fixing this in the past. In 2006,
 Torvalds worked on improving the [pack file format](https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/) to reduce
@@ -59,52 +57,44 @@ simply takes large files out of git. Unfortunately, there hasn't been
 an official release since then and it's [unclear](https://github.com/alebedev/git-media/issues/15) if the project is
 still maintained. The next effort to come up was [git-fat](https://github.com/jedbrown/git-fat), first
 released in 2012 and still maintained. But neither tool has seen
-massive adoption yet. If I would have to venture a guess, I would say
-it is partly because both require manual, per repository,
-configuration. Both also require a custom server (S3, SCP, Atmos or
-WebDAV for git-media, rsync for git-fat) which limits collaboration as
-users suddenly need access to another service.
+massive adoption yet. If I had to venture a guess, it might be
+because both require manual configuration. Both also require a custom
+server (rsync for git-fat; S3, SCP, Atmos, or WebDAV for git-media)
+which limits collaboration as users need access to another service.
 
 Git LFS
 =======
 
-That was before GitHub released Git LFS in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/). While the
-actual server-side implementation used by GitHub is closed source,
-they do publish a [test server](https://github.com/git-lfs/lfs-test-server) as an example implementation. Other
-git hosting platforms have also [implemented](https://github.com/git-lfs/git-lfs/wiki/Implementations ) support for the LFS
-[API](https://github.com/git-lfs/git-lfs/tree/master/docs/api), including GitLab, Gitea, and BitBucket, something git-fat
-and git-media never achieved.
+That was before GitHub released Git LFS in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/). Like all
+software taking files out of Git, LFS tracks file *hashes* instead of
+file *contents*. So instead of adding large files into git directly,
+LFS adds text files to the git repository, which look like this:
+
+    version https://git-lfs.github.com/spec/v1
+    oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
+    size 12345
+
+LFS then uses git's smudge and clean filters to show the correct
+file in the checkout. Git only stores that small text file, which it
+can do very efficiently. The downside, of course, is that large files
+are not versioned: only the latest version of a file is kept in the
+repository.
 
 After installing LFS, it can be used in any repository by installing
 the right hooks with `git lfs install` then asking LFS to track any
-given file, with `git lfs track large.iso`. This will add the file to
+given file, with `git lfs track`. This will add the file to
 the `.gitattributes` file which will make git fire the proper LFS
 filters. It's also possible to add patterns to the `.gitattributes`
-file, naturally. For example, this will make sure git LFS will track
+file, of course. For example, this will make sure git LFS tracks
 MP3 and ZIP files:
 
     $ cat .gitattributes
     *.mp3 filter=lfs -text
     *.zip filter=lfs -text
 
-After this configuration, we use git normally: `git add`, `git commit`
+After this configuration, we use git normally: `git add`, `git commit`,
 and so on will talk to git LFS transparently.
 
-The design of LFS is simple. Instead of recording the large file
-history, LFS makes git record a "pointer" to the file. A pointer is a
-text file that looks like this:
-
-    version https://git-lfs.github.com/spec/v1
-    oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
-    size 12345
-
-LFS then uses git's smudge and clean filters to show the correct file
-to the user. So the worktree actually has the large file checked out,
-but git stores only that small text file which it is able to store
-much more efficiently. The downside, of course, is that large files
-are not revisionned: only the latest version of a file is kept in the
-repository.
-
 The actual files tracked by LFS are copied to a path like
 `.git/lfs/objects/{OID-PATH}`, where `{OID-PATH}` is a sharded
 filepath of the form `OID[0:2]/OID[2:4]/OID` and where `OID` is the
@@ -116,40 +106,41 @@ This only works for new files you are importing into git, however. If
 a git repository already has large files in its history, LFS can
 fortunately "fix" repositories by retroactively rewriting history with
 the [git-lfs-migrate](https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn) command. This has all the normal downsides of
-rewriting history, however - existing clone of such a repository will
-need to do a hard reset and prune or just perform a fresh clone to
+rewriting history, however - existing clones will have to be reset to
 benefit from the cleanup.
 
 LFS also supports [file locking](https://github.com/git-lfs/git-lfs/wiki/File-Locking), which allows users to claim a
 lock on a file, making it readonly everywhere except on the locking
 repository. This allows users to signal others that they are working
 on a LFS file. Those locks are purely advisory, however, as users can
-remove other users' locks with the `git lfs unlock --force`. This
-feature was introduced in LFS 2.0 and is not supported by all
-servers. LFS also expires large files after a delay (3 days) and will
-only fetch recent large files (7 days).
+remove other users' locks with `git lfs unlock --force`. LFS also
+prunes local copies of large files after a delay (3 days by default)
+and will only fetch large files needed by recent commits (7 days).
 
 The main [limitation](https://github.com/git-lfs/git-lfs/wiki/Limitations) of LFS is that it's bound to a single
 upstream: large files are usually stored in the same location as the
 main git repository. If it is hosted on GitHub, this means a default
-quota of 1GiB storage and bandwidth. GitHub also limits the size
-individual files to 2GiB but you can purchase additional "packs" to
-expand those quotas at 5$/50GB (storage and bandwidth). This
-[upset](https://medium.com/@megastep/github-s-large-file-storage-is-no-panacea-for-open-source-quite-the-opposite-12c0e16a9a91) some users surprised by the fee, especially on the
-bandwidth fees, which were previously hidden in the GitHub cost
-structure.
-
-LFS does support hosting on a *different* server - a project could run
-its own LFS server, for example - but this will mean different
-credentials than the main git repository which will complicate user
-onboarding.
+quota of 1GiB storage and bandwidth, but you can purchase additional
+"packs" to expand those quotas at 5$/50GB (storage and
+bandwidth). GitHub also limits the size of individual files to 2GiB. This
+[upset](https://medium.com/@megastep/github-s-large-file-storage-is-no-panacea-for-open-source-quite-the-opposite-12c0e16a9a91) some users surprised by the bandwidth fees, which were
+previously hidden in GitHub's cost structure.
+
+While the actual server-side implementation used by GitHub is closed
+source, there is a [test server](https://github.com/git-lfs/lfs-test-server) provided as an example
+implementation. Other git hosting platforms have also [implemented](https://github.com/git-lfs/git-lfs/wiki/Implementations )
+support for the LFS [API](https://github.com/git-lfs/git-lfs/tree/master/docs/api), including GitLab, Gitea, and BitBucket,
+something git-fat and git-media never achieved. LFS does support
+hosting on a *different* server - a project could run its own LFS
+server, for example - but this will involve a different set of
+credentials, bringing back the difficult user onboarding that
+affected git-fat and git-media.
 
 Another limitation is that LFS only supports pushing/pulling files
-over HTTP(S), no SSH transfers there. LFS uses some tricks to bypass
-HTTPS authentication, fortunately, but this means files cannot be
-stored locally (using a `file://` URL for example) which can be
-frustrating during testing. There is, however, a [proposal to add
-SSH](https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md) support eventually. Other proposals include work on resumable
+over HTTP(S), no SSH transfers. LFS uses some tricks to bypass HTTPS
+authentication, fortunately, but this means files cannot be stored
+locally, which can make offline work awkward. This might change in the
+future as there are proposals to add [SSH support](https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md), resumable
 uploads through the [tus.io protocol](https://tus.io/) and other [custom transfer
 protocols](https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md).
 
@@ -157,7 +148,7 @@ Finally, LFS can be slow. Every file added to LFS takes up double the
 space on the local filesystem as it is copied in the `.git/lfs/objects`
 storage. The smudge/clean interface is also slow: it works as a pipe
 and will need to read/write the entire file in memory each time which
-can be prohibitive on some large datasets.
+can be prohibitive with files larger than available memory.
 
 git-annex
 =========
@@ -171,7 +162,7 @@ project.
 Like LFS, git-annex takes large files out of git's history. The way it
 handles this is by storing a symbolic link to the file, stored in

(Diff truncated)
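The LFS pointer format and the sharded `.git/lfs/objects` layout described in the diff above can be sketched in a few lines. This is an illustrative approximation only, not LFS's actual code; the helper names are made up:

```python
import hashlib

def make_pointer(content: bytes) -> str:
    """Build the small text file git stores in place of an LFS-tracked blob."""
    oid = hashlib.sha256(content).hexdigest()
    return ("version https://git-lfs.github.com/spec/v1\n"
            f"oid sha256:{oid}\n"
            f"size {len(content)}\n")

def storage_path(pointer: str) -> str:
    """Map a pointer back to the sharded OID[0:2]/OID[2:4]/OID local path."""
    fields = dict(line.split(" ", 1) for line in pointer.splitlines())
    oid = fields["oid"].split(":", 1)[1]
    return f".git/lfs/objects/{oid[:2]}/{oid[2:4]}/{oid}"

pointer = make_pointer(b"hello")
print(pointer.splitlines()[2])      # size 5
print(storage_path(pointer))
```

This also shows why identical copies of a file deduplicate automatically: both produce the same OID, hence the same storage path.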
first review pass, incomplete
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 00598348..7369d17f 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -2,8 +2,8 @@
 Large files with git: LFS and git-annex
 =======================================
 
-By design, Git does not handle large, changing files very well. While
-there is work underway to handle large repositories in the form of the
+By design, Git does not handle large files very well. While
+there is work underway to handle large repositories through the
 [commit graph work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), git's internal design has remained
 surprisingly constant in its history which means storing large files
 into git comes with a significant, and ultimately prohibitive,
@@ -20,15 +20,15 @@ Git's problem with large files comes from its design. As our readers
 probably know, Linus Torvalds wrote Git to manage the history of the
 kernel source code, which consists of a large collection of small
 files. Every file is a "blob" in git's internal storage, addressed by
-its checksum. A new version of that file will store and entire new
-blob in git's history, with no deduplication between the two
-contents. The pack file format does offer some binary compression over
-the contents, but in practice this does not offer much help for large
-files as the contents are not contiguous which means that,
-effectively, git is slow at handling large files.
+its checksum. A new version of that file will store a new
+blob in git's history, with no deduplication between the two.
+The pack file format does offer some binary compression over
+the repository, but in practice this does not offer much help for large
+files as the contents are not contiguous which limits the compressor's
+performance.
 
 There have been different attempts at fixing this in the past. In 2006,
-Linus Torvalds worked on improving the [pack file format](https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/) to reduce
+Torvalds worked on improving the [pack file format](https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/) to reduce
 object duplication between the index and the pack files, but those
 changes were eventually [reverted](https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/) as the "extra object format 
 didn't appear to be worth it anymore".
@@ -36,11 +36,11 @@ didn't appear to be worth it anymore".
 Then in 2009, [Caca labs](http://caca.zoy.org/) worked on improving `fast-import` and
 `pack-objects` to handle big files specially, an effort that's called
 [git-bigfiles](http://caca.zoy.org/wiki/git-bigfiles). Some of those changes eventually made it into git:
-for example, since [1.7.6](https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/) git will write large files directly to a
-packfile instead of holding it in memory. But files are still kept
+for example, since [1.7.6](https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/) git will stream large files directly to a
+packfile instead of holding it all in memory. But files are still kept
 forever in history. An example of trouble I had to deal with is the
 Debian security tracker, which follows all security issues in the
-entire Debian history in a `data/CVE/list` files, which is around
+entire Debian history in a single file, which is around
 360,000 lines for a whopping 18MB. The resulting repository takes
 1.6GB of disk space and a *local* clone takes 21 minutes to perform,
 mostly taken by git resolving deltas. Commit, push, and pull are
 noticeably slower than a regular repository, taking anywhere from a few
 seconds to a minute depending on how old the local copy is. And
 running annotate on that large file can take up to ten minutes. So
 even though that is a simple text file, it's grown large enough to
-cause significant performance in git, which is otherwise known for
+cause significant problems in git, which is otherwise known for
 stellar performance.
 
-Intuitively, the problem here is git needs to copy files into its
+Intuitively, the problem is git needs to copy files into its
 internal storage to track them. Third-party projects therefore typically
 solve the large files problem by taking files out of git. In 2009, git
 evangelist Scott Chacon released [git-media](https://github.com/schacon/git-media), a git filter which

new lens and last order
diff --git a/hardware/camera.mdwn b/hardware/camera.mdwn
index a413972c..f3ddedc2 100644
--- a/hardware/camera.mdwn
+++ b/hardware/camera.mdwn
@@ -140,6 +140,10 @@ Lentilles
   besoin d'un tube approcher du 1x, voir plus bas vue à 420-700$ sur kijiji, payée
   425$, 790$ lozeau. Couvert par un filtre UV B+W ø39mm ("B+W 39 010 UV Haze
   1x E")
+* Fujifilm [23mm f/1.4 R ø62](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf23mmf14_r/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/23mm-f14.htm) ("extraordinary
+  lens"), [fstoppers](https://fstoppers.com/gear/worlds-quickest-lens-review-fuji-xf-23mm-14r-8342) (glowing review), 700-900$ on kijiji,
+  excellent prime lens, useful for portraits, but you have to get
+  close... excellent for night photography, gift from a friend (!!)
 
 ### Vieux kit
 
@@ -195,14 +199,12 @@ Reference
 
 Évidemment, je magasine encore...
 
-Gogosses:
+Adapters:
 
- 1. [Spare cover kit](https://www.bhphotovideo.com/c/product/1263618-REG/fujifilm_16519522_x_t2_cover_kit.html) (yes, I already lost the flash sync terminal
-    cover), 9$USD, B/O
- 2. un *vrai* doubleur, le [Fujinon Teleconverter XF2X TC WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) - un
+ 1. un *vrai* doubleur, le [Fujinon Teleconverter XF2X TC WR](http://www.fujifilm.com/products/digital_cameras/x/fujinon_lens_xf2x_tc_wr/) - un
     vrai doubleur, probablement plus fiable, mais un gros [450$ USD
     chez B&H](https://www.bhphotovideo.com/c/product/1254242-REG/fujifilm_16516271_xf_2x_tc_wr.html) et rien chez Lozeau (juste le 1.4x)
- 3. de la meilleure photo astronomique. peut-être avec un un adapteur
+ 2. de la meilleure photo astronomique. peut-être avec un adaptateur
     à téléscope. [80$USD](https://www.telescopeadapters.com/best-sellers/522-2-ultrawide-true-2-prime-focus-adapter.html) pour un adapteur 2", [exemples plus ou
     moins concluants](https://www.lost-infinity.com/fujifilm-x-t1-2-telescope-adapter/). certains prennent de bonnes poses [sans
     aucun adapteur](https://www.dpreview.com/forums/thread/3656867)
@@ -222,8 +224,6 @@ Lentilles:
 
  1. [35mm f/2 R WR ø43](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf2_r_wr/), [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f2.htm), [fstoppers](https://fstoppers.com/gear/fstoppers-reviews-fujifilm-35mm-f2-wr-158227), bonne
     taille, scellée, 350-400$ sur kijiji , 500$ lozeau
- 1. [23mm f/1.4 R ø62](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf23mmf14_r/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/23mm-f14.htm) ("extraordinary lens"),
-    [fstoppers](https://fstoppers.com/gear/worlds-quickest-lens-review-fuji-xf-23mm-14r-8342) (glowing review), 700-900$ on kijiji
  2. [16-55mm f/2.8 R LM WR ø77](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf16_55mmf28_r_lm_wr/): [Rockwell](http://www.kenrockwell.com/fuji/x-mount-lenses/16-55mm-f28.htm), [Phoblographer](https://www.thephoblographer.com/2015/03/12/review-fujifilm-16-55mm-f2-8-lm-wr-fujifilm-x-mount/), huge
     but real nice, 900-1400$
  3. [35mm f/1.4 R ø52](http://www.fujifilm.ca/products/digital_cameras/x/fujinon_lens_xf35mmf14_r/), [Rockwell](https://www.kenrockwell.com/fuji/x-mount-lenses/35mm-f14.htm) ("extraordinary lens"),
@@ -267,7 +267,10 @@ Acheté:
  5. un doubleur cheap, le [Vivitar 62mm 2.2x](https://www.bhphotovideo.com/c/product/1150442-REG/vivitar_viv_62t_62mm_2_2x_telephoto_attachment.html) à 28$. Le [Bower
     VLB3558 3.5x](https://www.bhphotovideo.com/c/product/700003-REG/Bower_VLB3558_VLB3558_3_5x_Telephoto_Lens.html) semblait intéressant, mais il n'est plus en vente
     chez B&H
-
+ 6. [Spare cover kit](https://www.bhphotovideo.com/c/product/1263618-REG/fujifilm_16519522_x_t2_cover_kit.html) (yes, I already lost the flash sync terminal
+    cover), 9$USD, B/O
+ 7. [Vortex Storm Jacket](https://www.bhphotovideo.com/c/product/602711-REG/Vortex_Media_P_SJ_M_B_Pro_SLR_Storm_Jacket.html) 35$USD - the body is waterproof, but
+    the lenses are not :)
 
 2013-2017 shopping
 ==================

clear out notes, format
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 0e7429fc..00598348 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -1,3 +1,4 @@
+
 Large files with git: LFS and git-annex
 =======================================
 
@@ -256,42 +257,3 @@ double-edged sword and can feel empowering for some users and
 terrifyingly hard for others. Where you stand on the "power-user"
 scale, along with project-specific requirements will ultimately
 determine which solution is the right one for you.
-
-Notes
-=====
-
-git smudge suboptimal:
-https://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/
-
- * commands ran once per file, fixed by supporting long-running
-   support, used by LFS and git-annex
- * files are buffered to memory
- * piping files is inefficient? asked in: http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/#comment-7686e6c6192edfff231c02f53d40f5e8
-
-
-good comparison:
-
-https://gitlab.com/gitlab-org/gitlab-ee/issues/1648#note_27431431
-
-downsides LFS:
-
- * files cannot be dropped from a remote server
- * when download failure, pointer file is present and might be
-   processed by other programs, leading to erroneous results
-
-long-running filters (Lars Schneider)
-https://public-inbox.org/git/1468150507-40928-1-git-send-email-larsxschneider@gmail.com/
-
-clean/smudge filters with direct access:
-https://public-inbox.org/git/1468277112-9909-1-git-send-email-joeyh@joeyh.name/
-
-git-annex features: preferred content, p2p, metadata, gcrypt,
-assistant, adjusted branches, oh my!
-https://git-annex.branchable.com/design/
-
-scalability: https://git-annex.branchable.com/scalability/
-
-- arbitrary large files
-- constant memory usage
-- large number of file (100k), limited by git
-- resumable transfers}

finish todos, need review
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 6c4d8920..0e7429fc 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -1,25 +1,78 @@
 Large files with git: LFS and git-annex
 =======================================
 
-Because of its design, Git does not handle very well large, changing
-files. While there are various patches to handle large repositories,
-{graph stuff} git's design in file storage has remained surprisingly
-unchanged in its history which means storing large, changing files
-into git is still a challenge. Thankfully, other projects are helping
-git address this challenge, although in radically different ways:
-git-annex and git-lfs.
-
-{detail git large files limitations?}
+By design, Git does not handle large, changing files very well. While
+there is work underway to handle large repositories in the form of the
+[commit graph work](https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt), git's internal design has remained
+surprisingly constant in its history which means storing large files
+into git comes with a significant, and ultimately prohibitive,
+performance cost. Thankfully, other projects are helping git address
+this challenge, although in different ways. This article compares how
+git LFS and git-annex handle this problem and should help readers
+confronted with this problem in picking the right solution for their
+workflow.
+
+The problem with large files
+============================
+
+Git's problem with large files comes from its design. As our readers
+probably know, Linus Torvalds wrote Git to manage the history of the
+kernel source code, which consists of a large collection of small
+files. Every file is a "blob" in git's internal storage, addressed by
+its checksum. A new version of that file will store and entire new
+blob in git's history, with no deduplication between the two
+contents. The pack file format does offer some binary compression over
+the contents, but in practice this does not offer much help for large
+files as the contents are not contiguous which means that,
+effectively, git is slow at handling large files.
+
+There have been different attempts at fixing this in the past. In 2006,
+Linus Torvalds worked on improving the [pack file format](https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/) to reduce
+object duplication between the index and the pack files, but those
+changes were eventually [reverted](https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/) as the "extra object format 
+didn't appear to be worth it anymore".
+
+Then in 2009, [Caca labs](http://caca.zoy.org/) worked on improving `fast-import` and
+`pack-objects` to handle big files specially, an effort that's called
+[git-bigfiles](http://caca.zoy.org/wiki/git-bigfiles). Some of those changes eventually made it into git:
+for example, since [1.7.6](https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/) git will write large files directly to a
+packfile instead of holding it in memory. But files are still kept
+forever in history. An example of trouble I had to deal with is the
+Debian security tracker, which follows all security issues in the
+entire Debian history in a `data/CVE/list` files, which is around
+360,000 lines for a whopping 18MB. The resulting repository takes
+1.6GB of disk space and a *local* clone takes 21 minutes to perform,
+mostly taken by git resolving deltas. Commit, push, and pull are
+noticeably slower than a regular repository, taking anywhere from a few
+seconds to a minute depending on how old the local copy is. And
+running annotate on that large file can take up to ten minutes. So
+even though that is a simple text file, it's grown large enough to
+cause significant performance in git, which is otherwise known for
+stellar performance.
+
+Intuitively, the problem here is git needs to copy files into its
+internal storage to track them. Third-party projects therefore typically
+solve the large files problem by taking files out of git. In 2009, git
+evangelist Scott Chacon released [git-media](https://github.com/schacon/git-media), a git filter which
+simply takes large files out of git. Unfortunately, there hasn't been
+an official release since then and it's [unclear](https://github.com/alebedev/git-media/issues/15) if the project is
+still maintained. The next effort to come up was [git-fat](https://github.com/jedbrown/git-fat), first
+released in 2012 and still maintained. But neither tool has seen
+massive adoption yet. If I would have to venture a guess, I would say
+it is partly because both require manual, per repository,
+configuration. Both also require a custom server (S3, SCP, Atmos or
+WebDAV for git-media, rsync for git-fat) which limits collaboration as
+users suddenly need access to another service.
 
 Git LFS
 =======
 
-Git LFS is one solution that quickly gained momentum as GitHub
-released the project as a free software in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/).  While
-the actual server-side implementation used by GitHub is closed source,
+That was before GitHub released Git LFS in [August 2015](https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/). While the
+actual server-side implementation used by GitHub is closed source,
 they do publish a [test server](https://github.com/git-lfs/lfs-test-server) as an example implementation. Other
 git hosting platforms have also [implemented](https://github.com/git-lfs/git-lfs/wiki/Implementations ) support for the LFS
-[API](https://github.com/git-lfs/git-lfs/tree/master/docs/api), including GitLab, Gitea, and BitBucket.
+[API](https://github.com/git-lfs/git-lfs/tree/master/docs/api), including GitLab, Gitea, and BitBucket, something git-fat
+and git-media never achieved.
 
 After installing LFS, it can be used in any repository by installing
 the right hooks with `git lfs install` then asking LFS to track any
@@ -58,15 +111,13 @@ checksum (currently SHA256) of the file. This brings the extra feature
 that multiple copies of the same file in the same repository are
 automatically deduplicated, although in practice this rarely occurs.
 
-{detail smudge/clean process?}
-
 This only works for new files you are importing into git, however. If
 a git repository already has large files in its history, LFS can
 fortunately "fix" repositories by retroactively rewriting history with
-the [git-lfs-migrate](https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn) {, since 2.2.1}. {This has all the downsides
-of rewriting history, however - existing clone of such a repository
-will need to do a hard reset and prune or just perform a fresh clone
-to benefit from the cleanup.}
+the [git-lfs-migrate](https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn) command. This has all the normal downsides of
+rewriting history, however - existing clone of such a repository will
+need to do a hard reset and prune or just perform a fresh clone to
+benefit from the cleanup.
 
 LFS also supports [file locking](https://github.com/git-lfs/git-lfs/wiki/File-Locking), which allows users to claim a
 lock on a file, making it readonly everywhere except on the locking
@@ -74,15 +125,8 @@ repository. This allows users to signal others that they are working
 on a LFS file. Those locks are purely advisory, however, as users can
 remove other users' locks with the `git lfs unlock --force`. This
 feature was introduced in LFS 2.0 and is not supported by all
-servers.
-
-
-
-{can fetch only N days:
-https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-config.5.ronn#fetch-settings
-
-can prune whatever is older than N days (default 3) }
-
+servers. LFS also expires large files after a delay (3 days) and will
+only fetch recent large files (7 days).
 
 The main [limitation](https://github.com/git-lfs/git-lfs/wiki/Limitations) of LFS is that it's bound to a single
 upstream: large files are usually stored in the same location as the
@@ -147,11 +191,10 @@ are immediately effective which means previous file versions are
 discarded and can lead to data loss if users are not careful.
 
 Furthermore, git-annex in v7 mode suffers from some of the performance
-problems of LFS as well, because it uses the smudge filters. Hess
-actually has [ideas](http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/) on how the smudge interface could be improved,
-most notably by not buffering the entire file in memory, allowing the
-filter to read and write the work tree by itself, and adding extra
-hooks.
+problems of LFS, because it uses the smudge filters. Hess actually has
+[ideas](http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/) on how the smudge interface could be improved, most notably
+by not buffering the entire file in memory, allowing the filter to
+read and write the work tree by itself, and adding extra hooks.
 
 Being more distributed by design, git-annex does not have the same
 "locking" semantics as LFS. Locking a file in git-annex means
@@ -174,16 +217,6 @@ at other transfer protocols than HTTP, git-annex supports arbitrary
 support for WebDAV, bittorrent, rsync, S3, and an [impressive number
 of protocols](http://git-annex.branchable.com/special_remotes/).
 
-{preferred content, p2p, metadata, gcrypt, assistant, adjusted
-branches, oh my! https://git-annex.branchable.com/design/
-
-https://git-annex.branchable.com/scalability/
-
-- arbitrary large files
-- constant memory usage
-- large number of file (100k), limited by git
-- resumable transfers}
-
 "Large files" is therefore only scratching the surface of what
 git-annex can do: I have used it myself to build an [archival system
 for remote native communities in northern Québec](http://isuma-media-players.readthedocs.org/en/latest/index.html), while others
@@ -203,57 +236,30 @@ flexibility, it would eventually be possible to treat [LFS servers as
 just another remote](https://git-annex.branchable.com/todo/LFS_API_support/) which would make git-annex capable of storing
 files on those servers again.
 
-{try out v7 with calibre and calendes to see how well it works.}
-
 Conclusion
 ==========
 
-{todo}
+Git LFS and git-annex are both mature and well maintained programs
+which deal efficiently with large files in git. LFS is especially well
+supported by most git hosting providers and is easier to use, but is
+less flexible than git-annex: you will most likely have to pay
+whatever your hosting provider decides to charge to host your
+content there. It's possible to host your content elsewhere, but that
+basically means running your own server right now.
+
+Git-annex, in comparison, allows you to store your content basically
+anywhere. It also uses all sorts of tricks to save disk space and
+improve performance, so it should generally be faster than git
+LFS. Learning git-annex, however, feels like learning git: you always
+feel you are not quite there and you can always learn more. It's a
+double-edged sword and can feel empowering for some users and
+terrifyingly hard for others. Where you stand on the "power-user"
+scale, along with project-specific requirements will ultimately
+determine which solution is the right one for you.
 
 Notes
 =====
 
-https://github.com/git-lfs/git-lfs/blob/master/docs/spec.md
-

(Diff truncated)
source
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index 743aaab6..8760081c 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -598,6 +598,6 @@ Papiers possibles au BEG:
 
 Faut voir quelle "grammage" choisir. C'est pas évident parce que c'est
 parfois en "lbs" et parfois en gramme. La conversion "naive" semble
-être 1.48gsm/lbs, mais selon [ce tableau](http://coastalprint.com/convert_gsm_to_pounds/) le système impérial varie
+être [1.48gsm/lbs](https://bollorethinpapers.com/en/basic-calculators/gsm-basis-weight-calculator), mais selon [ce tableau](http://coastalprint.com/convert_gsm_to_pounds/) le système impérial varie
 selon la sorte de papier (WTF bond, text, cover, bristol, index,
 tag??). À voir.
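The lbs/gsm confusion discussed above comes from the imperial "basis weight" being the weight of 500 sheets of a basis size that differs per paper type, so the gsm-per-pound factor differs too. A rough sketch; the per-type factors are common reference approximations I am supplying, not values from the linked table:

```python
# Approximate gsm-per-lb factors; "text" is the naive 1.48 factor above,
# "bond" is derived from 20 lb bond copy paper being about 75 gsm.
GSM_PER_LB = {
    "text": 1.48,
    "bond": 3.76,
}

def lbs_to_gsm(lbs: float, basis: str = "text") -> float:
    return lbs * GSM_PER_LB[basis]

# The Verso sheet in the shopping notes (80 lb, 118 gsm): "text" basis fits.
print(round(lbs_to_gsm(80, "text")))   # 118
# The HP sheet (40 lbs, 150 gsm): a "bond" basis fits much better.
print(round(lbs_to_gsm(40, "bond")))   # 150
```

This matches the observation that a single conversion factor cannot work across bond, text, cover and the rest.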

more details on print
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index bb30d40a..743aaab6 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -571,9 +571,33 @@ imprimeur local, test avec une copie avant.
 
 Imprimeurs possibles:
 
- * [Clic Imprimerie](https://www.yelp.com/biz/clickimprimerie-montr%C3%A9al-3): pas cher - mais lequel?
+ * [Clic Imprimerie](https://www.yelp.com/biz/clickimprimerie-montr%C3%A9al-3): cheap, apparently; merged and moved, to be
+   followed up
  * [Katasoho](http://katasoho.com/2/): camarades, vert, 5000 Iberville, sur le bord sud de
-   la track
+   la track. No response.
  * [Lozeau](https://lozeau.com/): 20$/calendrier 8.5x11, 5 à 7 jours ouvrables
  * [Jean-Coutu](https://iphoto.jeancoutu.com/fr/Products/Calendars/classic): 20$/calendrier + rabais 30%, identique à Lozeau, 5
    à 7 jours ouvrables, probablement le même labo
+ * Copie Express (St-Denis/Jean-Talon): 26$/calendar (1.10$/sheet,
+   0.85$/print)
+ * [Mardigrafe](http://mardigrafe.com/): contacted
+ * [CEGEP](https://agebdeb.org/impressions/): 0.20$/sheet
+
+On a fait des tests avec du papier 148gsm (gram per square meter) mat,
+mais il est clair que ça sortirait mieux sur du papier lustré (vice
+versa).
+
+Papiers possibles au BEG:
+
+ * [HP laser couleur pour dépliants](https://www.staples.ca/fr/HP-Papier-laser-couleur-pour-d%C3%A9pliants-8-1-2-po-x-11-po-lustr%C3%A9/product_608153_1-CA_2_20001): 30$/150 feuilles
+   (20¢/feuille), 150gsm / 40 lbs, brillance 97
+ * [Staples pour brochures / circulaires](https://www.staples.ca/fr/staples-papier-%C3%A0-brochure-et-circulaire-mat-8-x-11-po/product_SS2006024_1-CA_2_20001#/id='dropdown_610489'): 38$/150 feuilles (25¢/feuille), 48lbs
+   (120gsm ou 170gsm?)
+ * [Verso - Papier laser Sterling](https://www.staples.ca/fr/verso-papier-laser-sterling-num%C3%A9rique-lustr%C3%A9-premium-80-lb-8-5-x-11-po-blanc-bte-3000-feuilles-283618/product_2856893_1-CA_2_20001): 153$/3000 feuilles
+   (5¢/feuille), 118gsm, 16mil, brillance 94
+
+Faut voir quelle "grammage" choisir. C'est pas évident parce que c'est
+parfois en "lbs" et parfois en gramme. La conversion "naive" semble
+être 1.48gsm/lbs, mais selon [ce tableau](http://coastalprint.com/convert_gsm_to_pounds/) le système impérial varie
+selon la sorte de papier (WTF bond, text, cover, bristol, index,
+tag??). À voir.

feynman quote, from twitter
https://twitter.com/ProfFeynman/status/1068522142365413382
diff --git a/fortunes.txt b/fortunes.txt
index 1743121f..bebe767f 100644
--- a/fortunes.txt
+++ b/fortunes.txt
@@ -1097,3 +1097,7 @@ It is capitalism and government which stand for disorder and
 violence. Anarchism is the very reverse of it; it means order without
 government and peace without violence.
                         - Alexander Berkman
+%
+The ultimate test of your knowledge is your capacity to convey it to
+another.
+                        - Richard Feynman

first exhaustive, still incomplete, draft
diff --git a/blog/git-annex.mdwn b/blog/git-annex.mdwn
index 12979160..6c4d8920 100644
--- a/blog/git-annex.mdwn
+++ b/blog/git-annex.mdwn
@@ -1,4 +1,5 @@
-Handling large files with git
+Large files with git: LFS and git-annex
+=======================================
 
 Because of its design, Git does not handle large, changing files very
 well. While there are various patches to handle large repositories,
@@ -8,6 +9,8 @@ into git is still a challenge. Thankfully, other projects are helping
 git address this challenge, although in radically different ways:
 git-annex and git-lfs.
 
+{detail git large files limitations?}
+
 Git LFS
 =======
 
@@ -75,10 +78,10 @@ servers.
 
 
 
-can fetch only N days:
+{can fetch only N days:
 https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-config.5.ronn#fetch-settings
 
-can prune whatever is older than N days (default 3) 
+can prune whatever is older than N days (default 3) }
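The fetch/prune windows in the note above are plain git config knobs. A sketch, with option names and defaults taken on trust from the git-lfs-config(5) man page linked above:

```shell
# Set up a throwaway repo so the config calls have somewhere to live.
git init -q lfs-demo
# Only fetch LFS objects referenced by recent history:
git -C lfs-demo config lfs.fetchrecentrefsdays 7     # refs updated in the last 7 days
git -C lfs-demo config lfs.fetchrecentcommitsdays 0  # extra commits on those refs
# Prune local objects older than the "recent" window plus 3 days:
git -C lfs-demo config lfs.pruneoffsetdays 3
# Then, with git-lfs installed: git lfs fetch --recent ; git lfs prune
```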
 
 
 The main [limitation](https://github.com/git-lfs/git-lfs/wiki/Limitations) of LFS is that it's bound to a single
@@ -108,8 +111,8 @@ protocols](https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.
 Finally, LFS can be slow. Every file added to LFS takes up double the
 space on the local filesystem as it is copied in the `.git/objects`
 storage. The smudge/clean interface is also slow: it works as a pipe
-and will need to read/write the entire file each time which can be
-prohibitive on some large datasets.
+and will need to read/write the entire file in memory each time which
+can be prohibitive on some large datasets.
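The smudge/clean pipe described above is easy to see with a toy filter; here `gzip` stands in for the real LFS helper, purely as an illustration:

```shell
# A git filter is a pair of commands that file content is piped
# through: "clean" on the way into the object store, "smudge" on
# checkout. Every checkout pipes the whole blob through the command.
git init -q filter-demo
git -C filter-demo config filter.demo.clean 'gzip -c'
git -C filter-demo config filter.demo.smudge 'gzip -dc'
echo '*.dat filter=demo' > filter-demo/.gitattributes
```

LFS and git-annex register themselves in exactly this way, which is why the whole-file pipe cost applies to both.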
 
 git-annex
 =========
@@ -122,7 +125,7 @@ project.
 
 Like LFS, git-annex takes large files out of git's history. The way it
 handles this is by storing a symbolic link to the file, stored in
-`.git/annex`. We should probably credit Hes for this innovation,
+`.git/annex`. We should probably credit Hess for this innovation,
 however, since LFS's storage layout is obviously inspired by
 git-annex's earlier design. Git-annex's original design introduced all
 sorts of problems however, especially on filesystems lacking symlink
@@ -141,15 +144,71 @@ LFS, those files will double disk space usage by default. However it
 uses hardlinks between the internal git-annex disk storage and the
 work tree. The downside is, of course, that changes performed on files
 are immediately effective which means previous file versions are
-discarded. This can lead to data loss if users are not careful.
-
-https://git-annex.branchable.com/tips/unlocked_files/
-
-
-git smudge suboptimal: https://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/
-
-Git-annex does allow managing large files into git, but it does much
-more.
+discarded and can lead to data loss if users are not careful.
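The original symlink layout mentioned above can be sketched with plain coreutils; the key format and directory hashing are simplified here and the names are assumptions, not git-annex's exact scheme:

```shell
# The work-tree file becomes a symlink into .git/annex/objects,
# keyed by a checksum of the content (layout simplified).
mkdir -p .git/annex/objects
key="SHA256-s5--$(printf 'hello' | sha256sum | cut -d' ' -f1)"
printf 'hello' > ".git/annex/objects/$key"
ln -sf ".git/annex/objects/$key" bigfile  # what git actually commits is the link
readlink bigfile
```

Git then tracks only the tiny symlink, while the content lives (read-only) under `.git/annex`.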
+
+Furthermore, git-annex in v7 mode suffers from some of the performance
+problems of LFS as well, because it uses the smudge filters. Hess
+actually has [ideas](http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/) on how the smudge interface could be improved,
+most notably by not buffering the entire file in memory, allowing the
+filter to read and write the work tree by itself, and adding extra
+hooks.
+
+Being more distributed by design, git-annex does not have the same
+"locking" semantics as LFS. Locking a file in git-annex means
+protecting it from changes, so files need to actually be "unlocked" to
+be editable, which might be counter-intuitive to new users. In
+general, git-annex has a few of those unusual quirks and interfaces
+that often come with more powerful software.
+
+And git-annex is much more powerful: it not only addresses the
+"large files problem" but goes much further. For example, it supports
+"partial checkouts", i.e. downloading only some of the large files,
+something I find especially useful to manage my video, music, and
+photo collections, as those are too large to fit on my mobile
+devices. I can instead download only my favorite artists or most
+recent photos. Git-annex also has support for location tracking, where
+it knows how many copies of a file exist and where, which is very
+useful for archival purposes. And while LFS is only starting to look
+at other transfer protocols than HTTP, git-annex supports arbitrary
+"special remotes" that are fairly easy to implement. It ships with
+support for WebDAV, bittorrent, rsync, S3, and an [impressive number
+of protocols](http://git-annex.branchable.com/special_remotes/).
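The partial-checkout workflow above looks roughly like this (paths are hypothetical; the commands are from git-annex's manual):

```
$ git annex get photos/2019/      # fetch only this year's photos locally
$ git annex drop music/           # free local space, content stays on other remotes
$ git annex whereis photos/2019/  # location tracking: list known copies of each file
```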
+
+{preferred content, p2p, metadata, gcrypt, assistant, adjusted
+branches, oh my! https://git-annex.branchable.com/design/
+
+https://git-annex.branchable.com/scalability/
+
+- arbitrary large files
+- constant memory usage
+- large number of files (100k), limited by git
+- resumable transfers}
+
+"Large files" is therefore only scratching the surface of what
+git-annex can do: I have used it myself to build an [archival system
+for remote native communities in northern Québec](http://isuma-media-players.readthedocs.org/en/latest/index.html), while others
+have built a [similar system in Brazil](https://github.com/RedeMocambos/baobaxia). It's also used by the
+scientific community in projects like [GIN](https://web.gin.g-node.org/) and [Datalad](https://www.datalad.org/), which
+manages terabytes of data. Another example is [The Japanese
+American Legacy Project](http://www.densho.org/) which manages "upwards of 100 terabytes of
+collections, transporting them from small cultural heritage sites on
+USB drives".
+
+Unfortunately, git-annex is less well supported by hosting
+providers. GitLab [used to support it](https://docs.gitlab.com/ee/workflow/git_annex.html), but since it implemented
+LFS, it [dropped support for git-annex completely](https://gitlab.com/gitlab-org/gitlab-ee/issues/1648), citing it was a
+"burden to maintain" and problems making it work properly with their
+continuous integration system. Fortunately, thanks to git-annex's
+flexibility, it would eventually be possible to treat [LFS servers as
+just another remote](https://git-annex.branchable.com/todo/LFS_API_support/) which would make git-annex capable of storing
+files on those servers again.
+
+{try out v7 with calibre and calendes to see how well it works.}
+
+Conclusion
+==========
+
+{todo}
 
 Notes
 =====
@@ -179,4 +238,43 @@ on smudge:
 only some settings are in .lfsconfig:
 https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-config.5.ronn#lfsconfig
 
-mostly written in go
+lfs mostly written in go
+
+git-media: https://github.com/alebedev/git-media/ no release since 0.1
+in 2009, last commit in September 2015, unclear which is the reference
+version: https://github.com/alebedev/git-media/issues/15 and the
+original version was forked: https://github.com/schacon/git-media
+
+git-bigfiles in 2009: http://caca.zoy.org/wiki/git-bigfiles
+
+some improvements in 2006 in the pack file format (93821bd97) to
+reduce object duplication, but they were eventually reverted (726f852b0)
+
+git-fat: https://github.com/jedbrown/git-fat
+
+
+
+git smudge suboptimal:
+https://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/
+
+ * commands are run once per file, fixed by long-running filter
+   support, used by LFS and git-annex
+ * files are buffered to memory
+ * piping files is inefficient? asked in: http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/#comment-7686e6c6192edfff231c02f53d40f5e8
+
+
+good comparison:
+
+https://gitlab.com/gitlab-org/gitlab-ee/issues/1648#note_27431431
+
+downsides LFS:
+
+ * files cannot be dropped from a remote server
+ * when a download fails, the pointer file is still present and might
+   be processed by other programs, leading to erroneous results
+
+long-running filters (Lars Schneider)
+https://public-inbox.org/git/1468150507-40928-1-git-send-email-larsxschneider@gmail.com/
+
+clean/smudge filters with direct access:
+https://public-inbox.org/git/1468277112-9909-1-git-send-email-joeyh@joeyh.name/

status update again
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index 3ee07592..bb30d40a 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -468,27 +468,21 @@ I did more work on the LaTeX module. The author provided
 fixes that do the bulk of the work and I was able to put together a
 first draft!
 
-Things left to do:
+Checklist:
 
 * confirm the dates (see above, done)
- * check dates:
+ * check dates: (done)
   * ... of the time changes (done)
   * ... of all the others? (let's say yes)
-* add the astronomical events
+ * add the astronomical events (done)
 * establish the content of the last page
   * author's featured photo (done)
   * thanks to the reviewers (done)
   * explanation of the dates (done)
   * project summary (done)
-   * QR-code link to this page? maybe in [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode? or
-     to the gallery or a more permanent but public URL...
-   * date, place (done, to be updated)
+   * date, place (done)
   * astronomical explanations (UTC-4 dates, done)
   * description of the photos (done)
- * choice of paper
- * choice of binding technique (spirals at UQAM?)
- * printing a test proof
- * correcting a proof
 * final choice of photos:
   * Cover: ok, DSCF2561.jpg (wall)
   * January: ok, DSCF0879.jpg (bread and roses), with lightroom
@@ -507,6 +501,17 @@ Things left to do:
   * November: ok, brighten? snow contrast?
   * December: ok, DSCF7823.jpg, woodpecker.
 
+Remaining tasks:
+
+ * make a landing page for the project
+ * point the link in the colophon (with a qr-code, in
+   [halftone](https://jsfiddle.net/lachlan/r8qWV/) mode) to the landing page
+ * fix the print date in the colophon
+ * choice of paper
+ * choice of binding technique (spirals at UQAM?)
+ * printing a test proof
+ * correcting a proof
+
 Remaining upstream bugs (reported):
 
 * fix the September month overflowing (fixed, put the notes back)

wtf
diff --git a/communication/photo.mdwn b/communication/photo.mdwn
index 3a910d25..3ee07592 100644
--- a/communication/photo.mdwn
+++ b/communication/photo.mdwn
@@ -472,10 +472,9 @@ Things left to do:
 
 * confirm the dates (see above, done)
 * check dates:
-   * remove Nanomonestotse?
- * add the astronomical events
   * ... of the time changes (done)
   * ... of all the others? (let's say yes)
+* add the astronomical events
 * establish the content of the last page
   * author's featured photo (done)
   * thanks to the reviewers (done)

Archival link:

The above link creates a machine-readable RSS feed that can be used to easily archive new changes to the site. It is used by internal scripts to do sanity checks on new entries in the wiki.
