TLPL; j'ai changé de logiciel pour la gestion de mon blog, de Drupal à Ikiwiki.

TLDR; I have changed my blog from Drupal to Ikiwiki. The old site will continue operating for a while to give feed aggregators a chance to catch this article. It will also give the Internet Archive time to catch up with the static stylesheets (it turns out it doesn't like Drupal's CSS compression at all!). An archive will therefore remain available on the Internet Archive for people who miss the old stylesheet.

I have redirected the URL to the new blog location. This will be my last blog post written on Drupal, and all new content will be available at the new URL. RSS feed URLs should not change.

  1. Why
  2. What
  3. When
  4. Who
  5. How
    1. MySQL commandline
    2. Views export
    3. Python script
      1. Generating dump
      2. Calling the conversion script
      3. Files and old URLs
      4. Remaining issues


I have migrated away from Drupal because it is basically impossible to upgrade my blog from Drupal 6 to Drupal 7. Or if it is possible, I'll have to redo the whole freaking thing again when Drupal 8 comes along.

And frankly, I don't really need Drupal to run a blog. A blog was originally a really simple thing: a web log, a set of articles written on the corner of a table. Now with Drupal, I can add ecommerce, a photo gallery and whatnot to my blog, but why would I do that? And why does it need to be a dynamic CMS at all, if I get so few comments?

So I'm switching to ikiwiki, for the following reasons:

Migrating will mean abandoning the barlow theme, which was seeing a declining usage anyways.


So what should be exported, exactly? There's a bunch of crap in the old blog that I don't want: users, caches, logs, "modules", and the list goes on. Maybe it's better to create a list of what I need to extract:


I had planned to do this before summer 2015, but it turned out to be fairly easy and fun, so I spent two evenings working on a script on February 5th and 6th, and finally turned off the Drupal site on Monday, February 9th.


Well, me, who else? You probably really don't care about that, so let's get to the meat of it.


How to perform this migration... There are multiple paths:

Both approaches had issues, and I found a third way: talk directly to mysql and generate the files directly, in a Python script. But first, here are the two previous approaches I know of.

MySQL commandline

LeLutin switched using MySQL queries, although he doesn't specify how the content itself was migrated. Importing the comments is done with this script:

echo "select n.title, concat('| [[!comment   format=mdwn|| username=\"',, '\"|| ip=\"', c.hostname, '\"|| subject=\"', c.subject, '\"|| date=\"', FROM_UNIXTIME(c.created), '\"|| content=\"\"\"||', b.comment_body_value, '||\"\"\"]]') from node n, comment c, field_data_comment_body b where n.nid=c.nid and c.cid=b.entity_id;" | drush sqlc | tail -n +2 | while read line; do if [ -z "$i" ]; then i=0; fi; title=$(echo "$line" | sed -e 's/[    ]\+|.*//' -e 's/ /_/g' -e 's/[:(),?/+]//g'); body=$(echo "$line" | sed 's/[^|]*| //'); mkdir -p ~/comments/$title; echo -e "$body" > ~/comments/$title/comment_$i._comment; i=$((i+1)); done

Kind of ugly, but beats what I had before (which was "nothing").

I do think it is the right direction to take: simply talk to the MySQL database, maybe with a native Python script. I know the Drupal database schema pretty well (still! this is D6 after all) and it's simple enough that this should just work.
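A minimal sketch of that idea, assuming a local copy of the database and the stock D6 schema (the node and node_revisions tables); this is an illustration, not the actual conversion script:

```python
import re
from datetime import datetime, timezone

def node_to_mdwn(title, created, body):
    """Render a D6 node row as an ikiwiki page (filename, content)."""
    # rough slug, in the spirit of what pathauto would generate
    slug = re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')
    date = datetime.fromtimestamp(created, tz=timezone.utc)
    content = '[[!meta title="%s"]]\n' % title.replace('"', '&quot;')
    content += '[[!meta date="%s"]]\n\n' % date.strftime('%Y-%m-%d %H:%M:%S')
    return slug + '.mdwn', content + body

def export_blog(db):
    """Dump all published blog nodes, given an open MySQL connection.

    The join reflects the stock D6 schema: the current body lives in
    node_revisions, keyed by node.vid."""
    with db.cursor() as cur:
        cur.execute("""SELECT n.title, n.created, r.body
                       FROM node n JOIN node_revisions r ON n.vid = r.vid
                       WHERE n.status = 1 AND n.type = 'blog'""")
        for title, created, body in cur:
            filename, content = node_to_mdwn(title, created, body)
            with open(filename, 'w') as f:
                f.write(content)
```

export_blog would be handed a connection from whatever MySQL driver is at hand (MySQLdb, pymysql, ...); node_to_mdwn is a pure function and easy to test on its own.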

Views export

screenshot of views 2.x

mvc recommended views data export on LeLutin's blog. Unfortunately, my experience with the views export interface has been somewhat mediocre so far. Yet another reason why I don't like using Drupal anymore is this kind of obtuse dialog:

I clicked through those for about an hour to get JSON output that turned out to be provided by views bonus instead of views_data_export. And confusingly enough, the path and format_name fields are null in the JSON output (whyyy!?). views_data_export unfortunately only supports XML, which hardly seems better than SQL for structured data, especially considering I am going to write a conversion script anyways.

Basically, it doesn't seem like any amount of views mangling will provide me with what I need.

Nevertheless, here's the failed-export-view.txt that I was able to come up with, may it be useful for future freedom fighters.

Python script

I ended up making a fairly simple Python script to talk directly to the MySQL database.

The script exports only nodes and comments, and nothing else. It makes a bunch of assumptions about the structure of the site, and is probably only going to work if your site is a simple blog like mine, but could probably be improved significantly to encompass larger and more complex datasets. History is not preserved, so no interaction with git is performed.

Generating dump

First, I imported the MySQL dump file into my local MySQL server for easier development. It is 13.9MiB!!

mysql -e 'CREATE DATABASE anarcatblogbak;'
ssh "cd ; drush sql-dump" | pv | mysql anarcatblogbak

I decided not to import revisions. The majority (70%) of the content has 1 or 2 revisions, and those with two revisions are likely just from when the node was actually published, with minor changes. ~80% have 3 revisions or less, 90% have 5 or less, 95% have 8 or less, and 98% have 10 or less. Only 5 articles have more than 10 revisions, with two having the maximum of 15 revisions.

Those stats were generated with:

SELECT title, COUNT(vid) FROM anarcatblogbak.node_revisions GROUP BY nid;

Then I threw the output into a CSV spreadsheet (thanks to mysql-workbench for the easy export), added a column numbering the rows (B1=1, B2=B1+1), another generating percentages (C1=B1/count(B$2:B$218)), and generated a simple graph with that. There were probably cleaner ways of doing that with R, and I broke my promise to never use a spreadsheet again, but then again it was Gnumeric and it was just to get a rough idea.
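The spreadsheet gymnastics above could also be done in a few lines of Python: given the list of per-node revision counts, this computes the fraction of nodes with at most N revisions (the sample numbers below are made up for illustration):

```python
from collections import Counter

def revision_cdf(counts):
    """Map each revision count N to the fraction of nodes
    having N revisions or fewer."""
    total = len(counts)
    dist = Counter(counts)       # revision count -> number of nodes
    cdf, seen = {}, 0
    for n in sorted(dist):
        seen += dist[n]
        cdf[n] = seen / total
    return cdf

# made-up sample: five nodes with 1, 1, 2, 3 and 5 revisions
print(revision_cdf([1, 1, 2, 3, 5]))
# → {1: 0.4, 2: 0.6, 3: 0.8, 5: 1.0}
```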

There are 196 articles to import, with 251 comments, which means an average of about 1.3 comments per article (not much!). Unpublished articles (5!) are completely ignored.

Summaries are also not imported as such (<!--break--> comments are ignored) because ikiwiki doesn't support post summaries.

Calling the conversion script

The script is in It is called with:

./ -u anarcatblogbak -d anarcatblogbak blog -vv

The -n and -l1 flags were used for the first tests as well. Use this command to generate HTML from the result without having to commit and push everything:

ikiwiki --plugin meta --plugin tag --plugin comments --plugin inline  . ../

More plugins are of course enabled in the blog; see the setup file for more information, or just enable plugins as needed to unbreak things. Use the --rebuild flag on subsequent runs. The actual invocation I use is more like:

ikiwiki --rebuild --no-usedirs --plugin inline --plugin calendar --plugin postsparkline --plugin meta --plugin tag --plugin comments --plugin sidebar  . ../

I had problems with dates, but it turns out that I wasn't setting dates in redirects... Instead of doing that, I started adding a "redirection" tag that gets ignored by the main page.

Files and old URLs

The script should keep the same URLs, as long as pathauto is enabled on the site. Otherwise, some logic should be easy to add to point to node/N.
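That fallback logic could be as simple as a lookup in the D6 url_alias table (its src and dst columns), defaulting to the canonical path; a hypothetical sketch, not part of the actual script:

```python
def node_url(nid, aliases):
    """Public path for a node: its pathauto alias when one exists,
    otherwise the canonical node/N path. `aliases` would be loaded
    once from the D6 url_alias table (src -> dst)."""
    src = 'node/%d' % nid
    return aliases.get(src, src)

# hypothetical aliases, as pathauto would have generated them
aliases = {'node/123': 'blog/2015-02-05-example-post'}
print(node_url(123, aliases))  # the alias
print(node_url(999, aliases))  # falls back to node/999
```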

To redirect to the new blog, the rewrite rules on the original blog should be as simple as:

Redirect /

When we're sure:

Redirect permanent /

Now, on the new blog, some magic needs to happen for files. Both /files and /sites/ need to resolve properly. We can't use symlinks because ikiwiki drops symlinks on generation.

So I'll just drop the files in /blog/files directly; the actual migration is:

cp $DRUPAL/sites/ $IKIWIKI/blog/files
rm -r .htaccess css/ js/ tmp/ languages/
rm foo/bar # wtf was that.
rmdir *
sed -i 's#/sites/' blog/*.mdwn
sed -i 's#' blog/*.mdwn
chmod -R -x blog/files
sudo chmod -R +X blog/files

A few pages to test images:

There are some pretty big files in there, 10-30MB MP3s - but those are already in this wiki! so do not import them!

Running fdupes on the result helps find oddities.

The meta guid directive is used to keep aggregators from finding duplicate feed entries. I tested it with Liferea, but it may freak out some other aggregators.
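For reference, that directive sits at the top of each imported page and pins the feed entry's identifier to the old node URL (the URL below is illustrative):

```
[[!meta guid="http://example.com/node/123"]]
```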

Remaining issues

More progress information in the script itself.

indieweb / fedweb

Hi Antoine,

Maintaining a personal Drupal site is no small feat...

Would you be interested in a small get-together Wednesday evening to discuss indieweb and fedweb? (and my early notes on - also editable off-line, yay!)

Actually, I'm also thinking about the Smallest Federated Wiki and the migration of to a Single Page App - and how it compares to ikiwiki. In short, I think there's a way to federate all sorts of things, and I'd like to hear your impressions on all that.

See you soon!

Comment by robin
hi robin! yes, I'm interested

hi robin!

yes, I'm interested, but unfortunately I can't be there. I wish you all a good meetup...

Comment by anarcat
I think I'm going to move to Jekyll. Lots of people have written conversion scripts, including from drupal, which makes me think it shouldn't be too bad. My first ever migration away from drupal!
Comment by mvc
about jekyll

i think jekyll is great! however, it is more geek-oriented than ikiwiki is from my point of view (believe it or not), as ikiwiki has a web interface and supports comments (such as this one) that ordinary users can input without knowledge of git. it is also somewhat integrated with github, a proprietary software company whose silos i try to avoid..

but yeah, had i known there was a migration script for jekyll, i might have worked on improving that to support ikiwiki (so comments) and more... oh well, it was fun writing python. :)

Comment by anarcat []
Comments on this page are closed.
Created . Edited .