I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu-based configuration.

  1. Why
  2. How
    1. Other useful tasks
    2. Live access to a running test
    3. Unification with libvirt
    4. Boot time optimizations
      1. Grub
      2. systemd-networkd
  3. Nitty-gritty details no one cares about
    1. Fixing hang in sbuild cleanup
    2. Digression on the diversity of VM-like things
    3. pbuilder vs sbuild
  4. Who

Why

I want to use qemu mainly because it provides better isolation than a chroot. I sponsor packages sometimes and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt.

I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, vagrant, GNOME Boxes, etc. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots...

I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), unshare (with schroot --chroot-mode=unshare), or whatever: I didn't feel those offer the level of isolation that is provided by qemu.

The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.

How

Basically, you need this:

sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable-autopkgtest-amd64.img unstable https://deb.debian.org/debian
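The same pattern works for other releases; for example, to also keep a bookworm image around (a sketch, reusing the same mirror and naming convention as above):

sudo sbuild-qemu-create -o /srv/sbuild/qemu/bookworm-autopkgtest-amd64.img bookworm https://deb.debian.org/debian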

Then, to make sbuild use this by default, add this to ~/.sbuildrc:

# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-autopkgtest-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-autopkgtest-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';

Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about this, with something like:

# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @_qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-autopkgtest-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-autopkgtest-%a.img'];

This configuration will:

  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be default)
  4. tell autopkgtest to use qemu for builds and for tests
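With all of the above in place, a build is then just a regular sbuild invocation; a minimal sketch (package, version and architecture are placeholders):

# build the package in a throwaway copy of the unstable VM image
sbuild -d unstable --arch=amd64 hello_2.10-3.dsc

If I read the sbuild documentation right, the -d and --arch values are also what fills in the %r and %a placeholders in the image path above.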

Note that the VMs created by sbuild-qemu-create have an unlocked root account with an empty password.

Other useful tasks

Note that some of the commands below (namely the ones depending on sbuild-qemu-boot) assume you are running Debian 12 (bookworm) or later.
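As a rough sketch (using the image path from the setup above), the kind of maintenance tasks I have in mind look like this:

# refresh the packages inside the base image (may need sudo, depending on who owns the image)
sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-autopkgtest-amd64.img
# boot the image and get an interactive console on it (this is the part that needs bookworm or later)
sbuild-qemu-boot /srv/sbuild/qemu/unstable-autopkgtest-amd64.img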

And yes, this is all quite complicated and could be streamlined a little, but that's what you get when you have years of legacy and just want to get stuff done. It seems to me autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there.

Maybe because the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).

Live access to a running test

When autopkgtest starts a VM, it uses this funky qemu commandline:

qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm

... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with the following settings (paths are relative to a temporary directory, /tmp/autopkgtest-qemu.w1mlh54b/ in the above example):

  - 4096MB of RAM and 2 CPUs
  - a virtio network card, with port 10022 on the host forwarded to port 22 in the guest
  - a virtio random number generator fed from /dev/urandom
  - a qemu monitor listening on the monitor unix socket
  - two serial consoles exposed as unix sockets
  - a 9p shared folder backed by the shared directory
  - a copy-on-write overlay.img as the root disk
  - KVM acceleration

In other words, it's possible to access the VM with:

nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS2

The nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.
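The same temporary directory also holds the qemu monitor socket, which is occasionally useful to poke at the VM; a minimal sketch (same example path, standard qemu monitor commands):

# attach to the qemu monitor
nc -U /tmp/autopkgtest-qemu.w1mlh54b/monitor
# then, at the (qemu) prompt, things like:
#   info status
#   system_powerdown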

Unification with libvirt

Those images created by autopkgtest can actually be used by libvirt to boot real, fully operational battle stations, sorry, virtual machines. But it needs some tweaking.

First, we need a snapshot image to work with, because we don't want libvirt to work directly on the pristine images created by autopkgtest:

  sudo qemu-img create -f qcow2 -o backing_file=/srv/sbuild/qemu/unstable-autopkgtest-amd64.img,backing_fmt=qcow2  /var/lib/libvirt/images/unstable-autopkgtest-amd64.img 10G
  sudo chown libvirt-qemu '/var/lib/libvirt/images/unstable-autopkgtest-amd64.img'

Then this VM can be adopted fairly normally in virt-manager. It may also be possible to set that up directly through the libvirt XML, but I haven't quite figured that out.
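For the command-line inclined, virt-install can probably do that adoption too; a minimal, untested sketch (name, sizing and OS variant are arbitrary):

virt-install \
  --name unstable-autopkgtest \
  --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/unstable-autopkgtest-amd64.img,format=qcow2,bus=virtio \
  --import --os-variant debiantesting \
  --noautoconsole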

One twist I found is that the "normal" networking doesn't seem to work anymore, possibly because I messed it up with vagrant. Using the bridge doesn't work either out of the box, but that can be fixed with the following sysctl changes:

net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0

That trick was found in this good libvirt networking guide.
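To make those settings persistent across reboots, one way (a sketch, the drop-in file name is arbitrary) is to ship them as a sysctl.d fragment:

cat <<EOF | sudo tee /etc/sysctl.d/80-bridge-nf-call.conf
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
EOF
sudo sysctl --system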

Finally, networking should work transparently inside the VM now. To share files, autopkgtest expects a 9p filesystem called sbuild-qemu. It might be difficult to get it just right in virt-manager, so here's the XML:

<filesystem type="mount" accessmode="passthrough">
  <source dir="/home/anarcat/dist"/>
  <target dir="sbuild-qemu"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</filesystem>

The above shares the /home/anarcat/dist folder with the VM. Inside the VM, it will be mounted because there's this /etc/fstab line:

sbuild-qemu /shared 9p trans=virtio,version=9p2000.L,auto,nofail 0 0

By hand, that would be:

mount -t 9p -o trans=virtio,version=9p2000.L sbuild-qemu /shared

I probably forgot something else important here, but surely I will remember to put it back here when I do.

Note that this at least partially overlaps with hosting.

Boot time optimizations

Grub

echo 'GRUB_TIMEOUT=1' > /etc/default/grub.d/grub_timeout.cfg
update-grub

systemd-networkd

In /etc/systemd/network/ether.network:

[Match]
Type=ether
# Could also be Name=eth0 or Name=!lo

[Network]
DHCP=yes
EmitLLDP=true

Then to switch:

# WARNING: only on a console that will survive network shutdown!
systemctl disable --now networking.service ; \
systemctl enable --now systemd-networkd
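A quick sanity check after the switch (assuming the interface picks up a DHCP lease):

# links should eventually show up as "routable"
networkctl status
# and carry an address
ip addr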

Nitty-gritty details no one cares about

Fixing hang in sbuild cleanup

I'm having a hard time making heads or tails of this, but please bear with me.

In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyways. This behavior seems to be handled by the internal "Session Purged" parameter.

At least in lib/Sbuild/Build.pm, we can see this:

my $is_cloned_session = (defined ($session->get('Session Purged')) &&
             $session->get('Session Purged') == 1) ? 1 : 0;

[...]

if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}

The schroot builder defines that parameter as:

    $self->set('Session Purged', $info->{'Session Purged'});

... which is ... a little confusing to me. $info is:

my $info = $self->get('Chroots')->get_info($schroot_session);

... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there...

ChrootUnshare.pm is way more explicit:

$self->set('Session Purged', 1);

I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least certainly with things that clean up after themselves. Right?

For some reason, before I added this line to my configuration:

$purge_build_deps = 'never';

... the "Cleanup" step would just completely hang. It was quite bizarre.

Digression on the diversity of VM-like things

There are a lot of different virtualization solutions one can use (e.g. Xen, KVM, Docker or Virtualbox). I have also found libguestfs to be useful to operate on virtual images in various ways. Libvirt and Vagrant are also useful wrappers on top of the above systems.

In particular, there are a lot of different tools which use Docker, virtual machines, or some other isolation stronger than a chroot to build packages; several of the alternatives I am aware of were already mentioned above (conbuilder, whalebuilder, docker-buildpackage, debspawn, and so on).

Take, for example, Whalebuilder, which uses Docker to build packages instead of pbuilder or sbuild. Docker provides more isolation than a simple chroot: in whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind there are limitations to Docker's security and that pbuilder and sbuild do build under a different user which will limit the security issues with building untrusted packages.

On the upside, some of those things are being fixed: whalebuilder is now an official Debian package (whalebuilder) and has added the ability to pass custom arguments to dpkg-buildpackage.

None of those solutions (except the autopkgtest/qemu backend) are implemented as an sbuild plugin, which would greatly reduce their complexity.

I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well, so I switched to Vagrant as a de facto standard to build development environment machines, but I'm returning to Qemu because it shares a backend with KVM and can be used to host longer-running virtual machines through libvirt.

The great thing now is that autopkgtest has good support for qemu and sbuild has bridged the gap and can use it as a build backend. I had originally found a few bugs in that setup, but all of them are now fixed.

So we have unification! It's possible to run your virtual machines and Debian builds using a single VM image backend storage, which is no small feat, in my humble opinion. See the sbuild-qemu blog post for the announcement.

Now I just need to figure out how to merge Vagrant, GNOME Boxes, and libvirt together, which should be a matter of placing images in the right place... right? See also hosting.

pbuilder vs sbuild

I was previously using pbuilder and switched in 2017 to sbuild. AskUbuntu.com has a good comparison of pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds, and it's written in Perl instead of shell.

My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (works with sbuild as well now, if the _apt user is present), and setting up COW semantics in sbuild (can't just plug cowbuilder there, need to configure overlayfs or aufs, which was non-trivial in Debian jessie).

Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays.

I was ultimately convinced by stapelberg's post on the topic which shows how much simpler sbuild really is...

Who

Thanks lavamind for the introduction to the sbuild-qemu package.

sbuild-qemu-boot
I spotted that 'sbuild-qemu-boot' was added in 0.83, which looks to provide console access to the VM. Though I've not had a chance to experiment with it yet, it might help with two of the tasks you mention in your "Remaining work" section.
Comment by Nick Brown
Isn't a full VM too much?

Why not a container?

While the article is fascinating and useful, I find it overwhelming to set all of this up just to build a package.

You can obtain the same level of security/isolation, with 1/100 of the effort.

Am I wrong?

Comment by Antenore
why a VM

Why not a container?

A "container" doesn't actually exist in the Linux kernel. It's a hodge-podge collection of haphazard security measures that are really hard to get right. Some do, most don't.

Besides, which container are you referring to? I know of unshare, LXC, LXD, Docker, podman... it can mean so many things that it actually loses its meaning.

I find Qemu + KVM to be much cleaner, and yes, it does provide a much stronger security isolation than a container.

Comment by anarcat
Re: why a VM

Thanks for your answer. In general, yes, a VM is more secure as the system resources are separated; in that sense you're right. My main point was in terms of effort. Playing with cgroups, namespaces, and the selinux beast is surely haphazard, but nobody does it manually. The typical use case is a CI/CD pipeline, where the developer just chooses the base image to use. It's just a few lines of JSON or YAML and we are done. There's no need to update the system, configure services, keep an eye on the resources, etc.

Again, how you do it is amazing and clean, but surely not a scalable solution.

I personally use a mix of different solutions based on customer needs; often it's podman or Docker, but not always.

Comment by Antenore
Re: why a VM

My main point was in terms of effort.

But what effort do you see in maintaining a VM image exactly?

Playing with cgroups, namespaces, and the selinux beast is surely haphazard, but nobody does it manually.

The typical use case is a CI/CD pipeline, where the developer just chooses the base image to use.

(I also use Docker to run GitLab CI pipelines, for the record, but that's not the topic here.)

See, that's the big lie of containers. I could also just "choose a base image" for a VM, say from Vagrant boxes or whatever. I don't do that, because I like to know where the heck my stuff comes from. I pull it from official Debian mirrors, so I know exactly what's in there.

When you pull from Docker hub, you have a somewhat dubious trace of where those images come from. We (Debian members) maintain a few official images, but I personally find their build process quite convoluted and confusing.

It's just a few lines of JSON or YAML and we are done.

See, the funny thing right there is I don't even know what you're talking about here, and I've been deploying containers for years. Are you referring to a Docker Compose YAML file? Or a Kubernetes manifest? Or the container image metadata? Are you actually writing that by hand!?

That doesn't show me how you set up security in your containers, nor the guarantees it offers. Do you run the containers as root? Do you enable user namespaces? selinux or apparmor? What kind of seccomp profile will that build need?

Those are all questions I'd need to have answered if I'd want any sort of isolation even remotely close to what a VM offers.

There's no need to update the system, configure services, keep an eye on the resources, etc.

You still need to update the image. I don't need to configure services or keep an eye on resources in my model either.

Again, how you do it is amazing and clean, but surely not a scalable solution.

It seems you're trying to sell me on the idea that containers are great and scale better than VMs for the general case, in an article where I specifically advise users to use a VM for the specific case of providing stronger isolation for untrusted builds. I don't need to scale those builds to thousands of builds a day (but I will note that the Debian buildds have been doing this for a long time without containers).

I personally use a mix of different solutions based on customer needs; often it's podman or Docker, but not always.

Same, it's not one size fits all. The topic here is building Debian packages, and I find qemu to be a great fit.

I dislike containers, but it's sometimes the best tool for the job. Just not in this case.

Comment by anarcat
Why a VM

Why not a container?

You can obtain the same level of security/isolation, with 1/100 of the effort.

Containers can have a good level of security/isolation, but VM isolation is still stronger.

In any case, two clear advantages that VMs have over containers are that (1) one can test entire systems, which (2) may also have a foreign architecture. There are also a number of other minor advantages to using QEMU of course, e.g. snapshotting.

For example, I maintain the keyutils package, and in the process have discovered architecture-specific bugs in the kernel, for architectures I don't have physical access to. I needed to run custom kernels to debug these, and I can't do that with containers or on porterboxes.

As a co-maintainer of scikit-learn, I've also discovered a number of upstream issues in scikit-learn and numpy for the architectures that upstreams can't/don't really test in CI (e.g. 32-bit ARM). I've run into all kinds of issues with porterboxes (e.g.: not enough space), which I don't have with a local image.

So in my case, there's no way around sbuild-qemu and autopkgtest-virt-qemu anyway. And (echoing anarcat here) on the plus side: KVM + QEMU just feel much cleaner for isolation.

If host=guest arch and KVM is enabled, the emulation overhead is negligible, and I guess the boot process could be sped up with a trick or two. Most of the time is spent in the BIOS or UEFI environment.

Comment by Christian Kastner
optimizing qemu

If host=guest arch and KVM is enabled, the emulation overhead is negligible, and I guess the boot process could be sped up with a trick or two. Most of the time is spent in the BIOS or UEFI environment.

Do say more about this! I would love to get faster bootup, that's the main pain point right now. It does feel like runtime performance impact is negligible (but I'd love to improve on that too), but startup time definitely feels slow.

Are you familiar with Qemu's microvm platform? How would we experiment with stuff like that in the sbuild-qemu context? How do I turn on host=guest?

Thanks for the feedback!

Comment by anarcat
Faster bootup

Do say more about this! I would love to get faster bootup, that's the main pain point right now. It does feel like runtime performance impact is negligible (but I'd love to improve on that too), but startup time definitely feels slow.

Well, on my local amd64 system, a full boot to a console takes about 9s, 5-6s of which are spent in GRUB, loading the kernel and initramfs, and so on.

If one doesn't need the GRUB menu, I guess 1s could be shaved off by setting GRUB_TIMEOUT=0 in /usr/share/sbuild/sbuild-qemu-create-modscript, then rebuilding a VM.

It seems that the most time-consuming step is loading the initramfs and the initial boot, and while I haven't looked into it yet, I feel like this could also be optimized. Minimal initramfs, minimal hardware, etc.

Are you familiar with Qemu's microvm platform? How would we experiment with stuff like that in the sbuild-qemu context?

I stumbled over it about a year ago, but didn't get it to run -- I think I had an older QEMU environment. With 7.1 now in bullseye-backports, I need to give it a try again soon.

However, as far as I understand it, microvm only works for a very limited x86_64 environment. In other words, this would provide only the isolation features of a VM, but not point (1) and (2) of my earlier comment.

Not that I'm against that, on the contrary, I'd still like to add that as a "fast" option.

firecracker-vm (Rust, Apache 2.0, maintained by Amazon on GitHub) also provides a microvm-like solution. Haven't tried it yet, though.

How do I turn on host=guest?

That should happen automatically through autopkgtest. sbuild-qemu calls sbuild, sbuild bridges with autopkgtest-virt-qemu, autopkgtest-virt-qemu has the host=guest detection built in.

It's odd that there's nothing in the logs indicating whether this is happening (not even with --verbose or --debug), but a simple test is: if the build is dog-slow, as in 10-15x slower than native, it's without KVM :-)

Note that in order to use KVM, the building user must be in the 'kvm' group.

Comment by Christian Kastner