The other day I noticed a post on one of my Misskey Antennas about “a container runtime written in Rust” called youki. This piqued my interest, especially since its repository is under the containers org, which is also home to Podman and crun, for example. I’ll be honest and admit that I’m still not very clear on the exact responsibilities of such a runtime. It gets especially fuzzy when both “high level” runtimes like containerd and Docker, and low level runtimes (that can serve as the “backend” of high level ones) like kata and youki, are all referred to just as “runtimes.” Anyway, I decided to give it a try and see if I could get it to work with my k3s setup. It really wasn’t as easy as I’d hoped.


Installing extra runtimes

Installing crun was surprisingly easy: it’s in apt, so apt-get install crun did the trick.

Since youki is a much less mature project (it only just hit its 0.1.0 release), it’s not yet in the apt repositories. But installing it isn’t difficult, and it’s one of the few things that’s actually documented. The drill is the usual: download the tarball from the GitHub release, unpack it, and move the youki executable somewhere on the PATH (by default /usr/local/bin/youki). Or build from source, if you’re not afraid of the dragons that there be.
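As a sketch, the whole dance looks something like this (the version number and tarball name here are assumptions; check the actual asset names on the releases page first):

```shell
# Hypothetical version and asset name: check
# https://github.com/containers/youki/releases for the real ones.
YOUKI_VERSION="0.1.0"
URL="https://github.com/containers/youki/releases/download/v${YOUKI_VERSION}/youki_${YOUKI_VERSION}_linux.tar.gz"
curl -fsSL -o youki.tar.gz "$URL"
tar -xzf youki.tar.gz
# Put the executable somewhere on the PATH.
sudo install -m 0755 youki /usr/local/bin/youki
youki --version
```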


Setting up k3s to use a different runtime

Sadly, at this point the only runtime that k3s auto-discovers is the nvidia container runtime, and even that then has to be added manually as a RuntimeClass to the cluster. For other runtimes the setup procedure isn’t documented at all, nor is it obvious. I was lucky to run into some Slack logs on the matter. Reminder: don’t use Slack or Discord as your primary support/documentation channel. They’re not searchable from the outside, just walled gardens that people have no way to find any information in.

The way to get k3s to use youki, crun or any other container runtime is to add a config.toml.tmpl file next to the (autogenerated) containerd configuration config.toml (by default /var/lib/rancher/k3s/agent/etc/containerd/config.toml, just so it’s easy to find). The config.toml.tmpl file is expected to be a Go template file. The default template can be found in the k3s source code. What I did was copy the contents of that template string, paste it into a config.toml.tmpl file, and add my own configuration at the end as plain text (remember, k3s at this point does not auto-discover these runtimes).

The configuration lines for youki and crun look like this. Obviously you need to adjust the BinaryName to wherever you saved the binary (these are the defaults).

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.youki]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.youki.options]
    BinaryName = "/usr/local/bin/youki"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
    BinaryName = "/usr/bin/crun"

Once this is done, you need to get k3s to regenerate its config. For some reason systemctl reload-or-restart k3s.service did not do the trick, so I just restarted the whole node. Once the config is updated (you can check by looking at the generated config.toml in the same folder as the template), the runtimes can be added to the cluster as RuntimeClass resources.
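For example, grepping the generated config for the new runtime entries (path is the k3s default) should show them once the template has been picked up:

```shell
# The youki entry should appear in the regenerated containerd config.
grep -A 3 'runtimes.youki' /var/lib/rancher/k3s/agent/etc/containerd/config.toml
```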

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: youki
handler: youki
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
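Assuming the manifests above are saved as runtimeclasses.yaml, applying and checking them is the usual kubectl routine:

```shell
kubectl apply -f runtimeclasses.yaml
# Both youki and crun should show up in the list.
kubectl get runtimeclass
```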

Once this is done, they can be used in Pod specs through the runtimeClassName setting.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-youki
spec:
  runtimeClassName: youki
  containers:
  - name: nginx
    image: nginx

One issue I ran into with k3s was that I couldn’t get it (well, its containerd) to use these extra runtimes as the default. What I tried was setting default_runtime_name under [plugins."io.containerd.grpc.v1.cri".containerd] in the containerd config (which is well documented), but when I then restarted k3s, Pods kept going into CrashLoopBackOff and I couldn’t find any obvious hints as to why. But that gave me a reason to try something else…
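For the record, the fragment I appended to config.toml.tmpl for this experiment looked like this; switching the default away from runc is what sent Pods crash-looping on my setup:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "crun"
```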

Setting up k0s to use a different runtime

Unlike k3s, k0s actually documents how to use different runtimes. The docs recommend hard-coding the customized containerd configuration, but I decided to take a slightly different route. I changed /etc/k0s/containerd.toml to be manually managed (by removing the magic comment) and imported my own config in addition to k0s’s.

version = 2
imports = [
        "/run/k0s/containerd-cri.toml",
        "/etc/k0s/cri-runtimes.toml",
]

Then I added my own cri-runtimes.toml there that adds the runtimes I wanted. Unlike k3s, k0s(‘s containerd) had no issues using an alternative runtime as default.

version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "crun"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.youki]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.youki.options]
    BinaryName = "/usr/local/bin/youki"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.crun.options]
    BinaryName = "/usr/bin/crun"
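After changing the configs, k0s’s containerd needs a restart to pick them up. Assuming k0s was installed as a systemd service, something like this should do (the service name depends on the node’s role):

```shell
sudo systemctl restart k0scontroller
# or on a worker node:
# sudo systemctl restart k0sworker
```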

Finally, I added the RuntimeClass resources the same way as with k3s. I really liked how smooth and (compared to k3s) easy configuring k0s was. My only gripe is that it comes with no load balancer, so I’d need to install and manage MetalLB myself, which was not in the scope of this adventure. Maybe I’ll set up k0s later to play around with Calico, but not this time.