Tag: kubernetes

Longhorn trash weighing me down

Last year I gave Longhorn a try. It was a nice proposition for me: I was using local storage anyway, so the idea that pods would be independent of the nodes sounded delicious. Except the price was way too high.

At the time I thought the only problem was Prometheus, which is pretty heavy on disk IO on its own, but it turned out the issue was with Longhorn itself. Also, as I found out, I wasn’t thorough enough when deleting Longhorn’s leftovers, which resulted in quite a few headaches this year.


Setting up additional container runtimes

The other day I noticed a post on one of my Misskey Antennas about “a container runtime written in Rust” called youki. This piqued my interest, especially since its repository is under the containers org, which is also the home of Podman and crun, for example. I’ll be honest and admit that I’m still not very clear about the exact responsibilities of such a runtime. It gets especially fuzzy when both “high level” runtimes like containerd and Docker, and low level runtimes (that can serve as the “backend” of high level ones) like Kata and youki are referred to simply as “runtimes.” Anyway, I decided to give it a try and see if I could get it to work with my k3s setup. It really wasn’t as easy as I’d hoped.
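
For illustration, the rough shape of the wiring (a sketch, not my exact config): on k3s the embedded containerd can be pointed at an extra low level runtime via a custom config template (typically /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl, with a BinaryName pointing at the youki binary), and then Kubernetes needs a RuntimeClass so pods can opt into it. Something along these lines:

    # sketch only: assumes a "youki" handler was already registered with containerd
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: youki
    handler: youki        # must match the runtime name in the containerd config
    ---
    # a test pod opting into the new runtime
    apiVersion: v1
    kind: Pod
    metadata:
      name: youki-test
    spec:
      runtimeClassName: youki
      containers:
        - name: hello
          image: busybox
          command: ["sh", "-c", "echo hello from youki && sleep 3600"]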


Adding Grafana annotations based on Flux CD events

Earlier this year I wrote about adding Grafana annotations based on Argo CD events. Since this year I actually got around to testing out Flux some more, it was natural to follow up with adding some Grafana annotations based on Flux events.
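
One way to wire this up is Flux’s notification-controller, which has a built-in Grafana provider that posts events to Grafana’s annotations API. A minimal sketch (the Grafana URL and secret name are placeholders, and the API versions may differ depending on the Flux version):

    apiVersion: notification.toolkit.fluxcd.io/v1beta1
    kind: Provider
    metadata:
      name: grafana
      namespace: flux-system
    spec:
      type: grafana
      address: https://grafana.example.com/api/annotations   # placeholder URL
      secretRef:
        name: grafana-token                                  # holds the Grafana API token
    ---
    apiVersion: notification.toolkit.fluxcd.io/v1beta1
    kind: Alert
    metadata:
      name: grafana-annotations
      namespace: flux-system
    spec:
      providerRef:
        name: grafana
      eventSeverity: info
      eventSources:
        - kind: HelmRelease
          name: '*'
        - kind: Kustomization
          name: '*'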


The Weave GitOps UI for Flux

Unlike Argo, Flux has no built-in UI. CD tools should be invisible most of the time, just quietly running in the background, rolling changes out to the cluster. This is fine for “most of the time,” when I’m not making changes to my own applications or to the cluster itself: Flux can just watch the various Helm charts and other (git) sources and deploy the changes as configured. It’s nice to be sure that my cluster is running the latest possible versions of everything within 10-15 minutes of a release (depending on the Flux interval configured) without my ever touching kubectl.
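
To show what that interval business looks like in practice, here is a sketch of a Helm source and release that reconcile on their own schedule, so new chart versions roll out without manual intervention (chart names, namespaces and version ranges are just illustrative):

    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: HelmRepository
    metadata:
      name: grafana
      namespace: flux-system
    spec:
      url: https://grafana.github.io/helm-charts
      interval: 10m                 # how often the repo index is re-fetched
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: grafana
      namespace: monitoring
    spec:
      interval: 10m                 # how often the release is reconciled
      chart:
        spec:
          chart: grafana
          version: '>=6.0.0'        # semver range, so new chart releases get picked up
          sourceRef:
            kind: HelmRepository
            name: grafana
            namespace: flux-system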


Trying FluxCD

When you create a whole new FluxCD setup from zero, it’s really easy: use flux bootstrap. It does “everything” for you. In my case I tried this setup first for last year’s advent calendar. Except back then I had an expedition scheduled for the first half of December, so this took the back seat and was eventually forgotten. Over the year since, everything I did back then slowly sank into oblivion, so I once again had to start pretty much from zero.
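
For reference, the bootstrap itself is a single command, something like this (the owner, repository and path are placeholders, not my actual values):

    # sketch of a GitHub bootstrap; all values are placeholders
    flux bootstrap github \
      --owner=my-github-user \
      --repository=fleet-infra \
      --branch=main \
      --path=clusters/my-cluster \
      --personal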

I did set up the repo and a cluster (on Civo) back then, but I quickly tore down the cluster when I realized I wouldn’t be able to test it all out. The repo stayed though, so now I was starting afresh with a k3s cluster spanning three Nanodes (that’s a Linode referral link with a $100 credit over 60 days) and a repository already set up from last year.


Prometheus vs Longhorn, fight!

Last year I had a brief affair with Longhorn. It’s a tool that abstracts the interface for interacting with volumes (in my case in Kubernetes) away from where the underlying data actually lives. My cluster consists of three small nodes, and back then most of the data lived in local-path provisioned volumes. Using local-path means that the data is physically on the host machine instead of some virtually mounted filesystem. This also means that once an application has such a PVC, its pods can’t be scheduled to any other node (it results in a scheduling “conflict”).
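
To make the “conflict” part concrete: the local-path provisioner writes a node affinity into each PersistentVolume it creates, pinning the volume (and any pod mounting it) to the node where the data physically lives. Roughly like this (names and paths are illustrative):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pvc-0000-example            # illustrative name
    spec:
      storageClassName: local-path
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /var/lib/rancher/k3s/storage/pvc-0000-example
        type: DirectoryOrCreate
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - node-a            # pods using this volume can only run here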


Adding Grafana annotations based on Argo CD events

I was thinking about how nice it would be if I could see on my main Grafana dashboard (the only one I use at this point, actually) when a new version of something gets deployed. This way, if by chance there is a problem with something afterwards, I can see at a glance what could have gone wrong. Also, I really like just looking at that Grafana dashboard and seeing that everything is alive and well. (Except when it isn’t.)
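
One way to get this (not necessarily exactly how the original post did it) is Argo CD Notifications, which ships a Grafana service that creates annotations through Grafana’s HTTP API. The configuration is roughly this shape, with the URL and trigger condition as illustrative placeholders:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-notifications-cm
      namespace: argocd
    data:
      service.grafana: |
        apiUrl: https://grafana.example.com/api     # placeholder Grafana URL
        apiKey: $grafana-api-key                    # key kept in argocd-notifications-secret
      trigger.on-deployed: |
        - when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
          send: [app-deployed]

Individual Applications then opt in with a subscription annotation (notifications.argoproj.io/subscribe.on-deployed.grafana), whose value ends up as the tags on the Grafana annotation.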

The Grafana chart was upgraded! Nice CPU burn…

When I accidentally Longhorn CSI

Symptoms: high CPU load on all the nodes, but not from the pods. Looking at Grafana, I noticed that CPU load on some of my nodes was constantly very high. At the same time, even the total CPU use of all the pods summed up wasn’t above 0.4. What gives? Usually this means that the control plane is getting fried by something. It may be trying to relieve disk pressure, or in this case, trying to revive CSI.

Trying to figure out what was causing problems, I checked the pods in kube-system with kubectl get pods -n kube-system. It quickly became apparent that there was a problem: disk-related pods like csi-resizer, csi-snapshotter and csi-provisioner were in CrashLoopBackOff.
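
For reference, the kind of checks involved look something like this (the pod names are placeholders for the actual generated names):

    # list the pods and spot the CrashLoopBackOff ones
    kubectl get pods -n kube-system
    # dig into a crashing pod: recent events, then logs from the previous restart
    kubectl describe pod csi-provisioner-xxxxx -n kube-system
    kubectl logs csi-provisioner-xxxxx -n kube-system --previous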

I’ll be quite honest in that I’m not sure what the problem was. A few searches later I came to the conclusion that an earlier node reboot had left the pods with a corrupted DNS cache or something along those lines. Basically every issue I found with the symptoms I was seeing came down to DNS problems (longhorn/longhorn#2225, longhorn/longhorn#3109, rancher/k3os#811).

Alas, I haven’t touched any of the networking machinery of Kubernetes (nor configured any of it for k3s), so my first idea was just the good old one from The IT Crowd: “have you tried turning it off and on again?” So I did. Luckily another restart of the afflicted nodes solved the issue. I’m glad it did, because I dread to think what I’d have had to do otherwise.


Steps to a more stable k3s cluster

It’s all too easy to kill a k3s cluster. I’ve been using k3s for years now and I’ve had plenty of adventures tweaking various aspects of running it. Before, it’d take just a small change to an Argo Application to trigger a cascading failure. Hopefully it’s now a bit more resilient. Just a bit.


charts/stable and git references

Helm was meant to be the package manager for Kubernetes. One common problem for package managers is “how do I find my packages?” Many package systems opt for having a default central repository. Distros have their central repos for apt. Programming languages do too: for Node it’s npm, for Ruby it’s RubyGems, for Java it’s Maven Central, for Clojure it’s Clojars. Of course, most if not all systems have a way to add other package repositories, or at least some other means to pull in dependencies (referencing git commits, for example).

For Helm, the central repository of charts/stable used to be the obvious default. You can of course add other repositories too, but defaults are powerful, and many people will just give up if something is not available in the default source. On the other hand, having everything in one place puts a huge burden on the maintainers of that one place, as was the case with charts/stable. So they deprecated it.
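
With charts/stable gone, explicitly adding a chart’s own repository is the usual routine; for example (the Grafana charts repo is a real one, but any repository works the same way):

    # register a non-default chart repository and install from it
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    helm install my-grafana grafana/grafana --namespace monitoring --create-namespace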