Tag: gitops

Adding Grafana annotations based on Flux CD events

Earlier this year I wrote about adding Grafana annotations based on Argo CD events. Since I actually got around to testing out Flux some more this year, it was natural to follow up by adding Grafana annotations based on Flux events as well.
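For reference, the mechanism I’d reach for here is Flux’s notification-controller, which can post annotations to Grafana directly via a grafana provider. A minimal sketch (the API version may differ by Flux release, and the Secret name, namespace, and Grafana URL are placeholders for your own setup), saved as e.g. flux-grafana.yaml and applied with kubectl apply -f flux-grafana.yaml:

    # Assumes a Secret "grafana-token" in flux-system holding a Grafana API
    # token, and Grafana reachable at grafana.example.com (both placeholders).
    apiVersion: notification.toolkit.fluxcd.io/v1beta3
    kind: Provider
    metadata:
      name: grafana
      namespace: flux-system
    spec:
      type: grafana
      address: https://grafana.example.com/api/annotations
      secretRef:
        name: grafana-token
    ---
    # Send info-level events from all Kustomizations and HelmReleases
    # to the provider above, which turns them into annotations.
    apiVersion: notification.toolkit.fluxcd.io/v1beta3
    kind: Alert
    metadata:
      name: grafana-annotations
      namespace: flux-system
    spec:
      providerRef:
        name: grafana
      eventSeverity: info
      eventSources:
        - kind: Kustomization
          name: '*'
        - kind: HelmRelease
          name: '*'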


The Weave GitOps UI for Flux

Unlike Argo, Flux has no built-in UI. That’s mostly fine: CD tools should be invisible most of the time, quietly running in the background and applying changes to the cluster. This works for “most of the time”, when I’m not making changes to my own applications or to the cluster itself: Flux can just watch the various Helm charts and other (git) sources and deploy the changes as configured. It’s nice to be sure that my cluster is running the latest possible versions of everything within 10-15 minutes of a release (depending on the Flux interval configured) without ever touching kubectl.
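When I do want to peek at what Flux is up to without a UI, the flux CLI covers most of it; a quick sketch, assuming a standard flux-system installation:

    # What sources is Flux watching, and when did it last fetch them?
    flux get sources git
    flux get sources helm

    # What has it reconciled, and did anything fail?
    flux get kustomizations
    flux get helmreleases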


Trying FluxCD

When you create a whole new FluxCD setup from zero, it’s really easy: use flux bootstrap. It does “everything” for you. In my case I first tried this setup for last year’s advent calendar. Except back then I had an expedition scheduled for the first half of December, so this took a back seat and was eventually forgotten. Over the year since, everything I did back then slowly sank into oblivion, so I had to start pretty much from zero again.
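For the record, a bootstrap against a personal GitHub repo looks roughly like this (the owner, repository, and path are placeholders, not my actual setup):

    # A personal access token with repo scope must be exported first.
    export GITHUB_TOKEN=<personal-access-token>

    # Creates the repo if needed, commits the Flux manifests under the
    # given path, and installs Flux into the current cluster context.
    flux bootstrap github \
      --owner=my-github-user \
      --repository=fleet-infra \
      --personal \
      --path=clusters/my-cluster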

I did set up the repo and a cluster (on Civo) back then, but I quickly tore down the cluster when I realized I wouldn’t be able to test it all out. The repo stayed though, so now I was starting afresh with a k3s cluster spanning three Nanodes (that’s a Linode referral link with a $100 credit over 60 days) and a repository already set up from last year.
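Getting a bare k3s cluster up on a few nodes is pleasantly short; a rough sketch, with the server IP and token as placeholders:

    # On the first node (the server / control plane):
    curl -sfL https://get.k3s.io | sh -

    # The join token lives here on the server:
    sudo cat /var/lib/rancher/k3s/server/node-token

    # On each additional node, join as an agent:
    curl -sfL https://get.k3s.io | \
      K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -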


Dealing with DiskPressure

My 4-node k3s cluster (where this blog is hosted too) kept dying every now and then. Looking at kubectl describe nodes, it quickly became evident that this was caused by the nodes running out of disk space. Once a node gets tainted with HasDiskPressure, pods may get evicted and the kubelet will spend (quite a lot of) CPU trying to free disk space by garbage-collecting container images and reclaiming ephemeral storage.
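A quick way to check whether a node is (or has been) under disk pressure, sketched with plain kubectl:

    # Per-node DiskPressure condition, one line per node:
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'

    # Recent evictions, if any:
    kubectl get events --all-namespaces --field-selector reason=Evicted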

My setup by default uses local storage (the local-path provisioner), where volumes are actually local folders on the node. This means that pods that use persistence are stuck on the same node forever and can’t just move around, which makes eviction a problem: they have nowhere else to go. It also means that disk usage is actual disk usage on the node, not on some block volume over the network.
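You can see this pinning directly on the PersistentVolumes: local-path PVs carry a nodeAffinity for the node that holds the backing folder. A sketch assuming the default local-path-provisioner PV layout:

    # List each volume with its claim and the node it is stuck on:
    kubectl get pv -o custom-columns='NAME:.metadata.name,CLAIM:.spec.claimRef.name,NODE:.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]'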


GitOps with Argo

It’s been a year since I wrote about bootstrapping a cluster with Argo and using Argo Rollouts for canary deploys based on Prometheus metrics. Since then many things have changed. I moved from Digital Ocean to Linode (mostly because Linode has a Tokyo region) and from a single-node k3s “cluster” to a 4-node one. But most of how I use Argo CD for GitOps hasn’t changed.
