Tag: kubernetes

Upgrading my cluster

My cluster is now running on k3s 1.20.6 and Argo CD 2.0.0 with its Helm chart at 3.2.2. Actually, upgrading Argo itself wasn’t much of a problem. I just changed the targetRevision of the Application and it was up and running in a few minutes. Then a few days later things got interesting.
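Before getting to that, for reference: the targetRevision bump is a one-liner in the Application spec. A minimal sketch, pointing at the upstream chart repo and using placeholder names rather than my exact manifest:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: argo-cd              # placeholder name
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://argoproj.github.io/argo-helm
      chart: argo-cd
      targetRevision: 3.2.2    # bumping this is the whole upgrade
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated: {}            # let Argo roll out the new version on its own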

There was no downtime, but I noticed that Argo started failing to sync itself. Apparently a new minor version of the Helm chart had come out (still packaging the same application version) that added support for the networking.k8s.io/v1 version of Ingress. However, it also accidentally broke clusters running Kubernetes versions older than 1.19, and mine was one of them.
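The incompatibility comes down to the Ingress API itself: networking.k8s.io/v1 only exists on Kubernetes 1.19 and later, and its spec is shaped differently from the beta versions. As a rough illustration (with placeholder names), a v1 Ingress looks like this, while a pre-1.19 cluster only serves the v1beta1 form with serviceName/servicePort backends:

  apiVersion: networking.k8s.io/v1   # unknown to clusters older than 1.19
  kind: Ingress
  metadata:
    name: blog                       # placeholder
  spec:
    rules:
      - host: blog.example.com
        http:
          paths:
            - path: /
              pathType: Prefix       # required in v1
              backend:
                service:             # v1beta1 spelled this serviceName / servicePort
                  name: blog
                  port:
                    number: 80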

While the Argo people figure out how (and whether) to fix this, I decided to take the opportunity to upgrade my cluster. This wasn’t as painless as it should’ve been, though.


Dealing with DiskPressure

My 4-node k3s cluster (where this blog is hosted too) kept dying every now and then. Looking at kubectl describe nodes, it quickly became evident that the nodes were running out of disk space. Once a node gets tainted for disk pressure (HasDiskPressure), pods may get evicted and the kubelet starts burning (quite a lot of) CPU trying to free disk space by garbage-collecting container images and reclaiming ephemeral storage.
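For the record, the condition is easy to check directly; the node name below is a placeholder:

  # Show the DiskPressure condition for every node
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'

  # Taints and conditions for a single node
  kubectl describe node my-node | grep -E 'Taints|DiskPressure'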

My setup uses local storage by default (the local-path provisioner), where volumes are actually local folders on the node. This means that pods that use persistence are stuck on the same node forever and can’t just move around, which makes eviction a problem: they have nowhere else to go. It also means that disk usage is actual disk usage on the node, not on some block volume over the network.
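For reference, a claim against k3s’s bundled local-path storage class looks like the sketch below (placeholder name). The volume ends up as a plain directory under /var/lib/rancher/k3s/storage on whichever node the pod is first scheduled to, which is exactly why it can never move:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: blog-data                # placeholder name
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: local-path   # k3s's default, directory-backed provisioner
    resources:
      requests:
        storage: 1Gi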


GitOps with Argo

It’s been a year since I wrote about bootstrapping a cluster with Argo and using Argo Rollouts for canary deploys based on Prometheus metrics. Since then many things have changed. I moved from Digital Ocean to Linode (mostly because Linode has a Tokyo region) and from a single-node k3s “cluster” to a 4-node one. But most of how I use Argo CD for GitOps hasn’t changed.


How does Kubernetes select labels?

kubectl can select objects by filtering on labels with the -l flag. Labels are key-value pairs attached to objects as metadata, and they don’t have to be unique. I’ve most often seen them used to identify which project or app an individual resource belongs to. Helm, for example, uses labels to mark resources with the app, chart and revision they belong to.
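Both equality-based and set-based selectors go through that same flag; a few examples with made-up label keys and values:

  # Equality-based: exact key=value match
  kubectl get pods -l app=blog

  # Set-based: match any of several values, or the mere existence of a key
  kubectl get pods -l 'environment in (staging, production)'
  kubectl get deployments -l 'app notin (argo-cd), tier'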

But wait: if they’re not unique, and there’s a way to select multiple values with set operators, how does that work? The database backing Kubernetes by default, etcd, is a key-value store. While it can natively select multiple records by prefix matching, it’s hard to imagine labels working like that: there are many of them, and the selectors can be complex.

So I dove into Kubernetes’s source code to figure out how it works.


GitOps and Kubernetes persistence

A while back I wrote about bootstrapping a Kubernetes cluster. I’ve been refining the setup so that it requires as little manual kubectl’ing as possible. I still use Argo CD to get everything rolling, but there was one bit that kept going red: persistent volumes.


ArgoCD bootstrap cluster

I wish it could be completely automated… But for now I’ve just automated as much as possible (and convenient). The ingredients:

  • Helm
  • Sealed Secrets
  • Argo CD and Argo Rollouts
  • traefik
  • Prometheus and Grafana

I have a repository for the purpose of playing around with Kubernetes tooling like this, and hopefully turning it into an actual application eventually. I have big plans and lots of stuff I want to try out, but time is limited. All the code examples in this post use the namespaces and naming choices from that repository. The folder structure (the part relevant to this bit) looks like this:

system
 ├┬ apps
 │└─ (bootstrapped Argo CD app manifests)
 ├┬ argo
 │└─ the local "umbrella chart" for Argo CD and Argo Rollouts
 ├┬ bootstrap
 │└─ boilerplate project and application manifests
 └┬ manifests
  └─ manifests I didn't bother turning into a Helm chart
     referenced by the raw-manifests.yaml application
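To give an idea of how the pieces fit together, here is a rough sketch of an app-of-apps style Application that points Argo CD at the apps folder. The repo URL and names are illustrative placeholders, not my actual repository:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: bootstrap-apps                 # placeholder name
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://github.com/example/cluster.git   # placeholder repo
      targetRevision: HEAD
      path: system/apps                  # the bootstrapped app manifests above
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated:
        prune: true
        selfHeal: true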

Fully automated canary deploys with metrics

I changed the app code. From there the machines were supposed to handle the rest, but instead a manual deployment process that took tens of minutes was waiting for me. And as the number of machines running the app grows, that time multiplies by the machine count.

The ideal: once a code change shows up as a commit on the main branch in git, the CI/CD pipeline builds a container image (assuming the tests have already passed before the merge) and it gets deployed to production canary-style.
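To make that concrete, a metrics-driven canary with Argo Rollouts looks roughly like the sketch below; the image, weights and the Prometheus-backed analysis template referenced here are illustrative placeholders rather than my actual configuration:

  apiVersion: argoproj.io/v1alpha1
  kind: Rollout
  metadata:
    name: blog                              # placeholder name
  spec:
    replicas: 4
    selector:
      matchLabels:
        app: blog
    template:
      metadata:
        labels:
          app: blog
      spec:
        containers:
          - name: blog
            image: registry.example.com/blog:abc123   # CI swaps in the freshly built tag
    strategy:
      canary:
        steps:
          - setWeight: 25                   # shift a quarter of traffic to the new version
          - analysis:
              templates:
                - templateName: success-rate   # an AnalysisTemplate that queries Prometheus
          - setWeight: 50
          - pause: {duration: 5m}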


Kubernetes local storage

When you start using Kubernetes, the topic of persistence naturally comes up early on. Sometimes there are userland reasons, like running a database inside the cluster, and sometimes the tooling that forms the foundation of the cluster itself requires Persistent Volumes.


Telegraf on Kubernetes with InfluxDB Cloud

I’ve had most of my stuff running on a k3s “cluster” for the past half a year or so. The whole setup runs on a single $5-a-month Digital Ocean droplet with 1 vCPU and 1 GB of memory.

Needless to say, it doesn’t take much to bring the whole thing to its knees. While it has no issues dealing with the little traffic my blog receives, I would occasionally bring it down by accident when I installed a Helm chart that turned out to be much heavier than I’d thought.


k3s

Having played around with the managed Kubernetes offerings of various cloud providers (DO, AWS, GCP), I was wondering if it was possible to do this on the cheap. My site doesn’t have much traffic or anything complicated really, so running it off a $5 DO droplet is reasonable. Sadly, managed Kubernetes offerings don’t come out that cheap. (Sure, I could leech off the $300 starting credit on GCP for a year and then keep hopping accounts, but…)

Then I read about k3s. The people behind Rancher made it as a lightweight (but functionally complete) Kubernetes distro. Lightweight, they say… Just how light? (Imagine a weird, maniacal glint in my eyes here.) Could I run it on a $5 droplet?