On a sudden impulse, today I updated Traefik in my cluster to v3. Unlike the transition from v1 to v2, the v2-to-v3 upgrade was very smooth: I only needed a few tiny tweaks to the configuration. One of those was that HTTP/3 support is no longer “experimental” and so it should no longer be configured as such. This started me on a weird quest to get my sites to use H3.
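For reference, a minimal sketch of that configuration change (the entryPoint name and address are assumptions, not necessarily what my cluster uses):

```yaml
# Traefik v3 static configuration: HTTP/3 is enabled directly on the
# entryPoint. In v2 this additionally required `experimental.http3: true`.
entryPoints:
  websecure:
    address: ":443"
    http3: {}   # advertises HTTP/3 on UDP 443 via the Alt-Svc header
```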
Upgrading my cluster
My cluster is now running on k3s 1.20.6 and Argo CD 2.0.0, with its Helm chart at 3.2.2. Actually, upgrading Argo itself wasn’t much of a problem: I just changed the targetRevision of the Application and it was up and running in a few minutes. Then a few days later things got interesting.
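A sketch of what that looks like, assuming the chart is pulled straight from the upstream argo-helm repository (the names and the project here are illustrative, not necessarily exact):

```yaml
# Argo CD manages itself: bumping targetRevision to the new chart version
# is the whole upgrade.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo
  namespace: argocd
spec:
  project: system
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: 3.2.2  # bumped from the previous chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
```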
There was no downtime, but I noticed that Argo started failing to sync itself. Apparently a new minor version of the Helm chart had come out (though it was still the same application version) that added support for the networking.k8s.io/v1 version of Ingress. However, it also accidentally broke clusters running Kubernetes versions before 1.19, and mine was one of them.
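The v1 Ingress API only exists on Kubernetes 1.19 and later, and its schema differs from v1beta1: pathType is now required and the backend moved under service. A hypothetical example of the new shape:

```yaml
apiVersion: networking.k8s.io/v1  # requires Kubernetes 1.19+
kind: Ingress
metadata:
  name: argocd-server  # hypothetical
spec:
  rules:
    - host: argo.example.com  # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix  # required in v1, unlike v1beta1
            backend:
              service:  # v1beta1's serviceName/servicePort moved here
                name: argocd-server
                port:
                  number: 443
```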
While the Argo people figure out how (and whether) to fix this, I decided to take this opportunity to upgrade my cluster. This wasn’t as painless as it should’ve been, though.
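One way to make such upgrades less manual is Rancher’s system-upgrade-controller, which rolls k3s forward node by node. A sketch, assuming the controller is installed and with the version number only as an example:

```yaml
# Upgrade plan for the k3s server (control-plane) nodes; agent nodes would
# get a similar Plan gated on this one.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1          # one node at a time
  cordon: true            # cordon each node before upgrading it
  version: v1.20.6+k3s1   # example target version
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
```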
GitOps with Argo
It’s been a year since I wrote about bootstrapping a cluster with Argo and using Argo Rollouts for canary deploys based on Prometheus metrics. Since then many things have changed. I moved from Digital Ocean to Linode (mostly because Linode has a Tokyo region) and from a single-node k3s “cluster” to a 4-node one. But most of how I use Argo CD for GitOps hasn’t changed.
ArgoCD bootstrap cluster
I wish it could be completely automated… But for now I’ve just automated as much as possible (and convenient). The ingredients:
- Helm
- Sealed Secrets
- Argo CD and Argo Rollouts
- Traefik
- Prometheus and Grafana
I have a repository for playing around with Kubernetes tooling like this – and hopefully turning it into an actual application eventually. I have big plans and lots of stuff I want to try out, but time is limited. All the code examples in this post use the namespaces and naming choices from that repository. The folder structure (the part relevant to this bit) looks like this:
```
system
├┬ apps
│└─ (bootstrapped Argo CD app manifests)
├┬ argo
│└─ the local "umbrella chart" for Argo CD and Argo Rollouts
├┬ bootstrap
│└─ boilerplate project and application manifests
└┬ manifests
 └─ manifests I didn't bother turning into a Helm chart, referenced by the raw-manifests.yaml application
```
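The bootstrap manifests follow Argo CD’s usual app-of-apps pattern. As an illustration, the raw-manifests.yaml application could look something like this (the repo URL and sync policy are assumptions):

```yaml
# Application syncing the plain manifests folder; the rest of the cluster
# is described by similar Application resources under system/apps.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: raw-manifests
  namespace: argocd
spec:
  project: system
  source:
    repoURL: https://github.com/example/cluster-config.git  # hypothetical
    targetRevision: HEAD
    path: system/manifests
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```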
Fully automated canary deploys with metrics
I changed the application code. The machines were supposed to take care of the rest, but instead tens of minutes of manual deployment work awaited. And as the number of machines the app runs on grows, that time multiplies with the machine count.
The ideal: when a code change shows up as a commit on the main branch in git, the CI/CD pipeline builds a container image (assuming the tests already passed before the merge) and deploys it to production canary-style.
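This is what Argo Rollouts provides: a Rollout resource that shifts traffic stepwise and consults metrics before promoting. A minimal sketch, where the image, labels, and the success-rate AnalysisTemplate (a Prometheus query behind it) are all hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app  # hypothetical
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:v2  # pushed by the CI pipeline
  strategy:
    canary:
      steps:
        - setWeight: 25                    # send 25% of traffic to the canary
        - analysis:
            templates:
              - templateName: success-rate # hypothetical Prometheus-backed check
        - setWeight: 50
        - pause: {duration: 5m}            # soak before full promotion
```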