
Upgrading my cluster

My cluster is now running k3s 1.20.6 and Argo CD 2.0.0, with its Helm chart at 3.2.2. Upgrading Argo itself actually wasn't much of a problem: I just changed the targetRevision of the Application, and it was up and running again within a few minutes. Then, a few days later, things got interesting.
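
For context, a minimal sketch of what that Application looks like, assuming it points straight at the upstream argo-helm chart (my actual setup goes through the local umbrella chart described below, but the idea is the same):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: 3.2.2    # bumping this was the whole upgrade
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd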

There was no downtime, but I noticed that Argo had started failing to sync itself. Apparently a new minor version of the Helm chart had come out (still the same application version) that added support for the networking.k8s.io/v1 version of Ingress. However, it also accidentally broke clusters running Kubernetes versions older than 1.19. And mine was one of them.
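
For reference, this is roughly the shape of an Ingress under the new API (the host and service names here are placeholders). The networking.k8s.io/v1 API only exists on Kubernetes 1.19 and later, so an older API server rejects a manifest like this outright:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
spec:
  rules:
    - host: argocd.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix          # required in v1
            backend:
              service:                # structured backend; v1beta1 used serviceName/servicePort
                name: argocd-server
                port:
                  number: 443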

While the Argo maintainers figure out how (and whether) to fix this, I decided to take the opportunity to upgrade my cluster. That wasn't as painless as it should have been, though.


GitOps with Argo

It’s been a year since I wrote about bootstrapping a cluster with Argo and using Argo Rollouts for canary deploys based on Prometheus metrics. Since then many things have changed. I moved from Digital Ocean to Linode (mostly because Linode has a Tokyo region) and from a single-node k3s “cluster” to a 4-node one. But most of how I use Argo CD for GitOps hasn’t changed.


Bootstrapping the cluster with Argo CD

I wish it could be completely automated… but for now I've automated as much as is possible (and convenient). The ingredients:

  • Helm
  • Sealed Secrets
  • Argo CD and Argo Rollouts
  • traefik
  • Prometheus and Grafana

I have a repository for playing around with Kubernetes tooling like this, which I hope to eventually turn into an actual application. I have big plans and lots of stuff I want to try out, but time is limited. All the code examples in this post use the namespaces and naming choices from that repository. The relevant part of the folder structure looks like…

system
 ├┬ apps
 │└─ (bootstrapped Argo CD app manifests)
 ├┬ argo
 │└─ the local "umbrella chart" for Argo CD and Argo Rollouts
 ├┬ bootstrap
 │└─ boilerplate project and application manifests
 └┬ manifests
  └─ manifests I didn't bother turning into a Helm chart
     referenced by the raw-manifests.yaml application
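
One way to tie this together is the app-of-apps pattern: a single Application that syncs everything under system/apps. A minimal sketch, assuming that pattern (the repoURL and project name are placeholders, not the actual values from the repository):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps
  namespace: argocd
spec:
  project: system                # placeholder project name
  source:
    repoURL: https://github.com/example/cluster.git   # placeholder repo
    path: system/apps
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true             # keep the cluster converged on git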

Fully automated canary deploys with metrics

I changed the app code. The machines were supposed to take care of the rest; instead, what awaited me was a manual deploy process that took tens of minutes. And as the number of machines running the app grows, that time multiplies right along with the machine count.

The ideal: as soon as a code change shows up as a commit on the main branch in git, the CI/CD pipeline builds a container image (with tests having already passed before the merge) and it gets deployed to production canary-style.
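
With Argo Rollouts, that maps onto a Rollout using a canary strategy gated by a metrics analysis. A minimal sketch, assuming a hypothetical app and a Prometheus-backed AnalysisTemplate named success-rate (every name below is a placeholder):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp                    # placeholder app name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # CI/CD bumps this tag
  strategy:
    canary:
      steps:
        - setWeight: 25          # shift a quarter of the pods to the new version
        - pause: {duration: 2m}
        - analysis:              # promote only if the metrics query passes
            templates:
              - templateName: success-rate
        - setWeight: 50
        - pause: {duration: 2m}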