It’s been a year since I wrote about bootstrapping a cluster with Argo and using Argo Rollouts for canary deploys based on Prometheus metrics. Since then many things have changed. I moved from Digital Ocean to Linode (mostly because Linode has a Tokyo region) and from a single-node k3s “cluster” to a 4-node one. But most of how I use Argo CD for GitOps hasn’t changed.


Keeping manifests up to date for you

I have a separate repo where I manage my cluster. Argo tracks the master branch of that repo, mostly because it’s just me, myself and I, and I don’t have to be too cautious about breaking something with a bad commit. The repo contains the definitions of which applications Argo should track in which repositories (and yes, that includes the repo in question too).
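For illustration, a minimal Argo CD Application manifest tracking a branch could look something like this (the repo URL, application name, and paths are all hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-site                  # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git  # hypothetical repo
    targetRevision: master       # track the branch instead of pinning a commit
    path: apps/my-site           # only manifests under this path are watched
  destination:
    server: https://kubernetes.default.svc
    namespace: my-site
  syncPolicy:
    automated: {}                # apply changes as soon as they land on master
```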

Helm charts (Grafana, for example) track the chart’s repo and update automatically to the latest version when there is a new release. Even when stable/charts was still a thing and there were tons of changes to its master branch every day, Argo was smart enough to only pay attention to the changes affecting the charts I use.
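For a Helm-based Application the source section looks a bit different; a sketch, assuming the chart comes from the Grafana Helm repository (Argo CD accepts a semver range as targetRevision for charts, which is what makes the automatic updates work):

```yaml
# Fragment of an Application spec; chart name and version range are illustrative.
source:
  repoURL: https://grafana.github.io/helm-charts  # a Helm repository, not a git repo
  chart: grafana
  targetRevision: "6.*"   # semver range: follow new 6.x chart releases automatically
```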

This also means that if you have your cluster definitions in the same repository as your application code and you track a branch (instead of pinning releases to specific commits, which is possible), it won’t re-create your deployments just because someone fixed a typo in README.md. It’ll note that there were no changes to your manifests and tell you your cluster is up to date.

Furthermore, Argo CD is strictly read-only. It does not have, or need, write access to the git repos it tracks. Just as Kubernetes makes sure that whatever runs in your cluster is what the manifests in its etcd database say (and converges any differences to the desired state), Argo CD makes sure that the manifests Kubernetes has are the same as those in your git repo. This is why Argo CD can’t send itself into an infinite commit loop: it doesn’t write anything to git. (It can run into a sync loop if Argo manages its own Application manifests and you mess up commit pinning.)

Releasing


So then how do you release a new version of your app? You update the manifests in the git repo. I do this with a GitHub Action for my static site about my Seven Summits challenge: whenever there is a push to master, it builds the static site with Jekyll, builds and pushes a Docker image out of it, and updates the site’s manifest.
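A sketch of such a workflow, assuming a Gemfile-based Jekyll site; the image and file names are hypothetical, and the registry login step is omitted:

```yaml
# .github/workflows/release.yml -- illustrative, not the exact workflow
name: release
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true            # install the Jekyll gems from Gemfile.lock
      - run: bundle exec jekyll build    # build the static site
      - run: |                           # bake the site into an image and push it
          docker build -t registry.example.com/site:${GITHUB_SHA} .
          docker push registry.example.com/site:${GITHUB_SHA}
      - run: |                           # bump the image tag in the manifest
          sed -i "s|site:.*|site:${GITHUB_SHA}|" k8s/deployment.yaml
          git config user.name github-actions
          git config user.email github-actions@github.com
          git commit -am "Release ${GITHUB_SHA}"
          git push
```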

While Argo CD sees the commits pushed to master, it also sees that the manifests did not change, so it just moves on. However, when the Action updates the image tag in the manifest, Argo sees the change and reflects it in the cluster immediately. (As for why the GitHub Action doesn’t trigger itself by committing to master: pushes made with the workflow’s own GITHUB_TOKEN don’t trigger new workflow runs, so there is no loop.)

Now this is assuming that you tag your images (and really, why wouldn’t you). I use a sed script to change the image tag, but I’d hope there are some smarter-better-faster-stronger solutions for that out there (I think it’s pretty easy with kustomize, as sketched below). sed is enough for what I’m doing.
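For the kustomize route, the equivalent of my sed script would be something like this (the image name and tag are hypothetical); it rewrites the images section of kustomization.yaml, which you then commit like any other manifest change:

```sh
# Run from the directory containing kustomization.yaml.
kustomize edit set image registry.example.com/site:v1.2.3
```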

If you insist on using latest, then I guess having something watch your Docker registry and murder your pods one after another (a rolling restart, so there’s no downtime) to force a re-pull of the image would work, but you lose the ability to roll back changes, and that’s really not good.
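If you really wanted to go down that road, the pod-murdering part at least is built in (the deployment name here is hypothetical):

```sh
# Re-creates the pods one by one; with imagePullPolicy: Always
# (the default for :latest images), each replacement pod pulls the image fresh.
kubectl rollout restart deployment/my-site
```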

master walking away


Argo tracks the branches for you automatically, so a scenario like “this pod crashed, and when it restarted it came back as a different image because master changed” just cannot happen. Even if the target branch changes often, applications don’t get updated unless there are changes to their manifests. Pin the images used in the deployment to specific tags and update the manifests when there is a release; Argo will do the rest.

If you use latest images, Kubernetes will (by default) re-pull the image when you re-create the pod (by deleting it). But just like above, you lose your ability to roll back a broken deployment this way, which is definitely not something you want. “We’ve never had a broken deploy” – you’ll definitely start caring the first time it happens.
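The pinned alternative is a one-line difference in the pod template (names and tag are illustrative); rolling back then amounts to reverting the commit that changed the tag:

```yaml
# Fragment of a Deployment's pod template.
containers:
  - name: site
    image: registry.example.com/site:v1.2.3  # pinned tag instead of :latest
    imagePullPolicy: IfNotPresent            # no surprise re-pulls on pod restarts
```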

You don’t have to


Argo doesn’t have to update your stuff automatically. You can click the “sync” button in the UI or run the argocd app sync CLI command if you prefer to do it manually. That way no automatic magic happens, and changes only get reflected in the cluster when you want them to. (Which in my case is “always”, so I just use automatic.)
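A sketch with a hypothetical app name: the first command switches the app away from automated sync, the second applies whatever is on the tracked branch right now:

```sh
argocd app set my-site --sync-policy none   # turn off automatic syncing
argocd app sync my-site                     # sync manually, on demand
```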

The defining principle of GitOps is that git is your source of truth. That doesn’t have to mean a single git repo: I have Argo track literally a dozen repositories (various helm charts, pet projects and static sites). But it can be one repo too: an Argo Application can watch just a certain path in a repo for changes, allowing multiple Applications to source a single repo.

However, if your source of truth is not git but something else (in the case of latest images, your Docker registry), then yours is not a GitOps use case. I personally love the safety that the clean, declarative, versioned nature of this setup gives me. But I also see many people who’d rather “just track latest” and have something do the updating for them, without any ability to roll back. I admire their courage (?) in doing that in production.