I think many people in my profession got a certain article recommended to them by Medium in their weekly digest. The launch-introduction post by Garden got my attention too. I’ve been trying to figure out how to deal with developing on Kubernetes, so every drop of information in that regard is very welcome.
What I expected
I wanted Garden to make developing “in the cloud” as easy as they claim it is. Not having to run potentially very heavy stuff locally in Docker is a very nice prospect. I could add a list of “I’m looking at you”s, but while Elasticsearch and Yarn (building) come to mind first, the long tail is long.
So what I hoped to get was a way to easily sync into a Kubernetes cluster without breaking the production stuff already running there, while still keeping a quick feedback loop. Of course, instant feedback like you get developing locally with JS hot reloading or a REPL language may be too much to hope for, but dreams are good.
What I didn’t get
Let’s get the sour bits out of the way first. My biggest problem with Garden is that it currently has no convenient way to deal with Kubernetes Secrets. That isn’t a problem as long as you don’t need them, but as soon as there is a database to connect to or IAM credentials to inject, it gets painful. Garden can define secrets on its own, but as far as I could see, referencing existing secrets is not possible.
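To be clear about what I mean by “existing secrets”: plain Kubernetes Secrets like the (made-up) one below, created with kubectl, cert-manager or whatever else, which a container would normally consume via a secretKeyRef. What I was missing is a way to point a Garden-managed service at one of these instead of redefining the value in Garden’s own config.

```yaml
# A plain Kubernetes Secret that already exists in the cluster
# (name and value are made up for illustration).
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: definitely-not-this
---
# How a workload would normally pick it up:
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:latest
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```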
Another annoyance is that with garden dev, every single change triggers a new rebuild. So if you’re editing multiple files in parallel (for example a model and its corresponding controller, or frontend API definitions and store mappings), three saves mean waiting for three builds to finish before you can be sure the remote is up to date. Since I reflexively hit ctrl+s every few lines, I often had 3-4 rebuilds queued up while working on a single file. I understand that cancelling running builds might be troublesome, but I don’t think a full queue is necessary. Just have “the currently running build” and “the next build.”
I’m a Digital Ocean user (writing this post on a single-node self-managed k3s “cluster”), and Garden’s in-cluster building currently doesn’t seem to work with DO’s managed Kubernetes. There’s already an issue for it though, so hopefully it gets fixed soon. It worked without problems on GKE.
Lastly, tlsCertificate secrets have to reside in the namespace that Garden creates for the project. This is probably a bug, since the config has a namespace field for the secretRef, but it doesn’t seem to work.
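For reference, the bit of provider config I’m talking about is roughly this (the names are placeholders, and I’m going from memory of the docs):

```yaml
# Excerpt from the kubernetes provider section of the project garden.yml.
tlsCertificates:
  - name: wildcard
    secretRef:
      name: tls-wildcard      # the Secret holding the certificate
      namespace: cert-manager # in theory any namespace; in practice only Garden's
                              # own project namespace seemed to work for me
```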
What I got
It’s just plain impressive. I was testing it out with a Svelte template app to see how things work, and I was blown away. The rsync-based hot reloading feature had some weird hiccups (sorry, I don’t have the logs at hand anymore), though I’m not sure whether that was an issue with my setup or something else. Even without the --hot-reload option, changes got reflected quite quickly (between 5 and 15 seconds).
The setup is easy. Just pipe curl into bash (haha)! Then, for Kubernetes, it’s necessary to initialize the cluster with garden plugin kubernetes cluster-init. Usually this is a one-time thing, but certain changes to the project garden.yml may require running it again (such as enabling the nginx ingressController).
Writing said yaml’s isn’t complicated either. After setting up cluster certificate generation with cert-manager, a config along the lines of the one below was all I needed to get going. The forceSsl / tlsCertificate bit is optional, but I like my stuff served over HTTPS.
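The hostnames, names, and cluster context here are placeholders, and I’m reproducing the field names from memory of the docs, so treat this as a sketch rather than something to copy-paste verbatim:

```yaml
# project-level garden.yml
kind: Project
name: my-project
environments:
  - name: remote
    providers:
      - name: kubernetes
        context: my-cluster-context        # kubectl context of the remote cluster
        defaultHostname: dev.example.com   # placeholder hostname
        buildMode: cluster-docker          # in-cluster image building
        setupIngressController: nginx      # the change that made me re-run cluster-init
        forceSsl: true                     # redirect HTTP to HTTPS
        tlsCertificates:
          - name: wildcard
            secretRef:
              name: tls-wildcard           # Secret created by cert-manager
              namespace: my-project        # has to be Garden's project namespace for now
---
# module-level garden.yml, next to the service's Dockerfile
kind: Module
type: container
name: frontend
services:
  - name: frontend
    ports:
      - name: http
        containerPort: 8080
    ingresses:
      - path: /
        port: http
```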
Things to find out
I still don’t know how resource-hungry this all is. Since it does in-cluster image building, and the dashboard showed the garden-system namespace eating quite a lot of memory, I’m kinda glad I didn’t try it on my k3s node. It ran without issue backed by 3 nodes on GKE.
Another thing I’d like to play with is Clojure REPL-based development paired with Garden’s rsync hot reloading. I’d have Garden sync the files, and I could do (reload) in the REPL once I reckon the sync is done (which should be pretty fast). Unlike a rebuild and redeploy without hot reloading, this wouldn’t break the REPL connection, which makes it a really exciting possibility.
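A sketch of what the module side of that might look like, assuming Garden’s hotReload sync works the way I understand it (the names, paths, and ports here are hypothetical):

```yaml
kind: Module
type: container
name: clj-api
hotReload:
  sync:
    # Push local sources straight into the running container instead of rebuilding,
    # so the REPL process inside the pod keeps running and (reload) sees the new files.
    - source: src
      target: /app/src
services:
  - name: clj-api
    ports:
      - name: http
        containerPort: 3000   # hypothetical app port
      - name: nrepl
        containerPort: 7888   # hypothetical nREPL port to connect the editor to
```

Deploying that service with the --hot-reload option should then keep /app/src in sync while the nREPL connection stays up.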