
Building my own advent calendar

Writing about building the advent calendar on the advent calendar’s own blog feels a bit too meta, but I learned quite a lot, so here’s a write-up. It went more smoothly than expected, and small in scale as it was, it was a fun project.


Hot reloading and reitit in the frontend

Ever since I first ran into the Metosin libraries, I’ve been using many of them. One of the best known and most widely used of those is probably reitit, a routing library that works both on the server and in the browser. Recently I’ve had more opportunities to use ClojureScript in the browser, and I noticed something that confused me for a few moments.
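I won’t spoil the exact gotcha here, but for orientation, this is a minimal sketch of how a reitit frontend router is typically wired up with hot reloading (my illustration with made-up route names, assuming shadow-cljs, not code from the post):

(ns app.core
  (:require [reitit.frontend :as rf]
            [reitit.frontend.easy :as rfe]))

(def routes
  [["/" ::home]
   ["/about" ::about]])

(defn on-navigate [match]
  ;; called by reitit on every route change
  (js/console.log "navigated to" (pr-str (:data match))))

(defn start-router! []
  ;; rfe/start! installs the history listener; calling it again on
  ;; hot reload swaps in the freshly re-evaluated routes
  (rfe/start! (rf/router routes) on-navigate {:use-fragment true}))

(defn ^:after-load reload! []
  (start-router!))

(defn init []
  (start-router!))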


Trying FluxCD

When you create a whole new FluxCD setup from zero, it’s really easy: use flux bootstrap. It does “everything” for you. In my case, I first tried this setup for last year’s advent calendar. Except back then I had an expedition scheduled for the first half of December, so this took a back seat and was eventually forgotten. In the year since, everything I did back then slowly sank into oblivion, so I once again had to start pretty much from zero.
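For reference, the bootstrap one-liner looks something like this (owner, repository and path are placeholders, not my actual setup):

flux bootstrap github \
  --owner=my-github-user \
  --repository=fleet-infra \
  --branch=main \
  --path=clusters/my-cluster \
  --personal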

I did set up the repo and a cluster (on Civo) back then, but I quickly tore down the cluster when I realized I wouldn’t be able to test it all out. The repo stayed though, so now I was starting afresh with a k3s cluster spanning three Nanodes (that’s a Linode referral link with a $100 credit over 60 days) and a repository already set up from last year.


Prometheus vs Longhorn, fight!

Last year I had a brief affair with Longhorn. It’s a tool that abstracts the interface for interacting with volumes (in my case in Kubernetes) away from where the underlying data actually lives. My cluster consists of three small nodes, and back then most of the data lived in local-path provisioned volumes. Using local-path means the data physically sits on the host machine instead of some virtually mounted filesystem. It also means that once an application has a PVC, its pods can’t be scheduled to any other node (that results in a “conflict”).
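The pinning comes from how local-path provisions storage: it creates a hostPath-backed PersistentVolume with nodeAffinity for the node that actually holds the data, roughly like this sketch (all names illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example-data
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  hostPath:
    path: /var/lib/rancher/k3s/storage/pvc-example-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]  # pods mounting this volume can only run here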


Migrating from Docker to Podman

I’ve been eyeing Podman for a while. Being able to build and run containers without the Docker daemon hanging around in the background as an omnipotent demigod sounds nice. Less stuff running = good. Installing it is trivial on Ubuntu (unlike installing Docker): apt-get install podman and it’s done. As the documentation states, it can even be aliased to docker to make migration even smoother. It has tons of features that I don’t think exist in Docker: generating Kubernetes YAMLs from running containers, using container LABELs to simplify run commands, and so on.
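A quick taste of those features (container names and images are illustrative, not from anything I actually run):

alias docker=podman                       # near drop-in replacement
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
podman generate kube web > web-pod.yaml   # Kubernetes YAML from a running container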

Which is all nice, and Podman is here to stay for sure, but I had to realize that at this point it won’t have much of an effect on me (anymore). I hardly ever build container images locally, instead using CI (GitHub Actions) to do it automatically. Not having to remember all the build options (even if just in .bash_history) is nice, and if I have to put them into code anyway, I might as well automate the build completely.

As for running containers, podman run will now be my go-to for experimentation (instead of docker run). However, most of the time I use docker-compose to orchestrate a local development environment. It’s possible to use Podman’s system service with docker-compose, but then is there much of a difference between that and having the Docker daemon running? It’s also possible to use (simple) Kubernetes Pod/Service YAMLs to achieve something similar, as sketched below, and I’ll be sure to consider that next time. But I don’t think I’ll make the effort to migrate existing projects on my current computer. Not unless Docker kills off free users altogether…
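For instance, a hypothetical Pod spec like this, fed to podman play kube, could stand in for a small docker-compose file:

# devenv.yaml; run with: podman play kube devenv.yaml
apiVersion: v1
kind: Pod
metadata:
  name: devenv
spec:
  containers:
    - name: db
      image: docker.io/library/postgres:14
      env:
        - name: POSTGRES_PASSWORD
          value: example
      ports:
        - containerPort: 5432
          hostPort: 5432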


Adding Grafana annotations based on Argo CD events

I was thinking about how nice it would be if I could see on my main Grafana dashboard (the only one I use at this point, actually) when new versions of something are deployed. This way, if by chance there is a problem with something afterwards, I can see at a glance what could have gone wrong. Also, I really like just looking at that Grafana dashboard and seeing that everything is alive and well. (Except when it isn’t.)

The Grafana chart was upgraded! Nice CPU burn…
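The rough shape of what I had in mind: Grafana’s HTTP API creates an annotation from a simple POST (the time defaults to “now” when omitted), so anything that can fire a webhook on a sync event, such as Argo CD’s notifications engine, can add one. A sketch with a placeholder host, token and message:

curl -s -X POST https://grafana.example.com/api/annotations \
  -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"tags": ["deploy", "argocd"], "text": "myapp synced to v1.2.3"}'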

Fighting an “Unknown MySQL error” when connecting to Aurora from Rails

The other day I was struggling with a very weird error while upgrading to Ruby 3. The initial migrations for a Rails app would fail with “Mysql2::Error: Unknown MySQL error (ActiveRecord::StatementInvalid)”, but only in certain environments. The error occurred when Rails tried to check which migrations had already been applied by looking at the schema_migrations table.

2022-01-19 09:12:59.205 +0000 [DEBUG]    (1.7ms)  SELECT GET_LOCK('4252831219231700070', 0)
/usr/local/bundle/gems/mysql2-0.5.3/lib/mysql2/client.rb:131: warning: rb_tainted_str_new_cstr is deprecated and will be removed in Ruby 3.2
2022-01-19 09:12:59.222 +0000 [DEBUG]    (2.0ms)  SELECT `schema_migrations`.`version` FROM `schema_migrations` ORDER BY `schema_migrations`.`version` ASC
2022-01-19 09:12:59.224 +0000 [DEBUG]    (1.8ms)  SELECT RELEASE_LOCK('4252831219231700070')
rails aborted!
ActiveRecord::StatementInvalid: Mysql2::Error: Unknown MySQL error
/usr/local/bundle/gems/mysql2-0.5.3/lib/mysql2/client.rb:131:in `_query'
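The GET_LOCK/RELEASE_LOCK pair in the log is Rails taking its advisory lock around migrations. One related knob worth knowing about (not necessarily the fix this story ends with) is that those advisory locks can be disabled per environment in config/database.yml:

# config/database.yml (the host is a placeholder)
production:
  adapter: mysql2
  host: <%= ENV["DATABASE_HOST"] %>
  advisory_locks: false  # skip GET_LOCK/RELEASE_LOCK around migrations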

When I accidentally Longhorn CSI

Symptoms: high CPU load on the nodes, but not from the pods. Looking at Grafana, I noticed that CPU load on some of my nodes was constantly very high. At the same time, even the total CPU use of all the pods summed up wasn’t above 0.4. What gives? Usually what gives is the control plane, getting fried by something. It may be trying to relieve disk pressure, or in this case, trying to revive CSI.

Trying to figure out what was causing problems, I checked the pods in kube-system with kubectl get pods -n kube-system. It quickly became apparent that there was a problem: disk-related pods like csi-resizer, csi-snapshotter and csi-provisioner were in CrashLoopBackOff.
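For digging into crash-looping system pods, these are the usual suspects to check (csi-provisioner-abc123 is a hypothetical pod name, substitute a real one from the listing):

kubectl get pods -n kube-system
kubectl -n kube-system describe pod csi-provisioner-abc123     # events, restart reason
kubectl -n kube-system logs csi-provisioner-abc123 --previous  # output of the crashed run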

I’ll be quite honest in that I’m not sure what the problem was. A few searches later I came to the conclusion that an earlier node reboot had left the pods with a corrupted DNS cache or something along those lines. Basically every issue I found with the symptoms I was seeing came down to DNS problems (longhorn/longhorn#2225, longhorn/longhorn#3109, rancher/k3os#811).

Alas, I haven’t touched any of the networking machinery of Kubernetes (nor configured any of it for k3s), so my first idea was the good old line from The IT Crowd: “have you tried turning it off and on again?” So I did. Luckily another restart of the afflicted nodes solved the issue. I’m glad it did, because I dread to think what I’d have had to do otherwise.


Steps to a more stable k3s cluster

It’s all too easy to kill a k3s cluster. I’ve been using k3s for years now and I’ve had plenty of adventures tweaking various aspects of running it. Before, it’d take just a small change to an Argo Application to trigger a cascading failure. Hopefully now it’s a bit more resilient. Just a bit.
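To give a flavor of the kind of tweak involved (a hedged example of one common hardening step, not necessarily one from my actual list): reserving resources for the system so workloads can’t starve the control plane, via the k3s config file:

# /etc/rancher/k3s/config.yaml
kubelet-arg:
  - "system-reserved=cpu=500m,memory=512Mi"  # headroom for system daemons
  - "kube-reserved=cpu=500m,memory=512Mi"    # headroom for kubelet and the runtime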


PDF export with the Ruby 3.0.3 docker image

You might happen to use the wicked_pdf gem for PDF output in your Rails app. You might happen to use the wkhtmltopdf-binary gem to provide the required binaries. You might want to get the above to work on the latest (at this point 3.0.3-bullseye) Ruby Docker image. Short answer: give up. A slightly longer answer: it’s easier than you think.
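For context, the typical wiring looks like this (a sketch of the common setup from the gems’ docs, not necessarily the post’s exact fix):

# Gemfile
gem "wicked_pdf"
gem "wkhtmltopdf-binary"

# config/initializers/wicked_pdf.rb
WickedPdf.config = {
  # use the binary shipped by the wkhtmltopdf-binary gem
  exe_path: Gem.bin_path("wkhtmltopdf-binary", "wkhtmltopdf")
}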