You might happen to use the wicked_pdf gem for PDF output in your Rails app. You might happen to use the wkhtmltopdf-binary gem to provide the required binaries. You might want to get the above to work on the latest (at this point 3.0.3-bullseye) Ruby docker image. Short answer: give up. A bit longer answer: it’s easier than you think.
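For context, the moving pieces are roughly the following; this is a sketch of the typical wiring, not the post’s exact setup, and the `exe_path` lookup is an assumption on my part (wicked_pdf can often locate the binary on its own):

```ruby
# Gemfile
gem 'wicked_pdf'          # renders PDFs from HTML via wkhtmltopdf
gem 'wkhtmltopdf-binary'  # ships precompiled wkhtmltopdf binaries per platform

# config/initializers/wicked_pdf.rb
# Assumption: point wicked_pdf at whatever binary the gem installed.
# On a stock ruby:3.0.3-bullseye image the shipped binary may still need
# extra shared libraries (or a newer gem release) before it runs at all.
WickedPdf.config = {
  exe_path: Gem.bin_path('wkhtmltopdf-binary', 'wkhtmltopdf')
}
```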
Patching delayed_job for Ruby 3
Monkey patching is bad. That’s where you should start from. It can cause trouble where you’d least expect it, conflicts with libraries you’d least expect in ways you’d least expect. And yet here I am sharing code for patching the delayed_job gem to (more or less) work with Ruby 3. Doesn’t this violate my own policies? There are a few choices.
- give up upgrading to Ruby 3 altogether
- monkey patch delayed_job as an emergency fix (sketched below) and make time to figure out what to do
- contribute to delayed_job making sure the gem is solid on Ruby 3
- get rid of all the `.delay` calls and switch to another async job library
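For a sense of what such an emergency patch tends to look like, here is a minimal sketch of the commonly suggested `ruby2_keywords` approach; it is not necessarily the exact code from the post, and it assumes the breakage is Ruby 3’s positional/keyword argument separation biting `DelayProxy#method_missing`:

```ruby
# config/initializers/delayed_job_ruby3_patch.rb
# Emergency monkey patch (sketch): flag DelayProxy#method_missing with
# ruby2_keywords so a trailing keyword-argument hash keeps its keyword
# semantics when the delayed call is eventually re-splatted.
require 'delayed_job'

Delayed::DelayProxy.class_eval do
  # ruby2_keywords only applies to methods taking a *rest argument,
  # which method_missing in delayed_job does.
  ruby2_keywords :method_missing if respond_to?(:ruby2_keywords, true)
end
```

Whether that flag survives the serialization round trip of the job arguments is exactly the kind of caveat hiding behind “(more or less) work” above.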
Stuff that broke in Rails 6.1
Rails uses a “shifted” “semantic” “versioning” which pretty much comes down to the following. Major version: “we’ll most definitely break everything you ever depended on, half of it without warning.” Minor version: “we’ll probably break plenty of the stuff you depend on, some of it without warning.” Patch version: “we might accidentally break some core APIs, but we promise it’s not intentional (or documented).” Knowing that, I still embarked on the grand endeavor of upgrading from Ruby on Rails 6.0.4.1 to 6.1.4.1. What could possibly go wrong, right?

Breaking bad, or versioning is hard
Rich Hickey will tell you that breaking changes are horrible and versioning is stupid. The idea is nice. No breaking changes, ever. You get the API design of whatever you’re building perfectly right on the first try. Oh wait. Obviously no one can do that, and no one could ever do that.
The question then becomes just how long you’re willing to carry the dead weight of code you don’t really want to carry anymore. Or rather, how long you’re able to pay the cost of maintaining a potentially very problematic old API design.
What’s the deal with types?
I’ve never used Haskell. I won’t claim I’m good at Rust. I mostly work with Ruby and Clojure, both dynamic languages where you don’t really need to worry about types. But then of course that’s not true. Even if you put Rails’s magic aside, it’s way too easy to write code that accidentally works (in an absolutely unintended fashion).
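To make that concrete, here is a made-up illustration (not an example from the post) of code that happily runs and quietly does the wrong thing:

```ruby
# A made-up illustration: the same method "works" for integers and strings,
# but silently does two very different things.
def add_totals(a, b)
  a + b
end

add_totals(40, 2)     # => 42
add_totals("40", "2") # => "402" -- no error raised, just a wrong answer
```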

What’s an ideal database?
I’ve been reading about and considering language design choices (for my new pet project), and one thing I really like (though I rarely actually use it in practice) is Clojure’s transducers. I couldn’t find it in the talk introducing them, but I vaguely recall someone vaguely recalling that Rich Hickey said Clojure would have much less laziness if he’d found the idea of transducers sooner.
Then, in a completely different train of thought (maybe there could be transducers, process transformations, for thought processes as well?), I got to thinking about databases. I went over the databases I’ve used so far, the things I tried to achieve with them, the difficulties and the nice parts.
Writing a lisp-ish compiler in Rust
It was a while back that I got a notice from Shibuya lisp that its 100th event was coming up. It’s a (Common) Lisp/Clojure meetup in Tokyo (though held online since covid). I don’t know if it’s a common thing among lispers, but everyone there seems to at least try writing their own lisp (and talk about it) at some point.
Before that I wasn’t particularly interested. I could do most of what I wanted in Clojure without too much pain. Then I tried writing a (performant) wrapper around Netty and it got a bit more painful. Things like `nth` calls on function argument lists started showing up on my flame charts (testing with 100 million requests), and rough edges around interop cut my hands (hello `proxy` and abstract classes).
Upgrading my cluster
My cluster is now running on k3s 1.20.6 and Argo CD 2.0.0 with its Helm chart at 3.2.2. Actually, upgrading Argo itself wasn’t much of a problem. I just changed the `targetRevision` of the `Application` and it was up and running in a few minutes. Then a few days later things got interesting.
There was no downtime, but I noticed that Argo started failing to sync itself. Apparently a new minor version of the Helm chart had come out (still the same application version) that added support for the `networking.k8s.io/v1` version of `Ingress`. However, it also accidentally broke clusters running Kubernetes versions before 1.19, and mine was one of them.
While the Argo people figure out how (and whether) to fix this, I decided to take the opportunity to upgrade my cluster. This wasn’t as painless as it should have been, though.
Trying out Clojerl
Erlang is a language Ericsson developed ages ago for telecom systems, famous as an extremely robust and stable platform. It is actually used in mobile networks around the world and boasts 99.9999999% (“nine nines”) availability. Lately, Elixir, a Ruby-flavored language running on Erlang’s BEAM, seems to be getting popular. As a functional language designed around distribution, it felt close to Clojure to me. And there is, in fact, an implementation of Clojure on top of the BEAM.
Clojure originally targeted both the JVM and .NET’s CLR, later became JVM-only, and with the arrival of ClojureScript it came to run on JavaScript as well. So at its core it may be less tied to any particular runtime than you’d think. Clojerl, the Clojure implementation on the BEAM, is a community effort rather than an official Clojure port, but it sits right at the intersection of the BEAM, which I’d wanted to try, and Clojure, which I like enough to pick whenever I have the choice, so I couldn’t not give it a try.