And I’m not talking about the meme with the guy in the corner thinking “they have no idea.” I’ve been playing around with k0s again, trying out alternative runtime and network plugins. It of course has to be dual-stack in 2026, but that’s where I ran into some unexpected problems.
Namely, if I didn’t specify --node-cidr-mask-size, it complained that “New CIDR set failed; the node CIDR size is too big.” Looking at journalctl -xeu k0scontroller.service, I noticed it seemed to be passing --node-cidr-mask-size=0, which is of course too big (if true): a mask size of 0 would mean each node gets the entire address space.
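For context, dual-stack in k0s is switched on in k0s.yaml rather than on the command line; the relevant part of my config looked more or less like this (more on where the IPv6 numbers came from in a moment):

```yaml
# k0s.yaml (excerpt) -- roughly what I started with; the IPv4 side
# was left at the k0s defaults
spec:
  network:
    dualStack:
      enabled: true
      IPv6podCIDR: fd00:dead:beef::/56
      IPv6serviceCIDR: fd00:dead:cafe::/112
```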
So I tried setting it, and instead I was rewarded with a different error: “usage of --node-cidr-mask-size is not allowed with dual-stack clusters.” Hmm. So if I configure it, it’s an error. If I don’t, then it’s also an error. That sounds like a problem.
Of course I didn’t give up. I tried setting --node-cidr-mask-size-ipv4 and --node-cidr-mask-size-ipv6 instead (see the sketch below for how those reach kube-controller-manager), but neither error changed. I even looked at the k0s source code to make sure I wasn’t running into an argument parsing bug, but that wasn’t it either. So somewhere inside the Kubernetes machinery, something other than the node-cidr-mask-size options was causing an error that got blamed on them. What else could be “too big” of a mask?
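As an aside, here’s a sketch of how those flags can be handed through to kube-controller-manager from the k0s config; the mask sizes below are made-up examples, and the exact key path is worth double-checking against the k0s docs for your version:

```yaml
# k0s.yaml (excerpt) -- passing extra flags through to kube-controller-manager;
# the mask sizes here are illustrative only, not what finally worked
spec:
  controllerManager:
    extraArgs:
      node-cidr-mask-size-ipv4: "24"
      node-cidr-mask-size-ipv6: "64"
```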
I’d been reusing numbers from the k3s dual-stack setup, which meant I had a /56 pod CIDR (IPv6podCIDR: fd00:dead:beef::/56) and a /112 service CIDR (IPv6serviceCIDR: fd00:dead:cafe::/112). The /112 was even smaller than the /108 noted in the k0s docs, so that couldn’t be it. Then I tried shrinking the /56 and voilà, that fixed it. I ended up using /108 for the pod CIDR too, since that’s still more than enough.
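For completeness, this is roughly the dual-stack section I ended up with; the service CIDR stayed at /112 and the IPv4 side stayed at the defaults:

```yaml
# k0s.yaml (excerpt) -- the working version
spec:
  network:
    dualStack:
      enabled: true
      IPv6podCIDR: fd00:dead:beef::/108    # was /56, which turned out to be the "too big" mask
      IPv6serviceCIDR: fd00:dead:cafe::/112
```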