We’ve got a printer in the office. I’m not sure how the network is organized, but it’s on a different IP range than the rest of the dev network. And for some reason I couldn’t get it to work.
No one else on the team had any issues with it, even though we’re all supposedly on the same network with the same settings. I just couldn’t connect to it no matter what I tried. I tried various drivers, thinking it might be some protocol issue, but it wasn’t.
Long story short
Docker was getting in the way.
The long story
The printer is on the IP range 172.17.0.0/16 (at least somewhere in there), while the rest of the network is on the usual 192.168.0.0/16 (might be smaller, but that’s not relevant). At first I thought I’d typo’d the printer’s IP and that’s why I couldn’t connect. However, after a bit of experimentation it turned out that the whole range was behaving strangely.
It turns out that the range in question is associated with the docker0 network bridge that Docker uses for its network virtualization. I won’t pretend I understand all of its nuances, but I think it then creates a similar bridge for each network (for example in docker-compose). On my home machine this means that the routing table looks something like this:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-a980a6c7b439
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-64c55541cb43
172.29.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-3335d0540b1f
Which means that the printer was there all right, available at the IP address I was given; my local routing table just directed any access attempts to it through docker0, where of course there was no printer waiting.
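If you want to verify this on your own machine, you can ask the kernel which route it would actually pick for the printer’s address. The IP below is a made-up stand-in for the real printer IP, and the exact output varies between machines, but the telling part is the dev docker0 bit:

# Ask the kernel which route it would use for the printer’s address
# (172.17.0.50 is a placeholder, not the real printer IP).
ip route get 172.17.0.50
# 172.17.0.50 dev docker0 src 172.17.0.1
#     cache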
The reason this wasn’t happening to anyone else is that I’m the only one using a Linux machine. Everyone else uses either a Mac or a Windows machine, so in their case Docker runs inside a VM, which “protected” them from this side effect.
The solution
There are (at least) two ways this can be solved.
The ugly way is to bring down all running Docker containers and delete the network bridge. This is what I currently do, because I only use the printer once every few months, so it hasn’t been worth bothering with the clean way. All it takes is using route del to get rid of the troublesome routing table entry.
route del -net 172.17.0.0 netmask 255.255.0.0 gw 0.0.0.0
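If your distribution no longer ships the old net-tools route command, the iproute2 equivalent should (as far as I can tell) do the same thing:

# Same idea with iproute2; assumes the offending route is attached to docker0.
ip route del 172.17.0.0/16 dev docker0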
The clean way is to configure Docker’s networking through the configuration file at /etc/docker/daemon.json. If there were more (and more frequently used) network resources affected by this address collision, this is the approach I’d take.
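I haven’t actually set this up myself, so take the following as a sketch rather than a tested config. As far as I understand the documentation, bip moves docker0 itself to a different subnet, and default-address-pools controls the ranges that the per-project bridges (the br-* entries above) get carved out of. The ranges below are just placeholders I picked because they don’t collide with anything on our network:

{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}

After editing the file you need to restart the Docker daemon (sudo systemctl restart docker on a systemd-based system) for the new ranges to take effect.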