When trying to pass data between a Docker container and the host, using ADD in the Dockerfile might be sufficient at first. However, it's one-way, gets baked into the image, and so is very inflexible.
The usual solution is to mount folders using docker's -v option. It's simple, easy to use and pretty reliable. Just add -v "$(pwd):/root" and the current folder will be mounted to the /root folder in the container.
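For example, a full invocation could look like the following (the image name my-app and the command are placeholders, not anything from a real setup):

```shell
# Mount the current directory at /root inside the container.
# "my-app" is a placeholder image name; swap in your own.
docker run -it -v "$(pwd):/root" my-app ls /root

# Append :ro to the spec to make the mount read-only from inside:
docker run -it -v "$(pwd):/root:ro" my-app ls /root
```

The host side of the spec has to be an absolute path, which is why $(pwd) is used instead of a plain dot.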
Using volumes is nice because they can be two-way and can sync in real time. Now you don't need to rebuild your image every time you fix a typo. -v has pretty deep configuration options too, in case you want to go down the rabbit hole.
I prefer going through the looking glass though, using docker-compose. It's a tool that lets you easily compose multiple containers into one system. A common setup would be your app server image, maybe the frontend served from a separate container, plus a database and a Redis cache.
When using docker-compose, volumes can be defined in docker-compose.yml. The options are again very rich.
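A minimal sketch of such a docker-compose.yml, with made-up service and image names, might look like this:

```yaml
version: "3"
services:
  app:
    image: my-app              # placeholder image name
    volumes:
      - .:/root                # bind mount, same idea as -v "$(pwd):/root"
      - gems-volume:/root/gems # a named volume
volumes:
  gems-volume:                 # named volumes are declared at the top level
```

Bind mounts and named volumes live side by side in the same list, which will matter later.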
I had a problem though. In my work environment, everyone using Docker uses a Mac. However, I have a religious opposition to using piles of stinking trash for work, so I run Linux. On Mac, until very recently it was pretty much mandatory to use the docker-sync tool to mount volumes, otherwise you'd face prohibitively slow sync speeds between the NFS host and the containers. And having to wait five minutes for your Rails app to start, every time you have to restart the container for some environment change, is not fun.
Now I could just as well install docker-sync on Linux too, but I wanted to see if I could mount external volumes with docker-compose without having to change docker-compose.yml (which everyone else uses too).
Shortcut to the conclusion: it's possible, but it's hacky. If you don't juggle volumes around much, it might well work, but docker-sync is probably a safer, better option. Then again, don't expect miracles.
So what did I do? For most of the process, be extremely annoyed. It's plain ridiculous how docker -v, docker-compose's volumes and docker volume supposedly deal with the same stuff, yet none of them is consistent in how or what they expect in their options. It's horrible "User Experience."
You can easily mount a folder with docker -v as shown above, and mounting one in your docker-compose.yml is pretty trivial too. So… it shouldn't be hard to create a named volume just like that too, right? Dream on.
docker volume has basically no options, and it feels more like an API meant for plugin developers than for users. After hours of looking, I still had no idea how to mount a folder from the host's file system in a container.
I actually got fed up with trying and resorted to hacking. What I did: create a named volume, extract its mount point from docker volume inspect, delete that mount point and create a symlink there to the location I want mounted. I haven't done any performance testing or high-throughput syncs, but for now it seems to work just fine, even across partitions. Except you can't delete the "hacked" named volume, because Docker craps itself when it sees the symlink.
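The mechanics of the hack can be simulated without Docker at all; it's nothing more than swapping the volume's _data directory for a symlink. All the paths below are throwaway stand-ins for the real /var/lib/docker layout:

```shell
# Simulate a volume's storage layout with temp directories.
store=$(mktemp -d)                 # stands in for /var/lib/docker/volumes
mkdir -p "$store/my-volume/_data"  # what `docker volume create` would make
target=$(mktemp -d)                # the host folder we actually want mounted

# The hack: drop _data and point a symlink at the real folder instead.
rm -r "$store/my-volume/_data"
ln -s "$target" "$store/my-volume/_data"

# Anything written "into the volume" now lands in the target folder.
echo hello > "$store/my-volume/_data/test.txt"
cat "$target/test.txt"    # → hello
```

As far as the process writing into the mount point is concerned, nothing changed; the symlink just redirects where the bytes end up.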
$ docker volume rm my-named-volume
Error response from daemon: unable to remove volume: remove my-named-volume: Unable to remove a directory of out the Docker root /var/lib/docker: /home/valerauko/Containers/my-synced-folder
Now some code:
#!/bin/sh
fake_volume() {
  point=$(docker volume inspect "$1" -f "{{.Mountpoint}}")
  cd "$point/.."
  rm -r _data
  cd -
  ln -s "$(pwd)/$2" "$point"
}

docker volume create app-volume
fake_volume app-volume
docker volume create gems-volume
fake_volume gems-volume ../gems
I put that into a single sh file and run it to set up my environment. It only needs to be run once (when the volumes are created), and it needs sudo because Docker's files (at least on my install) are all owned by root:root with permissions 700.
This also means that if some process (like Bundler's install or Webpack's packaging) creates files from inside the container, those will show up as owned by root. This might disrupt your git checkouts, as you'll get permission-denied errors on them. It can be resolved by running chown -R youruser:yourgroup . in the folder.
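To spot the offending files first and then reclaim them in one go (the chown target just assumes your current user and group):

```shell
# List files under the project that ended up owned by root:
find . -user root -print

# Hand them back to your own user/group (run from the project root):
sudo chown -R "$(id -un):$(id -gn)" .
```
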
fake_volume accepts two parameters: the first is the name of the volume to hack, and the second is an optional path, relative to the current folder, to symlink that volume to. It assumes that Docker uses the _data folder (which it currently does by default), so if that were to change in some future version, the script would break.
Or this experiment may break your system. Don’t do it unless you know what you’re doing (and it’s your responsibility).