Using NFS mount with docker containers
When I first set up my NUC, I wanted to set up docker on it so that all of its data is stored on the NFS mount I created from my Synology NAS, meaning volumes and anything of that kind.
One issue that came up, however, was that if my router experienced a temporary glitch, the docker containers would also run into problems since they were trying to access data stored on the mounts; my system would freeze and I had to force a shutdown to get the mount working correctly again.
Which makes me wonder: what is the recommended way to have docker containers store their data on an NFS mount while taking into account that a networking or router issue might happen from time to time?
The way I do it at this point is to have the majority of my containers' data (excluding things like media and other large files) stored on the same machine that runs the docker containers. I use a filesystem directory (bind) mount instead of a docker volume, and I just back those directories up regularly to my NAS.
For large data, I create an NFS mount on the docker host, mapped to my NAS. I then still use local filesystem directory mounts for the containers, but those are at risk of having issues if the network connection to the NAS drops.
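Roughly what that looks like as a sketch (the NAS IP, export path, and jellyfin service are just stand-ins here). Mounting with "soft" and a timeout on the host means an unreachable NAS eventually returns an error instead of hanging the whole machine:

# /etc/fstab on the docker host (hypothetical IP and export path)
# soft + timeo/retrans make NFS calls error out after a timeout
# instead of hanging the host when the NAS or router drops
192.168.1.50:/volume1/media  /mnt/nas  nfs  soft,timeo=150,retrans=3,_netdev  0  0

The containers then just bind-mount directories from it:

services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /opt/docker/jellyfin/config:/config   # small app data on the local disk
      - /mnt/nas/media:/media:ro              # large files on the NFS mount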
There's a handful of ways to address this, most of which come down to response versus mitigation. Containers have health check capabilities, and you could code in a check that makes sure a file is accessible and shuts the container down (exits the main process) otherwise. For container creation (bringing existing containers back up, or recreating them from a pre-defined source like a docker-compose file), if you use ansible or something else with access to the host, you can test the file share before starting the container. Some tools may even let you try to remount the share on the host, wait a delay until the network comes back, etc.
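A minimal host-side sketch of that last idea, assuming the share is already in /etc/fstab at /mnt/nas and the compose file lives in /opt/docker (both paths made up here):

#!/bin/sh
# Check the NFS share on the host before starting the stack; retry the
# mount a few times and give up if the network never comes back.
MOUNTPOINT=/mnt/nas

for i in 1 2 3 4 5; do
    if mountpoint -q "$MOUNTPOINT" && ls "$MOUNTPOINT" > /dev/null 2>&1; then
        exec docker compose -f /opt/docker/docker-compose.yml up -d
    fi
    mount "$MOUNTPOINT" 2> /dev/null   # retries the fstab entry
    sleep 10
done

echo "NFS share not available, not starting containers" >&2
exit 1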
This is how I set it up in my docker compose.
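Something along these lines (the NAS address and export path are placeholders): the NFS share is declared as a named volume directly in the compose file, using the local driver's NFS options, so Docker mounts it when the container starts.

services:
  app:
    image: nginx            # stand-in image
    volumes:
      - nas-data:/data      # container sees the NFS share at /data

volumes:
  nas-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,nfsvers=4,rw,soft,timeo=150"
      device: ":/volume1/docker/app"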
Add a health check as well.
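For example, something like this on the same service (just a sketch; the test assumes /data is the NFS-backed path from above):

services:
  app:
    # same service as above, plus:
    healthcheck:
      # mark the container unhealthy if the NFS-backed path stops being readable
      test: ["CMD-SHELL", "ls /data > /dev/null || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

Note that Docker on its own only reports the unhealthy status; to actually restart or stop the container when the check fails, you need something watching it, e.g. an orchestrator or a helper container like willfarrell/autoheal.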