12 votes

Tips for Docker security on a NAS?

How do you make sure that your Docker containers don't go rogue and start snooping around or contacting external servers that they shouldn't be talking to? Is there a network traffic monitoring program that I could use? Or a service that would notify me about vulnerabilities in containers that I have installed?

Some background:

Last year, I asked for help setting up my new Synology NAS, and many of you wonderful people offered some really, really good advice. I have recently started to play around with Docker containers more, and I am a little uneasy that my NAS is home to my files and my own scripts as well as Docker containers made by other people, that it is always on, and that these containers have constant internet access. I don't have the time (or, frankly, the skills) to verify the contents of the containers beyond making sure that they come from reputable sources, but I would like to have a bit more peace of mind and make sure that things remain private and secure.

My setup at the moment is the following: I have a Synology DS923+ and I manage Docker containers with Synology's Container Manager, using docker compose files. I have so far put all containers on the same virtual network (perhaps something I need to think about), which is on a separate IP range from my other devices and has internet access through my DNS server. I use Synology's DNS Server (for everything in my home network) and Reverse Proxy so that I can use local domain names and HTTPS. For HTTPS, I have made myself a certificate authority, created the necessary certificates, and installed them on my devices. No ports are opened on the router, and things like UPnP are turned off. I use Tailscale to access my home network when not at home. And while I have not yet done so, I have been considering setting up some firewall rules, for instance to restrict access to DSM. I use 2FA for the NAS, and its SSH is turned on only when I need to use it.
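
To illustrate what splitting that shared network could look like, here is a minimal compose sketch; the service and network names are made up, and the key detail is that a network marked internal: true gets no outbound internet access at all:

services:
  web:
    image: nginx:alpine        # example internet-facing service
    networks: [frontend]
  app:
    image: my-app:latest       # hypothetical application image
    networks: [frontend, backend]
  db:
    image: postgres:16         # example service that never needs the internet
    networks: [backend]
networks:
  frontend: {}
  backend:
    internal: true             # Docker gives this network no external route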

16 comments

  1. [11]
    iamnotree
    Link
    Someone correct me if I'm wrong, but as long as you have your Docker containers running on their own Docker network, and that traffic is not set up to pass to other Docker networks or the host network, then you are good.

    Other than that, and my personal preference not to trust Tailscale, I think you're good.

    8 votes
    1. [4]
      CaptainAM
      Link Parent
      To add to this, check the permissions on each container as well. Don't use root users, and never, ever run a container in privileged mode!
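
      For instance, a compose-level sketch of those hardening knobs might look like the following; the image name is hypothetical, and whether a given image tolerates every option varies:

      services:
        app:
          image: some-app:latest       # hypothetical image
          user: "1000:1000"            # run as a non-root UID:GID
          read_only: true              # mount the root filesystem read-only
          cap_drop: [ALL]              # drop all Linux capabilities
          security_opt:
            - no-new-privileges:true   # block setuid-style privilege escalation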

      6 votes
      1. [2]
        teaearlgraycold
        Link Parent
        Oh boy. You're saying my docker containers shouldn't run apps as root?

        2 votes
        1. vord
          Link Parent
          Yea, pretty much. It's a huge oversight for a lot of containers.

          Creating a non-root user in the container and running the process as that user avoids a whole lot of exploits.

          1 vote
      2. vili
        Link Parent
        This helpful reminder made me question whether I actually understand what I am doing. When I spin up a container with a docker compose file, I use PUID and PGID variables in the environment section, with PUID pointing to a special docker user that I have created, and PGID to its user group. This user has very limited access to the file system, among other things.

        But now that I read into it more, I see that not all containers support PUID and PGID. I had thought it was a Docker standard, but it isn't? Additionally, I have no idea how to confirm what user a given container is actually running under. How can I do that?

        If I run "id" on the container itself, or if I run "docker exec mycontainer id" from the host machine, the response always notes root as the user. But, if I understand this correctly (and this may be a big if), this makes sense, as that just lists the container's internal user, which tends to be root, and is not the same as (but is mapped to?) the user that runs the container on the host machine?

        Are we even talking about the same thing when you say not to use root users?
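
        For reference, PUID and PGID are a convention of particular image publishers (LinuxServer.io being the best-known), not a core Docker feature; Docker's native mechanism is the user: key in compose. A few standard commands should show what a container actually runs as, reusing the hypothetical mycontainer name from above:

        # The user the container is configured to start as (empty output means root)
        docker inspect --format '{{.Config.User}}' mycontainer

        # The container's processes as the host sees them, including their UIDs
        docker top mycontainer

        # Map the container's main process back to a host-side user
        ps -o user,pid,cmd -p "$(docker inspect --format '{{.State.Pid}}' mycontainer)"

        Note that docker exec starts a new process as the configured user, which is why id can report root even when the long-running app process has been dropped to the PUID user by the image's entrypoint.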

        1 vote
    2. [6]
      vili
      Link Parent
      Thanks! What about a container that needs internet access and secretly contains code that contributes to a botnet or something similar? Would there be any way for me to detect that?

      1. [3]
        tigerthelion
        Link Parent
        Just adding to what @krellor said. Your concerns about containers 'calling home' or 'sniffing around' are in general valid, but if you stick to widely popular containers, it's probably not something to be concerned about. If you don't have the ability to validate the code, you can at least go to the git repo, check the issues board, and look at the number of downloads there too. I'd be far more focused on inbound intrusion, and from your post it seems like you have done well to mitigate that.

        I am not familiar with the Synology DNS module, but perhaps it has a logging mechanism? If not, you could install something like Pi-hole, which logs all DNS queries, their source, and their destination. You could also block those types of requests using Pi-hole's filter functions. Probably not perfect, as it only captures traffic that needs to resolve an IP from a domain name, but it would give you a general view of what's going on. Tools like Wireshark can be useful as well, but I think that's more about packet inspection.

        2 votes
        1. [2]
          vili
          Link Parent
          If you don't have the ability to validate the code, you can at least go to the git repo, check the issues board, and look at the number of downloads there too.

          I generally do that when installing a new container, but I must confess that I don't really see myself doing that when updating containers. I have done some Node.js development in recent years, and while the npm package manager is far from perfect, I have found the way it notifies me about vulnerabilities in my installed packages helpful. I'll try to look for something like that for my Docker containers.
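
          One option along those lines is an image scanner such as Trivy, which can be pointed at an already-pulled image; the image name below is only an example:

          # Report known CVEs in a local image, filtered to the worst ones
          trivy image --severity HIGH,CRITICAL linuxserver/jellyfin:latest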

          Pi-hole

          This could indeed be helpful, and I have been considering setting it up for other (understandable) reasons anyway. Thanks for pointing out how it could help me with Docker monitoring as well. I've also seen Wireshark mentioned here and there, I think I'll need to take a closer look at that as well.

          1 vote
          1. tigerthelion
            Link Parent
            but I must confess that I don't really see myself doing that when updating containers.

            Completely reasonable. If I am ever concerned about a particular container, I star the repo and get notified when a release is created. Usually the release has a changelog, and sometimes I browse the commits (I'm a software dev as well). Normally I am not at all concerned about popular containers, so this is very rare.

            1 vote
      2. [2]
        krellor
        Link Parent
        Where are you getting containers? Many of the containers on Docker Hub reference git repos with their code. Alternatively, it's not hard to containerize many existing applications, and most of the containers you are running are probably just wrappers around other freely available software.

        As far as detecting that goes, you'd want a more robust home firewall. A firewall with an IDS, traffic profiling, and content/application filtering would let you do that to varying extents. Many of those have a license cost. Probably the cheapest I know of is Arista NG Firewall, which is really just Untangle (they bought it), and it's $50/year for the home license. You can even do HTTPS interception if you want to bother with managing certs on your devices.

        Also, if the malicious traffic requires DNS to resolve its target, then a DNS filter service like AdGuard might help. It would redirect queries to malicious or blocked domains to a landing page.

        1 vote
        1. vili
          Link Parent
          Where are you getting containers?

          Synology's Container Manager contains an image registry and handles the downloads. Basically, it's a slightly broken GUI for Docker Hub.

          I'll look into firewalls, thanks for the suggestion.

          1 vote
  2. Weldawadyathink
    Link
    Be wary of passing docker.sock to a container. For various reasons, docker.sock always gives complete root access to the host system, no matter how locked down you make it. In fact, allowing users to run docker without sudo gives the same complete root access, bypassing sudo entirely. That being said, I set up most of my servers to either log in as root by default or allow docker without sudo, for convenience.
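
    The standard illustration of why docker-group membership is root-equivalent: any such user can mount the host's filesystem into a throwaway container and chroot into it.

    # No sudo needed if your user is in the "docker" group, yet this
    # drops you into a root shell on the host's filesystem:
    docker run --rm -it -v /:/host alpine chroot /host /bin/sh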

    6 votes
  3. [2]
    ShroudedScribe
    Link
    Consider podman instead. It's mostly Docker-compatible. I personally spin those containers up with ansible instead of docker-compose, and I've found it more elegant.
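
    As a sketch of what that ansible route can look like, the containers.podman collection ships a podman_container module; the names and paths below are hypothetical:

    - name: Run an example service under podman
      containers.podman.podman_container:
        name: exampleapp                       # hypothetical container name
        image: docker.io/library/nginx:alpine
        state: started
        ports:
          - "8080:80"
        volumes:
          - /srv/exampleapp/config:/config:Z   # :Z relabels for SELinux hosts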

    As far as containers that use the internet go, there's really no way to know if they're behaving appropriately unless you're doing some monitoring. You could monitor bandwidth usage and see if a container is generating more traffic than usual. You could monitor connections more precisely in a handful of ways, but I don't have any recommendations for that.
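
    For the bandwidth angle, even plain docker gives a coarse per-container view; a one-shot snapshot of cumulative network I/O may be enough to spot an outlier:

    # One-shot table of each container's cumulative network I/O
    docker stats --no-stream --format "table {{.Name}}\t{{.NetIO}}"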

    I'm interested in better suggestions for monitoring too. There was a semi-recent rogue plugin that got installed on my Emby server (a container) that I was only made aware of after updating Emby, seeing it shut down promptly, and reading about it in the log. I'm a little upset that I couldn't find more details on it beyond a forum thread, but at least the devs did something about it.

    5 votes
    1. ButteredToast
      Link Parent
      On the topic of the Emby plugin, there really needs to be some kind of architectural revolution for plugins across the whole of desktop software.

      For things like media servers, plugins could run in something like a sandboxed WebAssembly environment, where the only possible inputs and outputs are those surfaced by an API and the only connections allowed are to a static list of URLs. There are always going to be exploits, of course, but for practical purposes that would make plugins an ineffective vector.

      1 vote
  4. kenc
    Link
    The OWASP foundation has a great cheatsheet for Docker Security.

    Sometimes, containers[1] require mounting docker.sock to run:

    volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    

    In these cases, Traefik's documentation suggests using a socket proxy that acts like a firewall for the Docker socket.
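
    A compose sketch of that pattern, using the commonly cited tecnativa/docker-socket-proxy image (the environment flags shown are illustrative; each one whitelists a slice of the Docker API):

    services:
      socket-proxy:
        image: tecnativa/docker-socket-proxy
        environment:
          - CONTAINERS=1                 # allow read-only container queries
          - POST=0                       # deny all mutating API calls
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      traefik:
        image: traefik:v3.0
        command:
          - --providers.docker=true
          - --providers.docker.endpoint=tcp://socket-proxy:2375

    The consuming container then talks to the proxy over TCP and never touches the socket itself.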


    [1]: Traefik, Watchtower, etc.

    4 votes
  5. ShroudedScribe
    Link
    "Static list of URLs" is a fantastic idea. I could theoretically create some advanced networking restrictions to do so, but it's a bit harder with emby as a whole since I do allow outside...

    "Static list of URLs" is a fantastic idea. I could theoretically create some advanced networking restrictions to do so, but it's a bit harder with emby as a whole since I do allow outside (internet) connections to it from family.