Full-blown SSH servers within Docker containers?
Trying to get a sense of how the networking would go down.
If I had one public IP address and say 4 Docker containers on the host, how would the SSH connections work? Would I have to reserve ports for each container?
How do you assign IP addresses per container? Also, would the port forwarding have to be done on both the router and the host machine itself? I'm personally not looking to expose SSH, but was wondering how you would manage multiple Docker containers on the same machine, give them their own internal IP addresses, and forward them outside the network so they can be reached via DNS or DDNS.
This is the kind of question that orchestrators like Kubernetes are trying to answer. There are definitely ways to get the routing to work correctly, as all containers are essentially NAT'd and that network is then bridged via your host to the outside world.
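If you want to see what that looks like on a host, Docker's default bridge network shows the private subnet the containers sit behind (standard Docker/iproute2 commands, output omitted):

    # show the NAT'd subnet that containers on the default network use
    docker network inspect bridge
    # the Linux bridge Docker creates on the host for that network
    ip addr show docker0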
I assume you want to be able to SSH into your Docker containers so that you can manage them. That isn't really how they are designed to run, as any changes you make while they are running will be lost if they are restarted. It's better to think of them as immutable virtual machines. If you do want to hop into a container while it's running, you can always SSH to the host machine and then run
docker exec -it <docker-container-name> /bin/bash
When it comes to inbound traffic, there are a few options... I mentioned them in my top-level comment, but I think it may be worth illustrating what I do on my own server.
My setup is a single server running an nginx instance (one IP address), which I have pointed all of my domains at (foo.com, bar.com, baz.com). Then I have configured nginx to forward any traffic destined for foo.com to the container I have specified as the handler for foo.com. This can all be done using the jwilder/nginx-proxy container that I linked to in my other comment. I should add a disclaimer that in this setup, you will not be able to expose arbitrary ports or directly SSH into any container.
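Roughly, the moving parts look like this (image names and domains are placeholders, not my exact setup):

    # the proxy watches the Docker socket and generates nginx config automatically
    docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
    # any container started with VIRTUAL_HOST set gets routed by hostname
    docker run -d -e VIRTUAL_HOST=foo.com my-foo-image
    docker run -d -e VIRTUAL_HOST=bar.com my-bar-image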
I think we would benefit from understanding what it is you are trying to do with your docker containers. This would help us give you more specific advice on how to configure your server/network.
There are a few options, although I would preface this with: you are going against industry standards when it comes to containers. I'm sure there are several people here (me included) who would be happy to chat about your requirements and maybe discuss alternative solutions.
With that disclaimer out of the way:
Short answer is yes: with only one public IP you will have to expose each Docker container's SSH port on a separate host port (plus the port for whatever service is being exposed from the container).
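As a rough sketch of the port-per-container approach (image and container names are made up):

    # map a distinct host port to port 22 inside each container
    docker run -d --name shell1 -p 2201:22 my-shell-image
    docker run -d --name shell2 -p 2202:22 my-shell-image
    # users then connect with the matching port
    ssh -p 2201 user@your.public.ip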
Another option is to route an IP block (e.g. a /29, i.e. 8 IP addresses) to the host machine, then configure the containers to each be assigned a public IP and have the relevant traffic forwarded to them. This is supported natively and can make your networking and service management considerably easier to reason about.
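One native way to do that is Docker's macvlan network driver, which puts containers directly on the physical network. A sketch, assuming a routed block, interface name and addresses that you would obviously replace:

    # create a network backed by the host's NIC, using the routed block
    docker network create -d macvlan \
      --subnet=203.0.113.0/29 --gateway=203.0.113.1 \
      -o parent=eth0 pubnet
    # give the container one of the public addresses
    docker run -d --network pubnet --ip 203.0.113.3 my-shell-image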
A third option is to use a proxy with host/route matching: https://github.com/jwilder/nginx-proxy. Although I'm not sure if this supports raw TCP/UDP; it may be HTTP only.
One quick warning about exposing ports on your host machine: when you publish a port publicly on your host, Docker will punch through any iptables (firewall) rules you may have set (there are exceptions to this, but it is the default behaviour). This means that if you have a firewall set up on the host, don't count on it to protect published ports.
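If a published port should only be reachable from the host itself (by a reverse proxy, say), one mitigation is to bind it to loopback; you can also inspect the rules Docker manages (the image name is a placeholder):

    # only reachable from the host, not from the outside world
    docker run -d -p 127.0.0.1:2201:22 my-shell-image
    # list the filter rules Docker manages for published ports
    sudo iptables -L DOCKER -n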
Good luck with this, and I am interested in other solutions to this.
Alright, let me clarify the end goal. I do like the suggestions, though; I'm mainly curious if anything changes after I lay out the desired finished product.
TL;DR end goal: shell provider -- irc (irssi, weechat) with either screen or tmux or a bouncer. Up to the user.
Ideally, I'd have a template Dockerfile that would spin up a very minimal Ubuntu/Debian container. Depending on the pre-selected config options, the container would spin up with any of the packages mentioned in the TL;DR above. From here, I'd be able to issue end users some credentials (likely just have them supply an SSH key prior to spin up) and away they'd go. They would each be allocated some system resources to run a very minimal shell with the express purpose of running a terminal-based IRC session (maybe a bouncer... we'll cross that bridge when we get there).
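To make that concrete, a rough sketch of the kind of template Dockerfile I have in mind (package list, user name and key handling are just illustrative, not a tested image):

    FROM debian:stable-slim
    # minimal shell-provider packages: SSH plus the IRC/multiplexer tools
    RUN apt-get update && apt-get install -y --no-install-recommends \
            openssh-server tmux irssi weechat \
        && rm -rf /var/lib/apt/lists/* \
        && mkdir -p /run/sshd
    # one unprivileged user per container, authenticated by a supplied key
    RUN useradd -m -s /bin/bash shelluser
    COPY authorized_keys /home/shelluser/.ssh/authorized_keys
    RUN chown -R shelluser:shelluser /home/shelluser/.ssh \
        && chmod 600 /home/shelluser/.ssh/authorized_keys
    EXPOSE 22
    CMD ["/usr/sbin/sshd", "-D"]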
I'm thinking that one external address should suffice, then just map different ports back to their respective containers. IPTables isn't exactly a huge concern right now but I do see how that could become an issue...
This is mainly an experiment, although I'd like to see if something could be scaled into a small (micro?) business. Who knows.
Thanks for the clarification, your discussion helps :).
Based on this, it should definitely be possible using a reverse proxy. I had a quick google, and it seems that nginx now supports reverse-proxying TCP streams[1]. Great news: now you can do subdomain-based matching to route different users to different containers.
ie. user1.example.com -> container1
In terms of configuring nginx to dynamically update routes as you register new users, I would recommend the jwilder repo as a starting point[2].
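One thing to watch: plain SSH doesn't present a hostname to nginx, so the TCP/stream side typically ends up matched by port, with the subdomain matching covering any HTTP traffic. A minimal stream sketch based on [1] (addresses and ports invented):

    stream {
        # host port 2201 -> SSH in user1's container
        server {
            listen 2201;
            proxy_pass 172.17.0.2:22;
        }
        # host port 2202 -> SSH in user2's container
        server {
            listen 2202;
            proxy_pass 172.17.0.3:22;
        }
    }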
I think the linked resources should provide a starting point for you to play with. It does require that you have some way to set DNS records (you can use your hosts file for testing). I would be very interested in seeing if you get this working, and what you think of it.
PS: regarding resource restrictions, Docker has built-in tools for that.
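For example (the limits are arbitrary numbers, and the image name is a placeholder):

    # cap memory, CPU and process count per user container
    docker run -d --memory=256m --cpus=0.5 --pids-limit=100 my-shell-image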
Good luck!
[1] https://unix.stackexchange.com/questions/290223/how-to-configure-nginx-as-a-reverse-proxy-for-different-port-numbers
[2] https://github.com/jwilder/nginx-proxy
If you're looking for something more similar to virtual machines, I would recommend LXD instead: https://linuxcontainers.org/lxd/introduction/
Heard of these. What if I was running all this on an existing VPS? Does that change the infrastructure setup with these VMs?
I believe if you can run Docker, you can run LXC. It's the same tech (cgroups).
Since they act more like actual VMs, it's a different idea than Docker, ie. you're not just running a single app in a container but you can run all system services (SSH being the one you're interested in).
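A quick sketch of what that looks like (the container name is arbitrary):

    # launch a full Ubuntu container, then get a shell inside it
    lxc launch ubuntu:16.04 shellbox
    lxc exec shellbox -- bash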
I've mostly used it on my local system (eg I have Ubuntu 16.04, but I also have a container with 14.04 to run some old code that I don't have time to update), but I plan to set this up on a Linode VPS in the next month to migrate some ancient stuff that's running on Debian Lenny.
How small can the LXC VMs get? Ideally, I'd be able to run the whole thing on anything from a 1GB VPS to... the sky is the limit.
In terms of RAM or disk space?
Disk space depends on the distro and so on; LXD can use ZFS or btrfs, so if you have multiple containers running from the same image, you only need to store the differences (ie. the OS space is only used once).
RAM depends on what's running in the container... so you can go as low as you want to go by reducing the number of services running in the container.
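If you want to enforce that, LXD has per-container limits; for example, using the example container from earlier (the value is arbitrary):

    # cap the container's RAM usage
    lxc config set shellbox limits.memory 256MB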
Exactly what I wanted to hear. Thank you!
Don't. Just jump onto the underlying host and exec in.
Also: if you are execing into your containers in prod, you are losing a big part of the benefit that containers were supposed to bring you in the first place, namely that an inspection of the version tag should tell you exactly what is in there. Allow people to jump in willy-nilly and that's all gone.
You want to do inspection? Just use sysdig. It knows all about containers. You can use
ip netns
to get packet captures without entering the container. Basically, you don't need to exec in prod, and shouldn't want to either. Immutable infra all the way!
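For example, since Docker doesn't register its network namespaces under /var/run/netns, you link the container's namespace in first and then capture from the host (the container name is just an example):

    # find the container's init PID and expose its netns to `ip netns`
    pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
    sudo mkdir -p /var/run/netns
    sudo ln -sf /proc/$pid/ns/net /var/run/netns/mycontainer
    # packet capture from the host, inside the container's network namespace
    sudo ip netns exec mycontainer tcpdump -i eth0 -w capture.pcap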
I think this is good advice in general (as well as the sibling thread), but as OP seems to be new to the container ecosystem, it may be confusing to recommend what are essentially 'bleeding-edge' techniques for managing containers.
Disclaimer: bleeding-edge does not mean unstable. Just that the mindset is changing in industry towards immutable/managed-infra, and not everyone is there yet (but soon hopefully!)
While I mostly agree with your top point, I disagree that you should never exec in in prod. Sometimes you need to debug an application in prod, and the only way to do it involves execing into the container. Like two weeks ago when I needed to get a heap dump of an application running in Docker.
I don't know everything about every runtime but I'd be surprised if you couldn't get that heap dump without jumping into the container.
Perhaps I should have emphasised this more. In my experience, as soon as you allow one reasonable breach of the boundary, people just want to pile in for any old thing. I am just a bit jaded, I guess.
Willy-nilly I agree, yes. I work in a massive company, and only our small devops team has the ability to do this.