What do you do to secure your Linux server when exposing ports to the internet?
I've been self-hosting for a few years. However, now I have a single server hosting all of my things whereas, before, I had multiple old machines physically isolating services. I'm getting ready to host a game server or few (at least one for Tilde) on this machine.
While I'm not a neophyte to Linux, I'm not a guru and definitely not an infosec expert.
Given that, what steps do you take to secure a Linux server, LXCs, and docker containers that are receiving port-forwarded internet traffic?
FWIW, I expect I'll have an LXC running docker containers. I could instead run a VM doing the same.
Advice welcome!
In no particular order:
Some things I've been meaning to set up but keep getting distracted from doing:
Heavy plus to podman. Not only is getting containers out of root great, I've also found that it's nice to just not have to deal with docker's daemon anymore. I initially switched completely to podman because rootless docker is so painful to get working right and it's been smooth enough that I've never looked back.
For every use case I've personally encountered, it also "just works" to not have a real docker install and to instead just alias docker to podman. It probably seems a bit silly to do, but some tools (e.g. VS Code dev containers) just assume they can run an executable named "docker", and podman tends to be plenty compatible.
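A minimal sketch of what that looks like, assuming a Debian/Ubuntu-style system (package names may differ by distro):

```
sudo apt install podman             # rootless by default for non-root users
echo 'alias docker=podman' >> ~/.bashrc
# Many distros also ship a "podman-docker" shim package that provides a
# /usr/bin/docker wrapper, which helps tools that exec "docker" directly
# (a shell alias won't be seen by those):
sudo apt install podman-docker
podman info
```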
Won't pulling images daily expose you to more potential breakages or security risks, if the service that generates your image gets bought out or compromised?
Granted, docker is a good idea for all of your hosted services, but I've been lazy about it personally since I use a pretty anemic home server. Whenever I upgrade the hardware I'd like to adopt docker for my services but I'm a little afraid I'm introducing a gnarly layer of potential breakages for no real security gain.
Yes and no? Sure, I might accidentally pull down a 0-day or a compromised image... but pulling daily makes it much less likely that I'm sitting for a week with an unpatched 0-day because I missed the news.
That's where building your own images comes into play. It lets you repave with OS updates separately from app updates... which is extra important if the app doesn't release regular security updates itself.
The ideal seems to be very lightweight app containers, like distroless images. Most apps don't depend on having ssh access, for example, so if you can fully isolate your dependencies, there will be very few container updates to deploy, simply because there aren't many security updates for your small number of dependencies.
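As a rough sketch of the distroless idea (assuming a statically compiled Go app; Google's distroless base image is real, everything else here is illustrative):

```
# Build stage: compile a static binary, then copy only that into a distroless image
cat > Dockerfile <<'EOF'
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: no shell, no package manager, no sshd -- just the binary
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
docker build -t myapp:latest .
```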
However, there is a big caveat in all of this: with "normal" security updates (via apt, dnf, etc.) there is some level of due diligence that you don't get by relying merely on major/minor version tags. That due diligence can be automated through tests, but having a few humans in the update pipeline helps smooth out anything weird, like a package being renamed so that the old version still returns HTTP 200 while the new version with the security patch lives under a different name.
Everything in docker +1. docker-compose files all over the place. I really should version these....
I've broken a few services by updating images nightly. But then I'd tend to be far less concerned about regular updates for non-internet facing services.
ACK on no public-facing SSH. That would make me nervous. I'm not even seriously considering exposing HTTP server ports. Definitely HTTPS, probably using LetsEncrypt, if I went that way.
Interesting to note: public-key SSH is essentially immune to brute-forcing and MITM attacks. You can still end up on a honeypot if you ignore the host-keys-don't-match warning, but that alone never gives them access to your server, because the key exchange isn't replayable... however, if you use ssh-agent with agent forwarding, the honeypot can then ask your agent to authenticate to the real server (depending on your configuration).
But yeah, for a more targeted attack, denial of service is pretty easy via sshd. So if only for that reason, keep it on a private network.
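On the ssh-agent caveat specifically: the exposure comes from agent forwarding, so a minimal client-side sketch (plain OpenSSH) is to keep it off unless you opt in per host:

```
# Refuse agent forwarding by default so a honeypot or compromised server
# can't relay authentication requests through your local ssh-agent
cat >> ~/.ssh/config <<'EOF'
Host *
    ForwardAgent no
EOF
# If you do need forwarding somewhere, adding keys with `ssh-add -c`
# makes the agent prompt for confirmation on every use
```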
Definitely not in infosec either, and you probably have more Linux system administration experience than me; I barely follow most of the advice I'm about to give very well or vigilantly. But for most of us small-time individuals, following the basics, like monitoring logs, setting up SSH keys, keeping file and directory permissions appropriate, and using a WireGuard VPN to access any exposed services, will probably be enough against the majority of threat actors and script kiddies running IP scans for open ports.
Fail2ban or Crowdsec with geo blocking and other security measures enabled will help an additional percentage, although nothing is truly foolproof and sometimes I worry these give a false sense of security.
If you're concerned about ransomware, consider using ZFS and setting up snapshots so you can roll back.
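A quick sketch of what that looks like (the pool/dataset name is just an example; tools like sanoid or zfs-auto-snapshot can schedule this):

```
# Take a snapshot before risky changes (or on a schedule)
sudo zfs snapshot tank/data@pre-update
# List snapshots for the dataset
sudo zfs list -t snapshot -r tank/data
# Roll the dataset back if ransomware or a bad upgrade trashes it
# (add -r if newer snapshots exist and you're willing to discard them)
sudo zfs rollback tank/data@pre-update
```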
I'm not a gamer so I'm unsure of any game server specific security measures one should take.
Ah, good. I use ZFS. Haven't set up rollbacks.
Good suggestions!
I always try to keep my Debian install up to date, but major OS upgrades sometimes require enough effort that I often lag a bit behind. Fortunately they tend to backport security updates for a while, but I wonder what I'll do if I ever hit a true breaking change that disrupts one of my services.
I've often considered switching to Ubuntu on my server since major upgrades are basically a nonissue one-liner. But I'd love to hear if anyone else worries about this, and if any other distro solves the problem better or easier (without snap garbage that I have to manually disable).
Besides my OS and wireguard, I don't worry about it, because Wireguard is the only service I directly expose to the outside world. Everything else is local to my network.
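For anyone setting that up from scratch, a bare-bones sketch with wg-quick (the addresses, interface name, and placeholder keys are all illustrative):

```
# Generate a server keypair
wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey | sudo tee /etc/wireguard/server.pub

# Minimal server config; the client config is the mirror image with its own keys
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey  = <client public key>
AllowedIPs = 10.8.0.2/32
EOF

sudo systemctl enable --now wg-quick@wg0
# Only UDP 51820 gets forwarded at the router; everything else stays closed
```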
Honestly, my biggest worry is my router. I keep considering an OpenWRT One so I can guarantee security updates for a long time, but I'm concerned about breaking the network for any length of time and eating up days with tedious configuration.
I make sure that I always have a router with the MAC cloned from my current router so that if the main router breaks for any reason, I have drop-in basic functionality.
That's an excellent idea, especially since my current ISP requires no modem and simply provides an ethernet hookup. I strongly suspect they whitelist by MAC address. I've been looking at the OpenWRT One and some of Flirc's routers; perhaps a spoofable MAC will make up my mind!
Yea, you set up the new one as a double-NAT behind the old one. Once it's set up, clone the MAC and swap them out.
My pair of WRT1900AC has A/B firmware, so the secondary one gets used as a dumb access point, but has the previous-version main-router firmware as the alternate firmware.
Update process goes:
So every patch they switch roles, and as such either can stand in for the other by booting the backup firmware. I'll probably grab a third if a cheapie comes my way, just to have a drop-in replacement without losing my AP. I live in a rancher, so I need that horizontal coverage.
I instantly became jealous... but only because, after a recent move, the current ISP is a piece of crap... and I long for the days with my previous ISP when I had the freedom to use whatever router, ethernet, DHCP setup, etc. :-)
Same regarding OpenWRT. My router is old.
I'm a complete generalist in IT, but I've been in IT for 28 years. I'm also responsible for cyber security and anything with a plug, apparently.
Depending on how complex you want to make this stuff, it can get crazy, but if you're running a nice stable, security-patched OS, plus using containers/VMs, you'll be fine with everything others have written in here.
For me, the absolute base minimum:
Yeah, that'll keep you safe. Patching is something that should be done when required; it doesn't have to happen immediately unless a CVE is out or a dev has flagged something in their own code. Over the years I've seen products push out the latest version of something where the new release has the flaw and the older one doesn't. It's how Debian works their stable releases. Bleeding edge can mean getting cut.
Some self-hosters almost brag about how you need to VPN into their network to do anything. That's great for them, but a pain in the arse for anyone else accessing your infrastructure for services. I want family and friends to easily access my Jellyfin for the content. Not one of them is tech savvy or would want to run extra software in front of the JF client to allow access; they just want to hit a URL and go.
I've never had any issues with this setup. I'm not saying it cannot be breached, because the mantra should always be "when, not if."
Final note: if you're opening up your home systems, then you best encrypt your personal data locally, make sure you back up with 3-2-1, and always be envious of what the other folks do, then make yours better.
Question about this.... I thought that's what fail2ban would do anyways? I have a VPS and when I look at the logs I see stuff like this:
Is there a way to set up fail2ban to recognize this and ban them? Looking at this current attack they're doing this for multiple users over a wide range of ports. Also, I thought UFW and security groups wouldn't even allow these attempted connections to start with?
Edit: TBH it isn't running anything too important so I haven't been paying attention to it for a while, but this thread is reminding me that I should maybe learn/set it up correctly.
Edit2: Also, I thought my AWS security group would limit incoming access to random ports???
This will hopefully explain jails and bans.
https://www.naturalborncoder.com/2024/10/installing-and-configuring-fail2ban/
It also covers why sshd and Fail2ban may be misconfigured on Debian, with Fail2ban looking at the wrong log file.
Edit: Sorry this isn't overly useful on the reply front, I'm just heading to bed. I can share my config with you tomorrow for a few services if you wish, including Jellyfin and Jellyseerr?
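In the meantime, a minimal sketch of an sshd jail along the lines that guide describes (assuming Debian 12, where sshd logs to the systemd journal rather than /var/log/auth.log, which is the "wrong log file" issue):

```
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[DEFAULT]
# On Debian 12, sshd logs to the journal, so point fail2ban there
backend  = systemd
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
EOF
sudo systemctl restart fail2ban
sudo fail2ban-client status sshd    # lists currently banned IPs
```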
Configs would be super helpful. I'm not using Jellyfin though, but maybe that would help someone else.
As for the wrong-log-file thing with fail2ban: I did sort that out some time ago, and it does seem to be logging to a file in /var/log/, and it IS banning people; it's just not banning people attempting connections on other ports. I did just go into sshd_config, and the port wasn't specified, so I set that and I'll check again tomorrow, but I thought UFW would block those attempted connections at that level.
Anyways. I'll read that link and see if it helps.
As someone who has helped run a PaaS, this speaks to me.
I've struggled with syslog (there, I admit it!) but centralized logging has very much been a goal and WIP. Similarly, I want centralized monitoring and alerting but I'm new to Prometheus and Loki. At least I have metrics in Grafana so that's something. But, yes, logs.
VLANs, want. I need better network hardware for this...
Generally speaking, you can follow your OS's official server hardening guide, or a credible institution's server hardening guide for your OS (e.g. NIST). Often, there are tools that can help you automate this, including pre-hardened server images. There are enough things you can (or should) do for server hardening that a single comment on Tildes is probably going to only contain a small subset of those suggestions, if any.
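One example of such a tool (Lynis is just my pick here, not something those guides mandate):

```
# Lynis runs a local audit and prints hardening suggestions with references
sudo apt install lynis       # packaged in most distros
sudo lynis audit system
# Findings land in /var/log/lynis.log and /var/log/lynis-report.dat
```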
As others have said, ideally you use a VPN to access anything on the server you want. However, that isn't always feasible. Since I'm using a reverse proxy anyway, I often set it up with basic authentication as an extra layer, with fail2ban configured specifically to monitor those authentication requests.
There are some technical limitations; I had one service where the combination of traffic effectively kept me in a never-ending authentication loop. But overall I'm pretty happy with this approach, as it lets me access these select services without a VPN while bad actors can't easily see what is running or access it.
I've been a big fan of CentOS / AlmaLinux in the past, and there is still something to be said for that ecosystem... ConfigServer Security and Firewall (CSF) is still being updated, for example.
In a world where it is easy to destroy and recreate servers, this kind of system tweaking isn't nearly as important as it used to be. But vulnerabilities still exist in containers so it is good to have some kind of system to--if not automatically canary test and deploy security updates--at the very least have an inventory of the software that is being used across your infrastructure so that you can eventually develop some kind of system to prevent embarrassing large-scale automated exploits of known (old) vulns.
You can't really prevent application 0days via strong security posture alone but having a well-tested deployment system can at least make the process of deploying the fix (once you have it) more painless.
You likely don't need a stateful firewall like CSF. Something that makes configuration easy, like firewalld or ufw, is really all you need: just block all the ports you aren't using. There's a lot you can do at the network level to control traffic (virtual networking, out-of-band management, etc.). Just keep things as simple as possible. Adding components to make things "more secure" (subjective) is usually a net negative and only serves to increase the attack surface.
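A minimal sketch of that approach with ufw (the allowed ports are just examples):

```
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 443/tcp        # e.g. reverse proxy
sudo ufw allow 51820/udp      # e.g. WireGuard
sudo ufw limit 22/tcp         # rate-limits repeated SSH connection attempts
sudo ufw enable
sudo ufw status verbose
```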
You might like this! https://gitlab.com/lxd_nomad/lxd_nomad
More sophisticated than me, for sure. Still, thanks! Some research to do here.
I haven't been building my containers from source. I know: it's safer, at least if it's a reasonably well-trodden OSS project. I see using images as akin to running an arbitrary executable on any OS.
LXD looks like a bit too much for me.
With Proxmox, I keep a template LXC image. But it would be nice to have some sort of centralized way of deploying changes to containers. Haven't needed that much though.
@elight Funny you brought this up, as I am trying to think up a newer architecture for my homelab/home network... because, like you, I host a few services on the same server/host... but what I want to begin to do is use something like Tailscale or some such VPN/reverse proxy. Why? So that I can start my "defenses" by not allowing anyone from outside any access at all, and then, beyond that, apply the other server-level hardening that others have noted in this post already. Using a VPN/reverse proxy is not a panacea, but I figure it's not a bad start. Also, because my current ISP is crap, limited, and generally awful, and blocks lots more stuff than my previous ISP, I will need to pivot from how I have historically enabled access to my internally run homelab services. Good luck on your journey!
A lot of people have already commented this same stuff or similar, but I'll add my input as well.
I'm not an infosec professional, so I'd welcome any input from others who are.
I'm so clueless on this stuff that I stick to strategies that completely avoid forwarding ports. This cluelessness will be apparent in the following text.
The two solutions I'm aware of are Tailscale, which lets you create a virtual LAN you can connect to from anywhere but requires an app and a login, and Cloudflare Tunnel, which can authenticate users against an email whitelist and has a good free tier. I believe both of these completely negate the risks related to open ports on the web. I currently use Tailscale to access a home server and enjoy it, but I wish I could more easily share it with friends.
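For what it's worth, getting a machine onto a tailnet is roughly this (the install script URL is Tailscale's official one; everything runs on the server you want to reach):

```
# Install and bring up Tailscale, then check the machine's tailnet address and peers
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
tailscale status
tailscale ip -4
```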
Here's my general personal VPS setup:
- For direct remote access to the server and its Postgres database from my personal computer, I use key-based SSH.
- I have ufw and fail2ban. Pretty standard setup, from what I understand.
- I have cron-apt configured to email me notifications about packages that can be updated. I firmly believe that everything that can be updated should be, ASAP.
- I run all self-hosted apps behind Caddy's reverse proxy. Caddy has automatic HTTPS by default, which is awesome. Some apps are also behind Caddy's basic auth for a bit of additional security. Caddy also let me block a bunch of crawlers on the websites I host, which is neat. (See the sketch after this list.)
- I have a script for making copies of important configs into a dedicated Samba share, which I then manually move to my computer (I'll get it a bit more automated eventually).
- I have automatic weekly full VPS backups in case things go wrong and I have to do a full machine rollback. It's DigitalOcean's paid service, FYI.
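A minimal sketch of the Caddy piece of a setup like that (domain, upstream port, and username are placeholders):

```
# /etc/caddy/Caddyfile -- naming the domain is enough to get automatic HTTPS
cat <<'EOF' | sudo tee /etc/caddy/Caddyfile >/dev/null
app.example.com {
    # hash generated with: caddy hash-password
    # (newer Caddy releases spell this directive basic_auth)
    basicauth {
        alice <bcrypt-hash-from-caddy-hash-password>
    }
    reverse_proxy 127.0.0.1:8080
}
EOF
sudo systemctl reload caddy
```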
Agree with most suggestions so far, but interesting to see no one mention port knocking. Sure, it's a bit harder to set up, and it doesn't replace any of the other suggestions ITT, but IMO it significantly reduces the attack surface of exposing ports to the internet.
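For the curious, the classic tool for this is knockd; a rough sketch (the knock sequence is made up, and this layers on top of a default-deny firewall rather than replacing it):

```
sudo tee /etc/knockd.conf >/dev/null <<'EOF'
[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    tcpflags    = syn
    command     = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
EOF
# On some Debian setups you may also need START_KNOCKD=1 in /etc/default/knockd
sudo systemctl enable --now knockd
# From a client: knock -v server.example.com 7000 8000 9000 && ssh server.example.com
```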