35 votes

What do you do to secure your Linux server when exposing ports to the internet?

I've been self-hosting for a few years. However, I now have a single server hosting all of my things, whereas before I had multiple old machines physically isolating services. I'm getting ready to host a game server or a few (at least one for Tilde) on this machine.

While I'm not a neophyte to Linux, I'm not a guru and definitely not an infosec expert.

Given that, what steps do you take to secure a Linux server, LXCs, and docker containers that are receiving port-forwarded internet traffic?

FWIW, I expect I'll have an LXC running docker containers. I could instead run a VM doing the same.

Advice welcome!

29 comments

  1. [7]
    vord
    Link

    In no particular order:

    • Automate offsite backups. Verify them periodically. The greatest risk to your system is you.
    • Automate daily system updates. Reboot weekly.
    • No services on the base VM. Everything is in docker. As such, I use a rolling release like Fedora or Tumbleweed.
    • Pull images daily to update docker services.
    • Firewall on both the server and the router.
    • If it's http, no it's not. Everything open to the world is through the https proxy, game servers excepted.
    • ssh-audit; ssh is not world-accessible. Password login disabled, keys only (a minimal sshd_config sketch below).
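
    That last point in config form, as a hedged sketch (standard OpenSSH option names; keep an existing session open while testing, in case you lock yourself out):

    # /etc/ssh/sshd_config  (keys only, no root, no passwords)
    PermitRootLogin no
    PasswordAuthentication no
    KbdInteractiveAuthentication no   # ChallengeResponseAuthentication on older OpenSSH
    PubkeyAuthentication yes
    # apply with: systemctl reload sshd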

    Some things I've been meaning to set up but keep getting distracted from doing:

    • Implement fail2ban
    • Alerting on unusual traffic
    • Switch from docker to rootless podman
    • Building all of my own images from the source repo dockerfiles instead of pulling from Docker Hub (and thus ensuring proper security updates there)
    18 votes
    1. zestier
      Link Parent

      Heavy plus to podman. Not only is getting containers out of root great, I've also found that it's nice to just not have to deal with docker's daemon anymore. I initially switched completely to podman because rootless docker is so painful to get working right and it's been smooth enough that I've never looked back.

      For every use case I've personally encountered, it also "just works" to not have a real docker install and to instead just alias docker to podman. Probably seems a bit silly to do, but some tools (ex. vscode dev containers) just assume that they can run an executable named "docker", and podman tends to be plenty compatible.
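
      If anyone wants to try the same, the alias really is just a one-liner (assuming podman is already installed from your distro's repos); some distros also ship a podman-docker package that does the same thing system-wide:

      # ~/.bashrc: let tools that shell out to "docker" use podman
      alias docker=podman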

      9 votes
    2. [3]
      DynamoSunshirt
      Link Parent

      Won't pulling images daily expose you to more potential breakages or security risks, if the service that generates your image gets bought out or compromised?

      Granted, docker is a good idea for all of your hosted services, but I've been lazy about it personally since I use a pretty anemic home server. Whenever I upgrade the hardware I'd like to adopt docker for my services but I'm a little afraid I'm introducing a gnarly layer of potential breakages for no real security gain.

      6 votes
      1. [2]
        vord
        Link Parent

        Yes and no? Sure, I might accidentally pull down a 0day or a compromised image... But pulling daily makes it much less likely that I'm sitting for a week with an unpatched 0day because I missed the news.

        That's where building your own images comes into play. It lets you repave with OS updates separately from app updates... which is extra important if the app doesn't release regular security updates itself.

        6 votes
        1. xk3
          (edited )
          Link Parent

          The ideal seems to be having very lightweight app containers, like distroless images. Most apps don't depend on having ssh access, for example, so if you can fully isolate your dependencies, there will be very few container updates to deploy, because there simply aren't any security updates for your small number of dependencies.

          However, there is a big caveat in all of this... with "normal" security updates (via apt, dnf, etc.) there is some level of due diligence that you don't get by relying merely on major/minor version tags. This due diligence can be automated through tests, but having a few humans in the updates pipeline helps to smooth out anything weird, like packages being renamed where the old version still returns HTTP 200 but the new version with the security patch is under a different name.
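
          A sketch of that pattern, assuming a Go app (the distroless base image is real; the build paths are made up):

          # build stage: full toolchain
          FROM golang:1.22 AS build
          WORKDIR /src
          COPY . .
          RUN CGO_ENABLED=0 go build -o /app ./cmd/server

          # runtime stage: no shell, no package manager, no ssh
          FROM gcr.io/distroless/static-debian12
          COPY --from=build /app /app
          ENTRYPOINT ["/app"]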

          2 votes
    3. [2]
      elight
      Link Parent

      Everything in docker +1. docker-compose files all over the place. I really should version these...

      I've broken a few services by updating images nightly. But then, I tend to be far less concerned about regular updates for non-internet-facing services.

      ACK on no public-facing SSH. That would make me nervous. I'm not even seriously considering exposing HTTP server ports. Definitely HTTPS, probably using LetsEncrypt, if I went that way.

      3 votes
      1. xk3
        (edited )
        Link Parent

        Interesting to note: publickey SSH is immune to bruteforcing and MITM attacks. You can end up on a honeypot if you ignore the host-keys-not-matching warning, though that alone never gives them access to your server because the key exchange is not replayable... but if you use ssh-agent, the honeypot can then ask ssh-agent to connect to the real server on your behalf (depending on your configuration).

        But yeah, for a more targeted attack, denial of service is pretty easy via sshd. So, if only for that reason, keep it on a private network.
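
        One mitigation for the agent case, if I have it right (ForwardAgent is already off by default in OpenSSH, but being explicit costs nothing):

        # require a confirmation prompt each time the agent uses this key
        ssh-add -c ~/.ssh/id_ed25519

        # ~/.ssh/config
        Host *
            ForwardAgent no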

        6 votes
  2. [2]
    Turtle42
    Link

    Definitely not in infosec either, and you probably have more Linux system administration experience than me, and I barely follow most of the advice I'm about to give very well or vigilantly. But for most of us small-time individuals, following the basics, like monitoring logs, setting up SSH keys, keeping file and directory permissions appropriate, and using a wireguard VPN to access any exposed services, will probably be okay against the majority of threat actors and script kiddies running IP scans for open ports.

    Fail2ban or Crowdsec with geo blocking and other security measures enabled will help an additional percentage, although nothing is truly foolproof and sometimes I worry these give a false sense of security.

    If you're concerned about ransomware, consider using ZFS and setting up snapshots so you can roll back.
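
    For reference, the manual version is only a couple of commands (pool and dataset names are placeholders; tools like sanoid or zfs-auto-snapshot can automate the schedule):

    # take a read-only snapshot of the dataset
    zfs snapshot tank/data@daily-1
    # list available snapshots
    zfs list -t snapshot
    # roll back, discarding changes made after the snapshot (-r rolls past newer snapshots)
    zfs rollback tank/data@daily-1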

    I'm not a gamer so I'm unsure of any game server specific security measures one should take.

    15 votes
    1. elight
      Link Parent

      Ah, good. I use ZFS. Haven't set up rollbacks.

      Good suggestions!

      4 votes
  3. [6]
    DynamoSunshirt
    Link

    I always try to keep my Debian install up-to-date, but major OS upgrades sometimes require enough effort that I often lag a bit behind. Fortunately they tend to backport security updates a bit, but I wonder what I'll do if I ever hit a true breaking change that disrupts one of my services.

    I've often considered switching to Ubuntu on my server since major upgrades are basically a nonissue one-liner. But I'd love to hear if anyone else worries about this, and if any other distro solves the problem better or easier (without snap garbage that I have to manually disable).

    Besides my OS and WireGuard, I don't worry about it, because WireGuard is the only service I directly expose to the outside world. Everything else is local to my network.
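
    For context, the exposed surface there is tiny: one UDP port and a config along these lines (keys and addresses are placeholders):

    # /etc/wireguard/wg0.conf
    [Interface]
    Address    = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # one block per client device
    PublicKey  = <client-public-key>
    AllowedIPs = 10.8.0.2/32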

    Honestly, my biggest worry is my router. I keep considering an OpenWRT One so I can guarantee security updates for a long time, but I'm concerned about breaking the network for any length of time and eating up days with tedious configuration.

    8 votes
    1. [4]
      vord
      Link Parent

      I make sure that I always have a router with the MAC cloned from my current router so that if the main router breaks for any reason, I have drop-in basic functionality.
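
      On a plain Linux box the clone itself is one command (interface name and MAC are examples); most router firmware, OpenWrt included, exposes the same thing as a "clone MAC" field:

      # assign the old router's MAC to this interface
      ip link set dev eth0 down
      ip link set dev eth0 address 00:11:22:33:44:55
      ip link set dev eth0 up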

      8 votes
      1. [3]
        DynamoSunshirt
        Link Parent

        That's an excellent idea, especially since my current ISP requires no modem and simply provides an ethernet hookup. I strongly suspect they whitelist by MAC address. I've been looking at the OpenWRT One and some of Flirc's routers; perhaps a spoofable MAC will make up my mind!

        4 votes
        1. vord
          Link Parent

          Yea, you set up the new one as a double-NAT behind the old one. Once it's set up, clone the MAC and swap them out.

          My pair of WRT1900ACs has A/B firmware, so the secondary one gets used as a dumb access point but keeps the previous version of the main-router firmware as its alternate firmware.

          Update process goes:

          1. Flash the AP. This overwrites the old main router firmware, leaving AP firmware as-is.
          2. Apply Ansible script for Router to AP.
          3. Swap AP with Router.
          4. Flash Router, apply AP's Ansible script
          5. Put router where AP was.

          So every patch they switch roles, and as such either can function as the other by booting the backup firmware. I'll probably grab a third if a cheapie comes my way, just to have a drop-in replacement without losing my AP. I live in a rancher, so I need that horizontal coverage.

          2 votes
        2. mxuribe
          Link Parent

          my current ISP requires no modem and simply provides an ethernet hookup

          I instantly became jealous...but only because after a recent move, the current ISP is a piece of crap...and i long for the days with my previous isp when i had the freedom to use whatever router, ethernet, DHCP setup, etc. :-)

          2 votes
    2. elight
      Link Parent

      Same regarding OpenWRT. My router is old.

      3 votes
  4. [5]
    g33kphr33k
    Link

    I'm a complete generalist in IT, but I've been in IT for 28 years. I'm also responsible for cyber security and anything with a plug, apparently.

    Depending on how complex you want to make this stuff, it can get crazy, but if you're running a nice, stable, security-patched OS, plus using containers/VMs, you'll be fine with everything written in here by others.

    For me, the absolute base minimum:

    • HTTPS is the only thing that can come in from the internet, and the firewall rule will point at a reverse proxy. I'm a fan of Caddy (a minimal sketch below this list).
    • SSH from the internet only if required. If required, then cert-based, with fail2ban set to harsh mode (3 attempts and a permaban for any IP that isn't whitelisted).
    • VLAN and block based on service - put each thing in its own VLAN and don't allow traversing the network internally unless it originates from your host network - yes, this seems extreme, but if you're hosting stuff from the internet you don't want your Plex being compromised and then being able to wander around the entire network freely.
    • Fail2ban on every service - you heard me. Learn how to parse logs for logins and use Fail2ban to monitor and block.
    • Use syslog - automate reading of logs so anything out of the ordinary gets flagged to you; be ahead of the breach.
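
    For the first point, a minimal Caddyfile sketch (hostname and upstream port are placeholders; Caddy obtains and renews the TLS certificate on its own):

    # Caddyfile
    jellyfin.example.com {
        reverse_proxy 127.0.0.1:8096
    }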

    Yeah, that'll keep you safe. Patching is something that should be done when required; it's not something that has to be done unless a CVE is out or a dev flagged something in their own code. Over the years I've seen products push out the latest version of something where that version has the flaw and the older one doesn't. It's how Debian handles its stable releases. Bleeding edge can mean getting cut.

    Some self-hosters almost brag about how you need to VPN into their network to do anything. That's great for them, but a pain in the arse for anyone accessing your infrastructure for services. I want family and friends to easily access my Jellyfin for the content. Not one of them is tech savvy and would want to run a service or software in front of the JF client to allow access; they just want to hit a URL and go.

    I've never had any issues with this set up. I'm not saying it cannot be breached because the mantra should always be "When, not if."

    Final note: if you're opening up your home systems, then you best encrypt your personal data locally, make sure you back up with 3-2-1, and always be envious of what the other folks do, then make yours better.

    7 votes
    1. [3]
      mild_takes
      (edited )
      Link Parent

      Fail2ban on every service - you heard me. Learn how to parse logs for logins and use Fail2ban to monitor and block.

      Question about this.... I thought that's what fail2ban would do anyways? I have a VPS and when I look at the logs I see stuff like this:

      sshd[269609]: Invalid user user from 45.145.224.115 port 22938
      sshd[269609]: Connection closed by invalid user user 45.145.224.115 port 22938 [preauth]
      sshd[269611]: Invalid user user from 45.145.224.115 port 22954
      sshd[269611]: Connection closed by invalid user user 45.145.224.115 port 22954 [preauth]
      sshd[269613]: Invalid user user from 45.145.224.115 port 22958
      sshd[269613]: Connection closed by invalid user user 45.145.224.115 port 22958 [preauth]
      sshd[269615]: Invalid user user from 45.145.224.115 port 55828
      sshd[269615]: Connection closed by invalid user user 45.145.224.115 port 55828 [preauth]
      sshd[269617]: Invalid user user from 45.145.224.115 port 55830
      sshd[269617]: Connection closed by invalid user user 45.145.224.115 port 55830 [preauth]
      

      Is there a way to set up fail2ban to recognize this and ban them? Looking at this current attack, they're doing this for multiple users over a wide range of ports. Also, I thought UFW and security groups wouldn't even allow these attempted connections to start with?

      Edit: TBH it isn't running anything too important so I haven't been paying attention to it for a while, but this thread is reminding me that I should maybe learn/set it up correctly.

      Edit2: Also, I thought my AWS security group would limit incoming access to random ports???

      1 vote
      1. [2]
        g33kphr33k
        (edited )
        Link Parent

        This will hopefully explain jails and bans.

        https://www.naturalborncoder.com/2024/10/installing-and-configuring-fail2ban/

        It also covers why sshd and Fail2ban may be misconfigured on Debian, with fail2ban looking at the wrong log file.
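
        In the meantime, a hedged sketch of what the sshd jail usually looks like (the backend line is the usual fix for the wrong-log-file problem on systemd-based Debian):

        # /etc/fail2ban/jail.local
        [sshd]
        enabled  = true
        backend  = systemd   # read the journal instead of /var/log/auth.log
        maxretry = 3
        findtime = 10m
        bantime  = 1h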

        Edit: Sorry this isn't overly useful on the reply front, I'm just heading to bed. I can share my config with you tomorrow for a few services if you wish, including Jellyfin and Jellyseerr?

        1. mild_takes
          Link Parent

          Configs would be super helpful. I'm not using Jellyfin though, but maybe that would help someone else.

          As for the wrong-log-file thing with fail2ban: I did sort that out some time ago, and it does seem to be logging to a file in /var/log/, and it IS banning people; it's just not banning people attempting connections on other ports. I did just go into sshd_config, and the port wasn't specified, so I did that, and tomorrow I'll look and see, but I thought UFW would block those attempted connections at that level.

          Anyways. I'll read that link and see if it helps.

    2. elight
      Link Parent

      As someone who has helped run a PaaS, this speaks to me.

      I've struggled with syslog (there, I admit it!) but centralized logging has very much been a goal and WIP. Similarly, I want centralized monitoring and alerting but I'm new to Prometheus and Loki. At least I have metrics in Grafana so that's something. But, yes, logs.

      VLANs, want. I need better network hardware for this...

  5. TaylorSwiftsPickles
    Link

    Generally speaking, you can follow your OS's official server hardening guide, or a credible institution's server hardening guide for your OS (e.g. NIST). Often, there are tools that can help you automate this, including pre-hardened server images. There are enough things you can (or should) do for server hardening that a single comment on Tildes is probably going to only contain a small subset of those suggestions, if any.

    5 votes
  6. creesch
    Link

    As others have said, ideally you use a VPN to access anything on the server you want. However, that isn't always feasible. Since I am using a reverse proxy anyway, I often set it up with basic authentication as an extra authentication layer, with fail2ban set up specifically to monitor authentication requests.
    There are some technical limitations: I had one service where the combination of traffic effectively kept me in a neverending authentication loop. But overall I am pretty happy with this approach, as it allows me to access these select services without a VPN while bad actors can't easily see what is running and access it.
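
    If the proxy happens to be Caddy, for example, that layer is only a few lines (hostname and upstream are placeholders; the directive is spelled basic_auth in newer Caddy releases):

    auth.example.com {
        basicauth {
            me <output of: caddy hash-password>
        }
        reverse_proxy 127.0.0.1:3000
    }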

    4 votes
  7. [2]
    xk3
    (edited )
    Link

    I've been a big fan of CentOS / AlmaLinux in the past, and there is still some value to be found there... ConfigServer Security and Firewall (CSF) is still being updated, for example.

    In a world where it is easy to destroy and recreate servers, this kind of system tweaking isn't nearly as important as it used to be. But vulnerabilities still exist in containers, so it is good to have some kind of system to, if not automatically canary-test and deploy security updates, at the very least keep an inventory of the software being used across your infrastructure, so that you can eventually develop some kind of system to prevent embarrassing large-scale automated exploits of known (old) vulns.

    You can't really prevent application 0days via strong security posture alone but having a well-tested deployment system can at least make the process of deploying the fix (once you have it) more painless.

    You likely don't need a heavyweight firewall suite like CSF. Something that makes configuration easy, like FirewallD or ufw, is really all you need. Just block all the ports you aren't using. There's a lot you can do at the network level to control traffic (virtual networking, out-of-band management, etc.), but keep things as simple as possible. Adding things to make things "more secure" (subjective) is usually a net negative and only serves to increase attack surface.
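
    The ufw version of "block all the ports you aren't using" is short (the allowed ports here are examples):

    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 443/tcp     # https reverse proxy
    ufw allow 51820/udp   # wireguard, if exposed
    ufw enable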

    LXC

    You might like this! https://gitlab.com/lxd_nomad/lxd_nomad

    2 votes
    1. elight
      Link Parent

      More sophisticated than me, for sure. Still, thanks! Some research to do here.

      I haven't been building my containers from source. I know: it's safer, at least if it's a reasonably well-trodden OSS project. I see using prebuilt images as akin to running an arbitrary executable on any OS.

      LXD looks like a bit too much for me.

      With Proxmox, I keep a template LXC image. But it would be nice to have some sort of centralized way of deploying changes to containers. Haven't needed that much though.

      2 votes
  8. mxuribe
    Link

    @elight Funny you brought this up, as i am trying to think up a newer architecture for my homelab/home network...because like you, i host a few services on the same server/host...but what i want to begin to do is use something like tailscale or some such VPN/reverse proxy...Why? So that i can start my "defenses" by not allowing anyone from outside any access at all...and then, beyond that, apply other server-level hardening that others have noted in this post already. Using a VPN/rev. proxy is not a panacea, but i figure it's not a bad start. Also, because my current isp is crap, limited, and generally awful - and blocks lots more stuff than my previous isp - i will need to pivot on how i have historically enabled access to my internally-run homelab services. Good luck on your journey!

    2 votes
  9. MephTheCat
    Link

    A lot of people have already commented this same stuff or similar, but I'll add my input as well.

    1. Only open the absolute minimum necessary number of ports and, if possible for your application, move those services to uncommon, high-numbered ports.
    2. Disable password login over SSH, disable root SSH login, and use keys only. Also, avoid common usernames (admin, apache, mysql, oracle, etc.) for the services you run.
    3. Use fail2ban. It's astounding how much SSH traffic you'll get, and you'll see an enormous number of login attempts for the usernames in point 2.
    4. If acceptable, don't even expose services to the internet; access them purely through OpenVPN. My home server only exposes one port, which is for OpenVPN. Every other service is only accessible through that tunnel or on the local network. This isn't an either/or, either: you can expose some services to the wider internet and not others.
    5. Understand the difference between listening on 0.0.0.0 and 127.0.0.1 (see the sketch after this list).
    6. If you must run a webserver, ensure that you're using TLS.
    7. Although confusing to configure and debug, learn how SELinux works and interacts with the services you run, same with ufw or firewalld.
    8. Grant only the minimum necessary permissions for a given service, and don't expose anything running as root to the outside world if you can avoid it.
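
    For point 5, the difference is easy to check with ss, and with Docker the bind address goes in the port mapping (container and ports are just examples):

    # show what is listening, and on which address
    ss -tlnp

    # reachable from the whole network (binds 0.0.0.0)
    docker run -p 8080:80 nginx

    # reachable only from the host itself
    docker run -p 127.0.0.1:8080:80 nginx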

    I'm not an infosec professional, so I'd welcome any input from others who are.

    2 votes
  10. TommyTenToes
    Link

    I'm so clueless on this stuff that I take strategies that completely avoid forwarding ports. This cluelessness will be apparent in the following text.

    The two solutions that I'm aware of are Tailscale, which lets you create a virtual LAN that you can connect to from anywhere but requires an app and login, and Cloudflare Tunnel, which can authenticate users against an email whitelist and has a good free tier. I believe both of these completely negate risks related to open ports on the web. I currently use Tailscale to access a home server and enjoy it, but wish I could more easily share it with friends.

    2 votes
  11. hamefang
    Link

    This is my general personal VPS setup:

    • For direct remote access to the server and its Postgres database from my personal computer I use key-based SSH.

    • I have ufw and fail2ban. Pretty standard setup from what I understand.

    • I have cron-apt configured to email me notifications about packages which can be updated. I firmly believe that everything that can be updated should be, ASAP.

    • I run all self-hosted apps behind Caddy's reverse proxy. Caddy has automatic https by default, which is awesome. Some apps are also behind Caddy's basic auth for a bit of additional security. Caddy also allowed me to block a bunch of crawlers on the websites I host, which is neat.

    • I have a script for making copies of important configs into a dedicated Samba share, which I then manually move to my computer (rough sketch below; I'll get it a bit more automated eventually).

    • I have automatic weekly full VPS backups in case things go wrong and I have to do a full machine rollback. It's DigitalOcean's paid service, FYI.
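
    The config-copy script is nothing fancy; roughly this shape (all paths are placeholders, and the Samba share is assumed to be already mounted):

    #!/bin/sh
    # copy important configs to the mounted share, preserving paths
    rsync -a --relative /etc/caddy/ /etc/fail2ban/ /etc/ufw/ /mnt/backup-share/configs/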

    1 vote
  12. lamelos
    Link

    Agree with most suggestions so far, but interesting to see no one mention port knocking. Sure, it's a bit harder to set up, and it does not replace any of the other suggestions itt, but imo it significantly reduces the attack surface of exposing ports to the internet.
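
    For anyone curious, the classic knockd example looks like this (sequence and port are placeholders; the command uses iptables, so adapt it for nftables):

    # /etc/knockd.conf
    [options]
        logfile = /var/log/knockd.log

    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        tcpflags    = syn
        command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT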