Homeserver, hosted server, domains and stuff. What do you do, what should I do?
I'm running a "server" (a very cheap, very old office PC) in my house together with dynamic DNS. But it's not really stable (it needs regular restarts, and DynDNS isn't exactly gold either), and since I want to offer my family access to Nextcloud and maybe Plex (any other ideas?) and all the other nice stuff the free software world has to offer, this isn't working well enough to keep them from fleeing back to Google + Apple and staying there till eternity!
The other thing is, I got used to SSH and such over the last few years and want to improve my skills and learn.
I know these two don't really go hand in hand :-(
I actually have decent up and down speeds at home, so an upgrade for my existing system is thinkable, but DynDNS is just a PITA and I'd like to have my own domain. Does that work with changing IPs? Because with the prices they ask here for static IPs, I could just rent a server in a data center somewhere.
What do you do to self-host, how do you do it, and what would be your advice for me?
I use Namecheap as my domain registrar and they support using ddclient for dynamic DNS.
I run most of my stuff off of a 4 GB Raspberry Pi 4, but I have an old Dell Optiplex box configured for Wake-on-LAN for when I run game servers and stuff for friends and need a bit more grunt/an x86 architecture.
I run the following with Caddy in front of them to handle reverse proxying different subdomains as well as providing SSL automagically.
Thank you for that answer. I tried to understand what https://caddyserver.com/ actually does, but if I'm honest I don't really understand it. Is it like Apache? And how exactly do you use it to put it in front of your other stuff?
ah that makes a lot of sense!
Basically, at my router I only have ports 80/443 exposed (and some stuff for games, but for simplicity's sake let's pretend I don't), and then Caddy is configured so that if a request for nextcloud.my.domain comes in, it directs that traffic to the Nextcloud application running on some non-standard port, since you can only have one application listening on a given port at once. You can do the same thing with Apache or Nginx; Caddy is just integrated with Let's Encrypt, so it handles setting up SSL on stuff for me.
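As a sketch of that setup in Caddy v2's Caddyfile syntax (the hostnames and backend ports here are placeholders, not the poster's actual config):

```
# Each site block routes one subdomain to a local app port;
# Caddy obtains and renews the TLS certificates automatically.
nextcloud.my.domain {
    reverse_proxy localhost:8080
}
jellyfin.my.domain {
    reverse_proxy localhost:8096
}
```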
So Caddy is like Apache and Nginx. IMO it's better because it's simpler and can do more (like automatic TLS and a crazy flexible API), not to mention it can take Nginx configs. It's pretty amazing and I completely recommend it.
Another question: is a Raspberry Pi strong enough that Nextcloud feels responsive? With the setup I have now it feels a bit slow.
It's fast enough for me but I only really use Nextcloud as an upload box for other people to share photos and to be able to share files from my NAS so I guess it depends on what you want it to do.
I switched my media server from Plex to Jellyfin and I've been very happy with it. Completely free / open-source, unlike the "freemium" model of Plex, and authentication doesn't rely on a third party as it does with Plex.
Extra thumbs up for Jellyfin.
It's a fork of Emby when Emby switched their licensing, and the team has done some great work since.
My dream feature at this point would be to federate with other Jellyfin hosts. All of my friends have switched from Plex to Jellyfin...and this is a crowd who still hasn't migrated off Google Hangouts due to inertia. Those of us on fiber have symmetrical gig internet, so streaming off each other is almost as good as LAN.
Working around that is part of the reason I'm trying to find a reasonably easy-to-set-up, cross-platform (for both server and client) distributed storage system.
Namecheap supports dynamic dns. If your registrar does not, there is Hurricane Electric, which does DNS, IPv6, and more.
I use Cloudflare to manage the DNS records for my personal domain. You can easily update an A record to point to your current IP address using their API, like with this bash script. Just be careful about enabling the Cloudflare Proxy setting since you won't be able to easily remote SSH into your server.
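The linked script isn't reproduced here, but the core of it is a single authenticated PUT to Cloudflare's v4 API. A rough stdlib-Python sketch of that call (the zone ID, record ID, and token are placeholders you'd take from your Cloudflare dashboard):

```python
import json
import urllib.request

# Placeholders -- substitute values from your Cloudflare dashboard.
ZONE_ID = "your-zone-id"
RECORD_ID = "your-record-id"
API_TOKEN = "your-api-token"

def build_update(name, ip):
    """Build the URL and JSON body for updating an A record."""
    url = (f"https://api.cloudflare.com/client/v4/zones/"
           f"{ZONE_ID}/dns_records/{RECORD_ID}")
    body = json.dumps({"type": "A", "name": name,
                       "content": ip, "ttl": 300}).encode()
    return url, body

def send(url, body):
    """PUT the update to Cloudflare (raises on a non-2xx response)."""
    req = urllib.request.Request(url, data=body, method="PUT", headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    })
    urllib.request.urlopen(req)

# Example: build (but don't send) an update pointing the record at an IP.
url, body = build_update("home.example.com", "203.0.113.7")
```

Run `send(url, body)` from cron or a systemd timer, feeding it your current public IP, and the record stays fresh.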
PSA:
Never allow SSH, RDP, VNC (and many more) to be accessed directly from the internet.
If you need remote access, set up a VPN. There are a lot of bots out there that scan for SSH and RDP access and will attack it.
If you have to allow remote SSH, then disable password authentication.
Definitely do this. And make sure you're running fail2ban, and I'd probably move ssh off 22. (inb4 obscurity is not security etc etc)
People misunderstand what "security by obscurity is not real security" actually means. It's best to think about things in the context of a targeted attack. In the event of a targeted attack, obviously moving port numbers around isn't going to slow anyone down very much, so it's not "real" security. But when most attacks originate from people just scanning the internet looking for low hanging fruit, it definitely can help reduce the number of headaches you have to deal with. It's like keeping valuables in your car out of sight. Is it actually making it any harder to break into your car? No. But it does make it so fewer people would even bother, which has plenty of its own utility.
In other words, it doesn't secure your system from a targeted attack, but it helps secure your system from being targeted in the first place.
"Security by obscurity" was never supposed to mean "don't change your port numbers", it was supposed to mean "don't just disable password authentication on SSH and then put it on a random port hoping nobody guesses the right number".
Well, it does mean that, but you need to take another security maxim into consideration alongside it: namely, keep in mind an appropriate threat model for what you're protecting.
Obscurity is an extremely poor lock - but oftentimes a poor lock is all that you need when you're protecting... your home media server. Not going to keep a 3 letter agency from getting in, but that's not in your threat model.
I strongly disagree. The problem is that for some reason people think that the decision to use different port numbers was made for security reasons when really it was made for administrative ones.
a buddy of mine gives me that 'obscurity is not security' crap all the time, but my f2b logs are nearly empty compared to hers.
@dom_camillo, for SSH without a password, look into keys, and also change the port in /etc/ssh/sshd_config (don't forget to restart the service). When you log in, use ssh don@server.net -p 2200 or whatever you changed it to.
I wonder if your friend has fail2ban configured to use tcpwrappers like so many guides suggest. There was a time when I had it set up like that, and when I switched the fail2ban config to set up a drop rule in the system firewall instead, it reduced the traffic from attackers by a lot, enough to make a noticeable difference in my network speed / CPU usage (I forget which).
Blocking IPs with tcpwrappers means that attackers can still connect to ssh, they'll just be denied access. From the attacker's perspective it looks similar to (if not the same as) being denied access due to invalid credentials, so if they're trying to brute force their way in they'll keep hammering on your server and producing those logs. After making the switch to iptables, I had logs from just a few IPs per day from driveby attacks.
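For reference, that switch is (in stock fail2ban) roughly a one-line change of the ban action, something like this in `/etc/fail2ban/jail.local` (both action names ship with fail2ban; the placement under `[DEFAULT]` is just one way to do it):

```
[DEFAULT]
# banaction = hostsdeny          # tcpwrappers: attackers can still connect
banaction = iptables-multiport   # drop their packets at the firewall instead
```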
I suppose disabling password auth accomplishes a similar thing (can't brute force a password if you can't enter one). Not trying to detract from other solutions here, just presenting another alternative. I think there's merit to changing sshd's port, and the "obscurity is not security" warning doesn't make much sense to me unless you're configuring a nondefault port as an excuse to avoid other best practices such as keeping sshd up to date and configuring a sane firewall policy. Just, for me personally, I prefer not having to pass the `-p` flag or configure the port in every ssh client I might want to connect with.
If OP decides to switch SSH to a non-default port, you can avoid having to type the `-p` argument every time by configuring the host's port in your ssh client config. To do that, add something like this to `~/.ssh/config`:
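A minimal sketch of such an entry (the alias is made up; the hostname, user, and port borrow the server.net/don/2200 example from earlier in the thread):

```
# ~/.ssh/config
Host myserver
    HostName server.net
    User don
    Port 2200
```

With that in place, plain `ssh myserver` connects to port 2200 automatically.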
`man ssh_config` has more info about how the config works.
adding the port in the config is super handy!
There are so many bad guides out there. If they're set up with tcpwrappers, I'll suggest they move over to dropping at the firewall. Good call.
Well I wouldn't exactly call a guide that recommends tcpwrappers "bad". If you already have an iptables config, setting fail2ban up to drop traffic involves stopping and thinking about how that will interact with the rest of your firewall rules, which isn't as simple or straightforward as just uncommenting a line in a config. So I think it makes a certain amount of sense for quick-and-dirty tutorial-style guides to just have the user set it up with tcpwrappers, mention that there are other jails available if your needs are more complex, and call it a day.
Knowing my buddy, they've got some goofy honeypot running that is both completely unnecessary and most likely forgotten. They're the 'ooh, that sounds fun' type.
thank you, that is really helpful advice!
what is fail2ban?
It's a daemon that watches log files and can run actions in response to seeing some number of log events within a certain timeframe (the configs pairing a log filter with an action are called "jails" IIRC).
A common use case is to set it up to watch for SSH authentication failures. If it sees, say, 3 failures within 5 minutes coming from the same IP address, it can run a script to block that IP by adding it to tcpwrappers, or adding a firewall rule. Thus converting authentication failures into bans... hence the name "fail2ban".
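As a sketch, a jail matching that example policy might look like this in `/etc/fail2ban/jail.local` (the threshold values are illustrative, mirroring the 3-failures-in-5-minutes example above):

```
[sshd]
enabled  = true
maxretry = 3      # failures allowed...
findtime = 300    # ...within this window (seconds) before a ban
bantime  = 3600   # how long the offending IP stays blocked (seconds)
```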
that seems really reasonable :-) wonder why it didn't show up in my research when setting up my system.
Yes, that is clear to me thanks to the Arch wiki! At the moment I just allow SSH from inside my local network, so nothing to see here. But it might help a lot if I set up a VPN so I can access it and do maintenance from anywhere. What VPN server/software are you using?
I'm going to kind of echo this advice, put slightly differently:
If you want remote access, establish exactly one entry point you can tunnel through to reach any other services you might want to use, and then fortify that entry point to the maximum extent you're able.
I'd actually recommend making SSH that single entry point (for fortification: always disable password login and use SSH keys for authentication; and always disable root login, using `sudo` or `su` from a regular user when you need root access), simply because it's simpler and easier to get right than most VPNs, and you can likely get away with no other administrative interfaces at all (eliminating the potential to accidentally misconfigure something to be accessible outside your security boundary). However, a VPN is also a perfectly fine single point of entry, and if you'll want one for other reasons, then go for it.
(Note that you should disable passwords and root login for SSH even if it's behind a VPN, and you should make sure all your services are configured to be as secure as you know how, for defense in depth: if somebody penetrates your protected single point of entry, they should not automatically gain full access to everything else.)
what would other reasons be to use a vpn?
If there are networked services on the inside of your security boundary that you want to be able to access from outside, VPN software will likely be easier to set up and more robust for that purpose than trying to tunnel connections through SSH. Examples might be if you want your Plex server available from the outside, or you want to be able to pass traffic through your home Internet connection.
that seems interesting. thank you.
cloudflare just scares me a bit, are they the only ones doing that?
I'm a fan of duckdns.org for personal stuff.
Simple HTTP POSTs to update, including the TXT record for Let's Encrypt DNS validation.
I know Google Domains supports Dynamic DNS that can be programmatically updated via the DynDNS2 protocol, so I assume other domain registrars must as well.
I use freemyip for dynamic DNS as it's the best no-nonsense service I've found. I use completely random, meaningless hostnames for that, something like f10dd8f17bea1ca5.freemyip.com (fresh out of a password generator).
Then, I have my custom domain registered through AWS, and Route 53 has a bunch of CNAMEs (aliases). So server1.mydomain.com is a CNAME pointing to that f10dd8f17bea1ca5.freemyip.com domain. This layer of indirection means I can switch to a different dynamic DNS provider if I wish, and it also means I don't have to use AWS directly for dynamic DNS (which is possible, but requires mucking with AWS creds in a way I don't want to have to fiddle with).
thanks for freemyip! that service seems to be a lot better than what I'm using at the moment.
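The indirection described above boils down to a single DNS record; in standard zone-file notation it would look something like this (the 300-second TTL is an assumption):

```
; alias on your own domain -> throwaway dynamic-DNS hostname
server1.mydomain.com.  300  IN  CNAME  f10dd8f17bea1ca5.freemyip.com.
```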
I don't want to use stuff coming from megacorporations, as that seems self-defeating to me :-) but the idea of layering two different services on top of each other to make changes easy is actually really nice!
In that case, I'll give a shout-out to nearlyfreespeech.net, which is a great no-frills, no-nonsense web hosting company that's been around for almost 20 years.
You need a domain name registrar in order to register your domain, and many of them (such as GoDaddy) are not just megacorps, but some of the worst companies in the entire tech industry. NFS is at the sweet spot: a registrar that's not a megacorp but also not some fly-by-night operation that'll disappear tomorrow.
You also need DNS hosting (which can be separate from your domain registrar, but for simple setups like yours it's much easier to combine them), which they can also do.
That'll fill the same role as my use of AWS, so you'll have yourdomain.com registered through NFS, a subdomain such as home.yourdomain.com (it doesn't have to be a subdomain, but it's a good idea because it makes it easier to set up other subdomains for other things, or to host a "real" website on the primary domain), and that subdomain will be a CNAME to your-dynamic-dns.freemyip.com.
hey thank you for that in depth answer! this seems like something I need!
As far as self-hosting itself (as the other posts cover DNS quite well)...
Keeping a stable server is just one of those things you need to do to self-host. For me, running everything using docker-compose on a Linux host works well.
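As a hedged sketch of that docker-compose approach, a compose file for a single service might look like this (the image, port mapping, and volume path are illustrative, not the poster's actual stack):

```yaml
# docker-compose.yml -- one self-hosted app, restarted automatically
services:
  nextcloud:
    image: nextcloud           # official image from Docker Hub
    ports:
      - "8080:80"              # host port 8080 -> container port 80
    volumes:
      - ./nextcloud:/var/www/html
    restart: unless-stopped    # survives reboots and crashes
```

`docker-compose up -d` brings it up; adding more services is just more blocks in the same file.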
There are some operating systems geared to make self-hosting easier...Yunohost comes to mind. But depending on your needs they can be quite restrictive.
Personally, I'm a crazy person who's looking to collaborate with some friends on distributed storage and service failover, particularly for our password managers and media.
It's quite a rewarding thing to self-host... very liberating feeling when you can say "I don't need to pay (with money or personal data) for this service".
I'm just running an Arch system; my thinking goes: use what you're used to, to make fewer errors.
And yes, self-hosting is really nice, and the system I have now is good enough for me, but it has its outages (it's quite slow, needs periodic restarts, and DynDNS just doesn't work some days... but that might be because I use a free service). But if I want to offer it to my family it has to be more solid, as they don't know and don't want to use the workarounds that work for me :-)
This is all very true. Friends/family will walk away if it isn't just as easy as Netflix or Gdrive.
Yes it is. But in all this time I never had any problems with it on my server (quite barebones). On my notebook, on the other hand, I have to intervene quite a lot, but mostly because I'm trying stuff and fuck it up.
You may want to give NixOS a shot. I'm a former Arch user but switched years ago, and now run NixOS on all my Linux boxes, including desktops & laptops and my home server.
Among many other benefits, the configuration is all declarative, so you can roll back changes easily (including, if you mess your system up enough, you can reboot and select a previous configuration from the bootloader menu). Much easier to un-fuck things than with manual configuration (read the Arch wiki, it tells you to edit such-and-such config file, you try that, it doesn't work, you try a different suggestion on the wiki, and so on... remembering to undo those changes that didn't work is always a challenge).
eg, @rmgr recommended Caddy as a reverse proxy. If you search for Caddy in The Big List of NixOS Options, you get some `services.caddy` results. That means you'd add something like this to your `/etc/nixos/configuration.nix`:
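A minimal sketch of such a block (`services.caddy.enable` is the real switch; the `virtualHosts` entry and the backend port are assumptions for illustration):

```nix
# Enable the Caddy module and proxy one subdomain to a local app.
services.caddy = {
  enable = true;
  virtualHosts."nextcloud.my.domain".extraConfig = ''
    reverse_proxy localhost:8080
  '';
};
```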
Run `nixos-rebuild test`, and it downloads Caddy for you, installs it, and sets up a systemd service (the source code, in the rather obtuse Nix language, is here... you don't need to understand that to use NixOS, but you can follow along with it to see how it translates each config option you set into a config file or setting for systemd).
If you decided not to use Caddy, you remove that block from your config, and it uninstalls Caddy, including removing any config file cruft (with the exception of `/var` data directories that it won't remove automatically, to prevent data loss).
Or, hypothetically, running Caddy locks up your entire system (unlikely, but using it as an example anyway). Since you only did `nixos-rebuild test`, you can reboot and it'll boot back into a system that wasn't running Caddy. If you want to keep Caddy and run it automatically on boot, you'd run `nixos-rebuild switch` to confirm that you want to make your configuration the active one, even after reboot.
We had a couple recent threads on NixOS, here and here.
I've heard a lot about Nix, and it's on my list to give it a try, but at the moment I'm still learning too much on Arch, doing exactly what you said and fucking stuff up. :-)
I run Unraid on my homelab. The interface is pretty slick, and it's fairly simple to set up. To access web apps outside my local network, I have a domain name (purchased from Namecheap, managed with Cloudflare) and update my A record using ddclient. Nginx configurations are managed by swag.
What does Unraid do? There is so much PR speech on that page I can't even grasp what it actually does and why I or anybody else would need it.
swag seems really nice.
Ha, fair enough. Unraid is basically a linux distro designed around docker/vm management. It has a nice web UI, which makes it easy to setup (and maintain) a home server. As for the name, Unraid uses a raid-like (but not raid proper) data storage system that allows you to easily add hard drives for storage or parity (eg, you could build an array of 3 TB, 4 TB, and two 8 TB drives, with one of the 8 TB drives saved for parity and the rest for 15 TB of total storage; or you could use the two 8 TB drives for parity and have 7 TB of storage).
So basically think of Unraid as being an alternative to FreeNAS, Proxmox, or Synology's DiskStation Manager.
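The parity arithmetic from the example above is simple enough to sketch (this mirrors Unraid's rule that a parity drive must be at least as large as any data drive, so the largest drives become parity and the rest sum to usable space):

```python
def usable_tb(drives, parity_count=1):
    """Usable capacity of an Unraid-style array: drop the largest
    parity_count drives, sum what remains as data storage."""
    biggest_first = sorted(drives, reverse=True)
    return sum(biggest_first[parity_count:])

drives = [3, 4, 8, 8]  # TB, as in the example above
print(usable_tb(drives, parity_count=1))  # 3 + 4 + 8 = 15 TB usable
print(usable_tb(drives, parity_count=2))  # 3 + 4 = 7 TB usable
```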
thank you! that cleared it up nicely. they should just go with your words on their page :-)
I host my own e-mail from my house. I have a static IP address from my ISP (they will give you one for free if you ask) and I use port forwarding from my router to point the important ports to my Soekris server. It's a little box that runs an AMD Geode CPU and boots off an SD card. No muss, no fuss. For an OS, I use OpenBSD because it's lightweight and easy to maintain. I used to use Linux, but the mish-mash nature of the GNU ecosystem is not for me.
I would also recommend getting an IPv6 tunnel from Hurricane Electric. It won't fully eliminate the headaches of NAT for you, but it'll help quite a bit.
That has not been my experience in the USA. Numerous ISPs have told me I have to pay extra ($10 a month!!) best-case, or get a business plan at worst.
Several providers had de facto static IPs so long as there were no outages, but there was never any guarantee to that effect unless I paid more.