Advice for first home server?
Hello,
I have a few questions. I don't want to waste money, so I want to use the hardware I already have, buying only a PSU and storage if needed.
PC:
- CPU: AMD Ryzen 5 1600
- RAM: 16 GB
- SSD: 125 GB for the OS
Services I think of running:
- Tor middle relay node
- Bitcoin node
- Monero (XMR) node
- Gitea or GitLab
- Maybe a service to host files, a LAN share, or a self-hosted cloud service
- Maybe a TeamSpeak or Minecraft server
Questions:
- Do I have enough power to run all of this, or am I being too greedy? I have some Raspberry Pis (not Pi 4s) sitting at home doing nothing; I could run some of these services on them if the computer can't handle everything.
- Should I virtualize? Can you explain your reasoning on this?
- I'm thinking of buying a good PSU since this will be running 24/7. Should I invest in an 80 Plus Gold or Platinum rated one, or something like that?
- Should I have multiple disks? If yes, can you explain how many and for what?
This will be my first server at home, so I would like to hear tips if you think I am forgetting something.
Thanks in advance.
Edit: visualize > virtualize
Running a bitcoin miner alongside this doesn't make sense. Cryptocurrency miners in general are GPU-bound and usually need dedicated hardware to be worth running (and honestly they're incredibly energy-wasteful, but whatever).
The rest doesn't require much power. Gitlab is somewhat memory-hungry I believe; I like it a lot but you'll probably be better off with Gitea.
Then the only thing that requires any meaningful amount of memory/CPU is Minecraft. A single-player server is fairly light, but it ramps up pretty rapidly with the number of players. So YMMV depending on how many people you intend to have on there, but if it's a tiny server you'll be fine running all of this on fairly cheap hardware.
I've no idea what gold platinum is. Buy a good PSU regardless; don't get an unbranded one. I like Corsair's but all the big PSU brands are similar.
When you're running services on a server (be it a home server or in a professional setting on AWS or something), you always need to ask yourself: "What kind of resources will this consume, how much of it, under what conditions, and how will those scale?"
For example: A tor relay will consume mostly CPU, little memory, scaling pretty linearly under load from additional users. A Minecraft server will consume a lot of disk space as the map gets explored, a high amount of RAM which more than doubles every time you double the amount of loaded chunks, etc.
So, should you have multiple disks? It depends. You mention you want to run a file share, so maybe? Those "multiple disks" need to solve a problem you're actually having, such as redundancy or disk space.
FYI, a bitcoin node is essentially just maintaining a copy of the bitcoin ledger on the local machine. It uses bandwidth and disk space, but very little CPU, as it's not doing mining / payment processing.
I see, thanks for the clarification :)
Thanks for your insight.
About the nodes: a full node is different from a miner. I don't intend to have a GPU in the server, only for the initial installation :p.
The Minecraft part makes sense; I will probably only think about that once everything else is running and see if the machine can handle it.
I asked about the disks because I don't want a single disk to become a bottleneck, since multiple services would be using it at the same time. Maybe I should look up how much the nodes use the disk and how much space they may need in the future.
I have edited the second question (one word), in case you have any insight about virtualization. The only benefit that comes to mind is that it's more secure to have things separated, but I don't know if the performance I am going to lose is a good trade.
I don't know a lot about setting up / using virtualization on servers for this sort of use case, other than developer-level experience. But I feel like you keep getting stuck on trying to find solutions to problems you don't necessarily have.
I recommend looking at it a different way: If you're curious about a technology (of any kind), rather than ask "Do I need it?", ask "What problems does it solve?". Then you'll know if/when you do need it.
Because yes, it's "more secure to have things separated" as you say, but if you understand why it's more secure, you'll know whether that's actually something solving a problem you have.
Sorry, I know this isn't terribly helpful, but what I'm saying here is: Keep experimenting, but try to understand why something is useful before using it; and then maybe you can get into a situation where it is useful, that way you can play with it! Otherwise, you'll just be like someone who has twenty different screwdrivers, but not a single screw in sight.
It is helpful, thanks
I would recommend virtualizing what you can. It makes you more independent from the actual underlying hardware at a minor cost on modern CPUs.
For redundancy, I would absolutely recommend running RAID1 or RAID6. RAID5 with disk sizes over 2TB is a risky business since the rebuild will take forever. You can look into ZFS, which has a triple-parity mode called RAIDZ3 (think RAID6, but tolerating one more disk failure). Keep in mind that ZFS doesn't let you change the parity level of a pool after it's been created; the only way to expand a specific pool is to either create a new one and copy all the data over, or replace every drive with a bigger one and rebuild between each replacement. ZFS requires a lot of planning for long-term use.
Any kind of redundancy will reduce available disk space, and the amount of redundancy determines how much you lose. RAID1 costs you 50% of your disk space, RAID6 costs you two disks' worth, and RAIDZ3 costs you three.
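If it helps to see the arithmetic, here's a rough sketch (assuming equal-sized disks and ignoring filesystem/metadata overhead, so real numbers come out a bit lower):

```
# Rough usable-capacity arithmetic for equal-sized disks; ignores filesystem overhead.
def usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """parity=1 for a 2-disk RAID1 mirror, 2 for RAID6, 3 for RAIDZ3."""
    if disks <= parity:
        raise ValueError("need more disks than parity")
    return (disks - parity) * disk_tb

print(usable_tb(2, 4, 1))  # RAID1 mirror, two 4 TB disks -> 4.0 TB (50% lost)
print(usable_tb(6, 4, 2))  # RAID6, six 4 TB disks        -> 16.0 TB (two disks lost)
print(usable_tb(6, 4, 3))  # RAIDZ3, six 4 TB disks       -> 12.0 TB (three disks lost)
```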
What do you recommend as the OS for virtualization, Proxmox? Do you recommend a VM for each service? And about having multiple machines: should I have a separate disk for each VM, or can the single SSD handle the read/write load of all the systems?
My CPU is a desktop CPU, do you think it can handle all the virtualization? 6 cores / 12 threads isn't that bad, I guess.
I don't know if redundancy is important to me (at least for now); I would say backups are more important. If the system stops working, I can wait a few hours or days until I have time to fix it.
Proxmox can be a decent start; it supports LVM and ZFS, and both can handle RAID for you.
I would recommend setting up one VM with Docker on it; that's where most services should run.
I create VMs for services only if they can't be put into a Docker container.
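For example, something like Gitea is just one container with a data volume and two ports. Most people would use `docker run` or Compose; the sketch below shows the same idea through the Docker SDK for Python (the image tag, host path and port mapping are placeholders, not a recommendation):

```
import docker  # the Docker SDK for Python ("pip install docker")

client = docker.from_env()

# Sketch: run Gitea with persistent data and its default web/SSH ports published.
client.containers.run(
    "gitea/gitea:latest",                  # placeholder tag; pin a version in practice
    name="gitea",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"3000/tcp": 3000,               # web UI
           "22/tcp": 2222},                # SSH, remapped so it doesn't clash with the host
    volumes={"/srv/gitea": {"bind": "/data", "mode": "rw"}},
)
```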
CPUs can handle plenty; server load is fairly different from desktop load. At the moment I run about 17 services concurrently on my NAS (Plex, Linux ISO seeding, Airsonic, etc.) and it's an ancient i5 with 4 cores.
For backups I personally use restic and push everything into Backblaze B2, which costs about half a dollar per terabyte.
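If it helps, the whole thing can be a tiny cron job. A minimal sketch wrapping the restic CLI (the bucket name, paths and credentials below are placeholders; restic reads the B2 keys and repository password from environment variables):

```
import os
import subprocess

# Placeholders: bucket, paths and credentials are examples only.
REPO = "b2:my-backup-bucket:homeserver"
PATHS = ["/srv/share", "/etc", "/var/lib/gitea"]

env = {
    **os.environ,
    "B2_ACCOUNT_ID": "<b2 key id>",
    "B2_ACCOUNT_KEY": "<b2 application key>",
    "RESTIC_PASSWORD": "<repository password>",
}

# Back up the paths, then thin out old snapshots so the bucket doesn't grow forever.
subprocess.run(["restic", "-r", REPO, "backup", *PATHS], env=env, check=True)
subprocess.run(["restic", "-r", REPO, "forget",
                "--keep-daily", "7", "--keep-weekly", "4", "--prune"],
               env=env, check=True)
```

(You'd run `restic -r <repo> init` once by hand to create the repository first.)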
I wanted to learn Docker and then Kubernetes, but I didn't have the motivation. I have Pis waiting to be used for that; plans for the future.
Nice to hear about the CPU; the 1600 is more or less on par with an i5.
I am going to back up locally: less money long term, and more secure in a way.
If it's local, it isn't really a backup. I guess it depends on how much you care about the data, but unless you are putting that stuff in a separate failure domain (fire / flood / lightning strike), you aren't doing much more than protecting against hardware failure, which you can already cover with ZFS + snapshots. I mean, it's better, but it's not that much better. If you really care about the data, it needs to be at least one other site.
Local backups aren't as secure as off-site backups. The common example is your house burning down. It's useful to have a local cache of the backup so you can quickly recover from localized disasters, but generally it is very, very recommended to have one backup off-site.
Of course you can encrypt it, so nobody knows what you stored.
Proxmox actually can handle docker and LXC for you :). As well as libvirt machines
I don't think it does docker, to my knowledge it only manages LXC. I use Portainer for Docker Swarm since they have a neat editor to manage stacks.
Yeah, you're actually right there. No docker :(
I've been running an Ubuntu box with mdadm and LVM for a while. It's been fine for my one-man (or friends/family) storage access and Plex server. mdadm has been useful as I went from a 5-drive RAID 5 to a 7-drive RAID 6, then expanded the drives from 1.5 TB to 3 TB to 8 TB over the past 15 years or so.
I remember looking into ZFS and it seemed like a tough proposition for dealing with volume expansion for a small budget versus an Enterprise budget that would be willing to put up the money for things like additional hardware for cutovers.
Yeah, ZFS is a pretty expensive setup to run, though most Reddit homelabbers like to pretend it isn't (tbh, every time you bring up that ZFS RAIDs can't be expanded, ZFS users seem to deliberately misunderstand and point out that you can extend the ZFS pool easily by adding new drives, as if that solves the problem).
Honestly, my experience with home servers is that they are not worth it, but that is largely because of crappy US ISPs. Most of them have a part in the service agreement specifically disallowing you from running servers, and even without that upload speeds tend to be terrible and dynamic IP addresses add another painful hoop to jump through. So for me, a cheap VPS from a provider like AWS, DigitalOcean, or Azure is worth the extra cost.
Dynamic IP addresses can be overcome with dynamic DNS, or even manually, since in my experience dynamic leases don't change too often unless you reboot the gateway. Some things like a Minecraft server don't require much upload bandwidth but do require a good CPU and lots of RAM, which can be expensive on a cloud host.
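To illustrate the dynamic DNS point: the whole dance is just "look up my public IP, tell the provider". A sketch, assuming your provider supports a dyndns2-style update URL (the endpoint, hostname and token below are placeholders; most providers also ship a ready-made client like ddclient):

```
import base64
import urllib.request

# Placeholders - check your DNS provider's docs for the real update endpoint.
HOSTNAME = "home.example.com"
USER, TOKEN = "username", "update-token"

# Ask an external service what our public IP currently is.
with urllib.request.urlopen("https://api.ipify.org") as resp:
    my_ip = resp.read().decode().strip()

# Tell the provider to point the hostname at that IP (dyndns2-style, HTTP basic auth).
req = urllib.request.Request(
    f"https://dyndns.example.com/nic/update?hostname={HOSTNAME}&myip={my_ip}")
creds = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
req.add_header("Authorization", f"Basic {creds}")

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # typically "good <ip>" or "nochg <ip>"
```

Drop that (or ddclient) into cron every few minutes and the hostname follows your lease.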
I run a home server for all my music and other files so I can access them wherever I am. I have done so for over 10 years. It’s not that bad, really. (I’m lucky enough to have symmetric gigabit fiber to my house these days though 😁)
I'm aware, it's just another annoyance to work through. Especially when you are running a device that is configured to use their junk ISP-run DNS servers that take ages to update.
I do have FIOS gigabit so there's really just dynamic DNS to contend with. Upload bandwidth is sitting at close to 800-900 Mbit.
Also running a caching DNS resolver (Pi-hole) forwarding to 1.1.1.1, for excellent local performance.
European here, I probably don't have those limitations.
The most they normally do is throttle my traffic speed; I will have to monitor the network to see if they do that. And if they do... well, I really doubt a call to them will change anything XD.
Normally when a football game is on, the ISPs throttle the streaming services. Nothing happens to them; some laws exist just to be written and read, not enforced.
Edit: Home servers might not be worth it, but running the nodes in the cloud defeats almost the whole purpose of being decentralized.
It depends on what services you'd like to run. For instance, I'm running pfSense as my main router and also a containerized Pi-hole to block ads on my network. I'm also running FreeNAS for storage and a Windows 10 VM so that my wife can play games through Steam on her laptop without disrupting my PC.
Oh yes, naturally internal-use servers are much less painful. It's just making it available externally where ISP nonsense starts to come into play.
How's the performance of gaming through the VM? I've never really considered a setup like that before and I'm curious how well it'd work (I assume CPU performance is comparable but graphics aren't great?).
The CPU takes a small hit. I passed through an RX 560 OEM card to the VM and plugged in a dongle that fakes a monitor. I haven't checked the performance on a more demanding game; my wife says it plays The Sims better that way than on her Core 2 Duo laptop.
Recommendation: GitLab is incredibly bloated; Gitea/Gogs are hardly noticeable, resource-wise. You'll have to figure out a separate CI, though; Gitea/Gogs don't come with one (but you'll still come out ahead resource-wise vs. GitLab, which regularly hogs 6 GB on a single-user instance).
But yeah, you should have no problem otherwise; that's more than enough for everything.
I'd recommend running everything you can in a docker container. It just makes it a bit easier to update and stop/start etc.
I'm pretty sure the Bitcoin blockchain is like 150 GB, so you're going to burn a HEAP of space on that alone. I've seen guides on running a BTC node on a Raspberry Pi, so it shouldn't be TOO demanding.
My advice?
Don't run a server for this in your home. I mean, you can, but it's... Painful. Especially for public-facing services. Instead, lease a server, or start with a small cloud server.
It's pretty great not to have to worry about power, internet connectivity, hardware failures, etc. And you can do all the virtualization you like on a dedicated box. They're not too expensive, with some hunting. I use Hetzner, and got a pretty beefy box for ~US$35/month. Soyoustart is pretty good too, and they have US-based servers.
I did this last year; I prefer owning the hardware. And I don't know what you mean by painful, this is a hobby for me :D.
Running the cryptocurrency nodes in the cloud defeats the purpose of decentralization, and I want to support the network.
If you're running a public-facing service, like web, email, etc., most home ISPs block this sort of thing, and your upload speed is crap. Also, you're at the whims of your residential power provider, unless you have your own generator set, fuel supply, etc.
Also, you get to deal with drive failures, power supply failures, etc etc.
I mean, if you want to do it just to do it, go for it. I ran a handful of servers in the house for a while. The costs, though, were higher than what I pay for leasing a dedicated box from Hetzner.
I'll hold my tongue on bitcoin and other cryptocurrencies though, as they are just a colossal waste of resources, imo.
It's really not that tough, I have a home web/ssh/ftp server. I easily forwarded all of the ports. Not having a static public ip can be annoying, but it doesn't change often and I set up dynamic dns. I don't have many outages and I don't need better than 95% uptime, and if you experience psu/drive failures on a small home server you probably should get better parts. Do the drives in your desktop regularly fail?
I have had drives fail, yes. And power supplies. When you run 5 disks per RAID group on 5-10 servers, there's a good chance you will get a drive failure or three if you run used gear.
But you are right: if you don't care about the uptime, you're good to go doing it yourself on a shoestring.
I understand the possibility of having a failure, but look at the services I am going to run; if they stop working it's not a biggie.
All I need to back up is the share, the config files and the Gitea database. I would understand all of that if I were running a business.
Maybe it's a good idea to back up to the cloud like others have said.
Yeah, if it's nothing others will be hitting, go for it.
I'd say for many things this makes sense, but for media services you can't beat the experience of running stuff at home. ISP outage? Who cares. Stream 4K all day long, who cares. Disk problems are basically solved. Compute redundancy is kind of pointless, but I can next-day a replacement board whenever one fails, and they tend to run for 4-5 years, so meh.
It's obvious that the economies of scale make cloud providers better for most stuff. I have a local mail relay in my house, and I rent a £3-per-month VM from OVH for the web presence. Everything else I run locally. IPv6 makes addressing a snap.
You run an email server out of your house? How deliverable is your mail? I know I personally block any DHCP pool address on mail servers I run.
But yeah, a lot of stuff makes more sense to run in your home (Like media servers).
It forwards mail destined for the internet via a VPS. It's convenient to have an SMTP relay server on the home network to buffer messages and manage cron mail for local domains (I deliver mail directly to my workstation for anything that goes wrong in the house).
I don't know much about the other stuff, but you can easily run Gitea and a Tor middle relay (configured accordingly) on your Pi - they use very little memory/CPU when idle and otherwise don't put much load on it. I would also set up a hidden service to access Gitea. You can set up Pi-hole to block ads on your devices (a DNS proxy that blocks ad networks by not resolving them).
I like the idea of putting Gitea on a hidden service; even if I am not going to use it, I would like to learn how to do it.
It's very easy once you've set up a web server. IMO this is easier than buying a domain, setting up certificates, dynamic DNS and other stuff.
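For the record, the plain torrc way is just a HiddenServiceDir / HiddenServicePort pair pointing at Gitea's HTTP port. Here's the same idea as a sketch using the stem library's ephemeral services, assuming Gitea listens on 127.0.0.1:3000 and Tor's ControlPort is enabled on 9051 (those ports are assumptions, adjust to your setup):

```
from stem.control import Controller  # "pip install stem"

# Publish an ephemeral onion service forwarding port 80 of the .onion address
# to a local Gitea instance on 127.0.0.1:3000. Assumes torrc has "ControlPort 9051".
with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    service = controller.create_ephemeral_hidden_service(
        {80: 3000},              # onion port -> local port
        await_publication=True,  # block until the descriptor is published
    )
    print(f"Gitea is reachable at http://{service.service_id}.onion")
    input("Press enter to tear the service down...")  # service dies with the controller
```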
The problem with a Pi is that you shouldn't use the SD card for anything other than booting (the larger the card, the hotter it'll get and the shorter its life, especially with more use), and adding storage faster than USB 2 is complicated. I have run a Minecraft server on my Pi 3, but I've only ever tried it with one player.
There are SBCs with SATA ports though, I think Odroid might make them. That could be a better solution than a Pi.
Have you considered running a Minetest server? It's a lot less heavy on system resources. It's also open source.
I tried it for 10 minutes; I don't think I can start playing Minetest instead.
There's a mod for it called MineClone 2. It makes it exactly like Minecraft.
I can give it a try I guess, but the hard part is not convincing me, but the people I play with.
That is always a problem. It is free though, so that should help.