Is a NAS for me?
Hi, I keep reading about this thing called a "NAS", and I don't have a bunch of reasonable geeks in my social circle to help me figure out whether it's something for me, or whether it's overkill and I could get by with less -- trying to be frugal and all.
The Situation
At the moment, I have a Raspberry Pi 3 (that a colleague gifted me) which runs Jellyfin, mostly for music. I'd use it for watching series and movies too, but given how slow it is at transferring files and the fact that it has 1 GB (maybe 2 GB) of RAM... I've been afraid of breaking it. On top of that, its storage is a years-old external hard drive.
I use Jellyfin mostly to have music on my iPhone; I can access it over Tailscale when I'm out and about. I hope to find a solution for my photos as well.
I'd also occasionally use the pi to experiment with some self-hosted open-source apps.
I constantly find myself wanting to upgrade because I also want to back up my important photos (with face recognition if possible) and documents "offline" (i.e. on my local network) to something more stable than an aging hard drive. They're all in the cloud, but a second backup option would be great.
What I understand from reading about NAS's is that I basically have one, it's just not... reliable?
The Question
I understand there is definitely a buy-in cost for an actual NAS; I'd like to know how much, so I can make an informed decision on if and when I would buy one. What is an entry-level NAS, and how much will it cost? What could it NOT do that an RPi could, and vice versa? Am I missing an in-between or even an alternative solution for my use case? Is it overkill, and should I just upgrade the Pi? What are my options?
Thanks in advance for reading my post!
A NAS is literally just a computer that has some storage and makes it available over a network. A Pi can be a NAS if it's set up in a way that fits that.
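To make that concrete, here's roughly what turning a Pi into a bare-bones NAS looks like. This is a minimal sketch, assuming Raspberry Pi OS/Debian and a drive already mounted at /mnt/storage; the path and share name are placeholders, not anything from the post above.

```
# Install Samba and share a directory over the local network.
sudo apt install samba

# Append a share definition (adjust the path and share name to taste).
sudo tee -a /etc/samba/smb.conf <<'EOF'
[storage]
   path = /mnt/storage
   read only = no
   browseable = yes
EOF

# Give your user a Samba password and reload the service.
sudo smbpasswd -a "$USER"
sudo systemctl restart smbd
```

At that point any laptop or phone on the network can browse the share, which is the core of what a commercial NAS does out of the box.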
The usual recommendation for people who want a home device manufactured to be a NAS is Synology's DS series. You'll notice the specs on those sound suspiciously like a desktop computer, because that's essentially what they are; they just have an OS designed explicitly for NAS use and a case that allows for easily messing with hard drives. People run home servers on those too, because they're just funny shaped Linux boxes and they can run Docker containers like anything else.
The in-between alternatives are all variations of "get a computer that's bigger than a Pi and put drives in it". There are as many options there as there are computers that run Linux. There are also things like Unraid and TrueNAS that are the same sort of thing as the Synology OS but for arbitrary hardware.
Running Docker containers sounds pretty nice. Any downsides to that versus other ways of doing things?
It's another thing to learn, but other than that, mostly just the slight disappointment of admitting the "fuckit just pack up the whole OS instead of building real packages" style of distribution won.
I don't run any of these NAS-focused OSes (my machines are bare Linux boxes managed with a custom tool) but I've stuffed everything I run in containers because that's what's most likely to Just Work these days.
An important note for @skybrian and anyone else considering this solution, if you're not particularly technologically inclined or aren't super familiar with Linux, one of the pros of docker containers is that you need to learn a lot less to get them working. Ultimately someone else did the heavy lifting of figuring out how to containerize it and make it run with very little setup and technological know-how from the end-user.
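As a sketch of what "very little setup" means in practice, this is roughly all it takes to get Jellyfin going in a container, assuming Docker is already installed; the host paths (/srv/jellyfin/config and /srv/media) are placeholders you'd swap for your own.

```
# Pull and start the Jellyfin image; config and media live on the host,
# so the container itself stays disposable. Web UI ends up on port 8096.
docker run -d \
  --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
```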
I don’t see any disappointment at all here. Especially since, if something fucks up, I can just wipe that one container out of existence — unlike a system package, which can end up contaminating any manner of other components, and can be downright impossible to get to coexist with other applications due to dependency conflicts.
Disappointment that that's what we ended up with as a solution, and not something like Nix but user friendly. It's certainly a solution; it's just a particularly inelegant one IMO.
It's an elegant solution for the real world.
If all developers of all applications always updated their applications to the latest version of all dependencies, if package managers always operated perfectly, maintained perfect awareness of all files on a machine associated with all packages at all times, and tracked their dependencies perfectly, and if breaking changes to dependencies were always perfectly communicated to and acted upon by those downstream software packages, then yes, we would have no need for containers.
That's not the world we live in, though. So abstracting a minimal file system with all of an application's dependencies into one logical group, and then associating networking with that group, is a pretty elegant solution imo.
You have assurance that when you get rid of a container, everything associated with the logic of that application is gone.
You never have to worry about being on an "optimal" or "tested" version of a package's dependency, because the developer explicitly specifies the dependency.
My only gripe is with developers of containers not keeping their dependencies up to date with regard to security hotfixes, but that's more a bone to pick with developers not adequately caring about the security of their applications than anything else. The same thing happens frequently with more natively bundled dependencies.
It sounds like it does the job and the remaining question is whether I should be running a home server at all. Cloud syncing and occasional offline backups (backing up to a portable drive) pretty much cover what I want to do.
I'm also a bit wary of ending up running old operating systems and software 24-7 without automatically applied security updates. Offline backups have an air gap and that seems pretty secure.
No, it's pretty fit for purpose. A Synology NAS with Docker running works well and isn't too difficult to set up.
Depending on what you're doing and your general comfort with tech and software, it may be quicker for you to just install things the old fashioned way. It's an extra layer of complexity to learn but that's about it.
I struggle to understand their setup and maintenance (my workplace has Docker, Kubernetes, and other stuff in a spaghetti hodgepodge depending on which product you're working on, and they all use them differently), but I can't deny that it's super useful to just restart a container when you screw something up and want to start from a clean slate.
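That clean-slate restart really is as simple as it sounds. A sketch, assuming a container like the Jellyfin one above with its config bind-mounted from the host (paths are again placeholders):

```
# Throw the broken container away and recreate it. Anything you chose to
# bind-mount (config, media) survives on the host; delete
# /srv/jellyfin/config too if you want a truly blank slate.
docker stop jellyfin && docker rm jellyfin
docker run -d --name jellyfin --restart unless-stopped \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
```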
The Raspberry Pi 3 has its Ethernet port on the same bus as the USB ports. This causes contention when using both USB and Ethernet at the same time, like when running it as a NAS. The Raspberry Pi 4 and 5 moved Ethernet to a separate bus, so the Pi can use both simultaneously. This means upgrading to a newer Pi will get you more RAM, faster Ethernet, and better performance.
I started out by getting a Synology DiskStation DS220+ about 2.5 years ago to primarily serve as storage for my Plex library and act as my Plex server. (Note: the DS220+ has been replaced by the DS223+, which is essentially the same thing but with a better CPU).
The buy-in cost was $300, plus another $250 for two 6 TB Seagate IronWolf NAS drives, and $100 for a 4 GB RAM upgrade (so I could run more Docker containers).
(I recently just upgraded to the DS423+ so I could take advantage of using M.2 SSD for a cache pool, the upgraded CPU, and two extra drive bays.)
You can absolutely achieve the same functionality for cheaper, but truly the Synology is quite easy to use, extremely versatile, and can do a lot. And Synology support is also top tier: I had some RAM sectors go bad about 3 months after my 2 year warranty expired but they still RMA'd my device and replaced it with a new one.
I can't really say what the NAS can do that the Pi cannot, but a lot of the NAS features are quite easy to set up and use with the Synology, and being able to easily set up and run Docker containers means you have a lot of options for anything that isn't already built in.
I think the key benefit you'd pick up would be hotswappable drives and RAID (if you went with such a configuration) alongside more processing power to run something like Plex.
Turnkey solutions for stuff like backup and the like are a plus for some, but I'm not sure a turnkey app would give you 'more' -- possibly just simpler.
If you're not dealing with multiple TBs of data, and you're comfortable with the Pi, I'd suggest buying another external drive and setting up a software RAID1 across the two (see the sketch below). Then when the older drive dies, you can replace it and keep on chugging.
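A sketch of what that software RAID1 looks like with mdadm. The device names and mount point are placeholders (check yours with lsblk first), and note that --create wipes whatever is on the drives:

```
# Mirror two drives into a single md device, then format and mount it.
sudo apt install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid

# Check the mirror's health and resync progress.
cat /proc/mdstat
```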
One thing I’d like to note: Especially for lower quantities of data (anything up to 1-2 TB these days really), I’d definitely go with an SSD for the new drive(s).
More robust and faster speeds; with hard drives, let's just say you get what you pay for.
Unpopular opinion, but I generally don't think RAID setups make sense for homelabs unless you truly want to just tinker with storage configurations. (And if you are in that group, you should do it!)
I've had some bad experiences with RAID1 setups, so I'm at a point where I just back up what I want to another drive (and then cloud backups on top of that). I don't do 1:1 backups this way, but you absolutely could with a variety of sync/copy tools run on a schedule/cron (a simple rsync sketch is below). The only thing you don't get from this setup is failover, but I'd be curious to hear what setups on homelab-priced equipment would let a drive fail in a RAID without stopping boot, etc. until it's acknowledged.
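For the sync-on-a-schedule approach, something like this is enough. A sketch with placeholder paths: the data lives under /srv/data and the second drive is mounted at /mnt/backup.

```
# One-way mirror of the data onto the backup drive; --delete makes the
# destination match the source exactly, so removals propagate too.
rsync -a --delete /srv/data/ /mnt/backup/data/

# To run it nightly at 03:00, add this line via crontab -e:
# 0 3 * * * rsync -a --delete /srv/data/ /mnt/backup/data/
```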
In terms of upgrading your hardware for more computing power or RAM to power your containers, I highly recommend poking around eBay to find something cheap that does the job. I have an old Dell workstation with a many-core Xeon CPU that was pretty affordable. If you're looking for something with a small physical footprint, that's a little harder, but you will at least need space for whatever number of disk drives you're using.
I don't really agree with this take. Depending on your needs, either ZFS (through TrueNAS) or unRAID are both great for homelab use.
I'm definitely biased due to my bad experiences. One time was partially my fault - I did a hardware RAID and wasn't able to replace the motherboard I used for it with the same model, so the data was pretty much unrecoverable. (A RAID card would have solved this issue, assuming it would be easy to buy a replacement.)
My second run of bad luck is when I had a software RAID1 on Linux. It worked just fine until I had a power outage, and the UPS I had was faulty - it kept clicking the power on and off repeatedly before I could stop it. This actually killed those drives - they're completely unusable now. And the manufacturer (calling you out Cyberpower) said I can kick rocks because it was out of the 1yr warranty period. Their connected equipment warranty only applies during that time period.
I did have other JBOD drives connected to the same machine, and they were fine. So it was something about the software RAID that made the frequent power cycling a problem.
Oh yeah, frequent power cycling like that will absolutely still fuck with modern software RAID. I wouldn't recommend a NAS to anyone without a UPS (that works).
To add though, OP's use cases would also be best served by a NAS OS, at which point there's no reason not to RAID.
I know this isn’t really a response to your actual question but I wanted to share this self hosted photo management app (since you mentioned wanting to find one in your post):
https://immich.app
Seconded. You need to try immich.
I cannot give advice, but I can describe my situation and you may take it as you will.
In 2013, I bought a Gigabyte C1037UN-EU motherboard and 2x 3TB HDDs (5400 rpm), plus other hardware to make a full PC out of it. I installed Linux on it and set up the HDDs in RAID1 (mirror; if one fails, the other one still has all the data).
I set up my home directory on it and access it over NFS on my local network - this DIY NAS became my (and other users'/family members') home directory; my desktop has only a boot drive.
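For anyone curious, the NFS side of a setup like this is roughly the following. A sketch with placeholder paths and hostname, assuming a Debian-based server and a 192.168.1.0/24 LAN:

```
# On the server: export the home directories to the local subnet.
sudo apt install nfs-kernel-server
echo '/srv/home 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On a desktop client: mount it as /home (or make it permanent in /etc/fstab).
sudo mount -t nfs server:/srv/home /home
```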
Being the home directory on the network, it also held the family photos, and I had a DLNA server running to access photos (and later some movies) on the TV over the network.
I had this running for almost 10 years. After that time I upgraded (out of fear of the HDDs going bad after 10 years).
Today I have an i5-4670k (or whatever) with 8 GB RAM (16 GB waiting for installation), a GTX 750 (for HW encoding), an SSD as the OS drive, and 3x 4TB (7200 rpm) in RAID5 (one can fail and the data is still available; scalable to make it bigger by adding another HDD). This has been my new DIY NAS for a year and a few months. It still has users' homes on it; it doesn't run DLNA anymore, I switched to Jellyfin. It also runs a Tvheadend server, and I have a bunch of RPi 4s with TV HATs (tuners) to basically make IPTV from the DVB-T2 signal. It also runs a DNS server and the Ubiquiti network controller (or whatever it's called) for my APs. Oh, and also the Brother scanner program, so I can scan directly from my printer without interacting with a desktop PC or phone.
I made the server the centerpiece of my PC setup at home. It started more than 10 years ago and evolved from standard end-user hardware into a home server, not just a NAS.
If you just want a server running Jellyfin, acting as a NAS (a big disk on the network to offload your big files to), and maybe something like Pi-hole, a Raspberry Pi would do. If you want it to be more versatile or to have more oomph, you can do it with some older PC hardware. I think Linus Tech Tips made a video (a year or two ago) about making a DIY NAS from an older office PC you can get for less than, say, 100€. Maybe have a look at it? There is just one BIG drawback: power draw. For example, my server runs at 50W idle; when I give it full tilt, it goes over 100W like nothing, maybe as high as 120-130W. Compare that to an RPi with one HDD drawing probably 10W...
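To put rough numbers on that, here is the back-of-the-envelope math; the 0.30 €/kWh electricity price is purely an illustrative assumption, adjust for your own rate.

```
# Yearly energy use and cost for a given average draw, in watts.
cost() { awk -v w="$1" 'BEGIN { kwh = w/1000*24*365; printf "%s W -> %.0f kWh/year -> ~%.0f EUR/year\n", w, kwh, kwh*0.30 }'; }
cost 50   # the DIY server at idle: ~438 kWh, ~131 EUR/year
cost 10   # a Pi with one HDD:       ~88 kWh,  ~26 EUR/year
```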