Would this be alright for a NAS?
Right now I've got a shitty WD EX4100 and everything was sort of running along nicely with Docker and all, but today it rebooted and decided that it didn't want to do anything with Docker anymore. I got the thing before I got into Linux and it's time to move on.
Someone locally is selling the following for $250CAD
- Quad-core Celeron @ 2GHz
- 8GB RAM
- Fractal Node 304 case
- 2x WD 2TB Red 3.5" hard drives
- 120GB Kingston SSD
- 700 watt semi-modular power supply
All I run are the following:
- nzbget
- sonarr
- qbittorrent (but I'll switch to a better one)
- serve up content to my HTPC (running Kodi, so no transcoding or anything)
I don't need the drives that come with it. I'll be putting in 4x 4TB WD Reds. Right now the box is running Open Media Vault 6, so I'll give that a swing, otherwise it'll just be Ubuntu server.
How does this sound? I'm not opposed to spending some money on a new NAS, I just want something simple that I don't have to fuck around with too much.
I ended up going with the HP ProLiant:
OS: Ubuntu 20.04.3 LTS x86_64
Host: ProLiant ML310e Gen8
Kernel: 5.11.0-43-generic
CPU: Intel Xeon E3-1230 V2 (8) @ 3.700GHz
GPU: 01:00.1 Matrox Electronics Systems Ltd. MGA G200EH
Memory: 32GB
It's pretty good so far. Thanks everybody!
I'd probably go an alternate route; it seems a bit pricey for what you're going to do with it.
Buy a used workstation. Companies unload these things all the time for under $200, and they've got ECC RAM, which is exactly what you want in a server.
The extra RAM is critical if you use ZFS, but even regular usage benefits from being able to cache more files. On my next upgrade I intend to set up a RAM disk as a temp drive for in-progress torrents.
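If you want to do the same, a RAM disk on Linux is just a tmpfs mount; this is a minimal sketch (the mount point and size are made-up examples):

```
# Create a mount point and back it with RAM
sudo mkdir -p /mnt/torrent-scratch
sudo mount -t tmpfs -o size=8G tmpfs /mnt/torrent-scratch

# Or make it persistent with a line in /etc/fstab:
# tmpfs  /mnt/torrent-scratch  tmpfs  size=8G,mode=1777  0  0
```

Point the torrent client's incomplete-downloads directory at it and have finished files moved to spinning disk; just remember everything in it disappears on reboot.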
yeah, I've thought about that. I'm not sure it has enough hard drive bays, though. How do you think that HP ProLiant ML310e Gen8 I mentioned above would do?
What CPU cooler does it come with? I made the error of putting a Noctua tower cooler in my Node 304, it made it very difficult to install six hard drives as the SATA data/power cables take up the same space.
Also it might be worth looking into something like a used HP MicroServer. Having the drive bays accessible from the front so that you can hot swap a failed drive saves a lot of time, and ECC memory just might end up saving your data.
Good question about the cooler. Not sure. All they mention are the two intakes and one larger exhaust for cooling.
Here are the only photos I have of it. It doesn't look like it's got a big tower cooler or anything.
There's an HP ProLiant ML310e Gen8 posted for $200.
Do you think that'd be a better direction to go in? The price is definitely right.
That's likely to be fine. Some potential concerns to consider include:
thanks! I think I'll be passing on the one above. If it seems alright, I might go for that ProLiant Gen8 I mentioned in my other comment. I think it's got enough space and more than enough power for what I'm doing with it.
This stuff stresses me out :)
If you're not space-constrained, the ProLiant will certainly give you a much better bang for your buck. The biggest downside I can think of is that that Xeon may idle at a much higher power usage than a more consumer-oriented CPU.
(For what it's worth, the Z420 @vord linked would work fine for your purposes as well: you can mount 3.5" drives in 5.25" bays with cheap adapter brackets. The GPUs wouldn't do anything for you now, but might be fun to experiment with down the line.)
System-building can be stressful, to be sure. You're much less likely to accidentally destroy something now than in the bad old AT days, but there are definitely still surprise incompatibilities it's easy to run across. It can be fun to improvise around issues, though! Here's a recent war story of mine, which might even be inspirational:
As I've mentioned, I run a home server (it used to be just a NAS, but I've been putting more services on it). Relevant to this story: as of a month or so ago, it had 3x 1.5TB HDDs (in a ZFS RAID-Z pool), a basic power supply, a mid-tower ATX case with 3x 3.5" bays, 3x 2.5" bays, and 3x 5.25" bays, a tiny cheap-o video card to convince the motherboard to POST, and a bunch of other hardware that doesn't matter here, all running FreeBSD 12 (for reasons). I was running out of storage and saw a good deal on hard drives on Newegg, so I somewhat off-the-cuff sprang for 5x 4TB HDDs.
Due to limitations of ZFS, I can't expand my pool and replace the devices in-place; I need to set up the new drives as a new pool, migrate all the data, and then decommission the old pool.
Fortunately, ZFS provides a convenient tool called `zfs-send`, which makes it possible to turn a zpool into a streaming image that you can send over the network, or store somewhere, or do whatever with. I don't have enough room to store the image I'm going to create, but my desktop has enough drive bays and connectors to load everything up, so I can use it as a drive mule: send the snapshot over the network to it, and then swap the drives back into my NAS.

So, I start doing that, and immediately run into a problem.
I can't just pull the big video card out of my desktop to make room, because it's a gaming PC and the rest of the system was built on the assumption that you'd drop a big standalone video card in it: there's no integrated graphics, and it won't boot without a GPU. Fortunately, the server has a tiny little video card, because it's mostly scavenged from older revisions of this gaming desktop and has the same limitation! So, I can swap the video cards and mount all the new hard drives in my desktop. This setup is goofy (my server doesn't even have a monitor attached to use its temporary new hefty GPU on), but everything I need works.
My desktop runs Debian, so it's straightforward for me to install `zfs-dkms` and initialize the pool. Seeing `Size: 11T` in `df`'s output is pretty sweet! I do a couple of quick tests of `zfs-send` to try to get the invocation I want, then kick it off. I'm sending roughly 1.8TB over gigabit ethernet, so this takes a few hours. I go to bed.

In the morning, the send is complete. I check on it and immediately find an issue.
I want to keep my snapshots. There's nothing I know I need in them, but, well, that's the thing, isn't it? So I re-research the `zfs-send` commandline options, wipe the filesystem (the easiest way I found to do this was just to destroy and recreate the pool), and re-send.

Once the data has been sent again, I still don't see the snapshots! Turns out with the version of openzfs in Debian, you have to add a flag for `zfs list` to show snapshots. I'm pretty sure the first send actually worked fine. Oops. Anyway, now I'm ready to swap all the drives around.

Pulling all the drives out of my desktop results in a nasty scraped knuckle, but is otherwise smooth. Swapping the drives in the server is four-fifths smooth.
I could have sworn I had more adapters than that, but evidently not. Fortunately, the new pool has two-drive redundancy, so I can just leave one of the drives disconnected and operate in a degraded state pretty safely until an adapter arrives. I'll just mount them all, and…
Oops. I knew about that one, too. Oh well, I'll just leave it out of the case until that adapter arrives. (Fun side effect of these two problems, but not really an issue, is that I have to leave the OS drive loose in the case in order to reach with a power connector. SSDs don't really care about this, but it's amusingly unpolished and in line with the way this has been going.)
So, I boot the server, go to import the pool, and…
Guess it's time to upgrade to FreeBSD 13.
This goes fine. The correct order of operations with upgrading the OS and packages isn't super clear, but whatever I do works well enough for me to boot. At some point around here, I discover a very orthogonal but fairly serious issue: `sssd` caches my account if LDAP is down, but it's now up with nothing in it, so my user account on my laptop disappears.

This is a real problem. It's surmountable (I have local admin accounts on all my systems, and direct hardware access regardless), but really annoying, and it's preventing me from ssh-ing into the server for arcane reasons. I cobble together a connection by `su`-ing to local admin and copying the ssh keys from my regular user account. After the upgrade, importing the pool goes perfectly smoothly. I quickly mount LDAP's database, restart the daemon, restart `sssd` on my laptop, and check: my user account exists again. Whew.

At this point, everything is up and running. Woohoo! I order adapter brackets and a Molex-to-SATA adapter. When they come, I mount the last drive. It is extremely tight to finagle it into the bay, but I manage to do so without needing to take anything apart. I can actually mount the OS drive now, too.
And that's the end! It was a surprisingly bumpy ride, but in the end, I got everything working, and the only additional stuff I had to buy were the two adapters (and I was able to get up and running without them).
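For anyone who wants the broad strokes as actual commands, the whole migration boils down to roughly the following. This is a sketch rather than a transcript; the pool names (tank, bigtank), the host name (desktop), and the device names are all made up:

```
# On the drive mule (Debian): build the new two-drive-redundant pool from the 4TB disks
zpool create bigtank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# On the NAS: snapshot everything recursively so child datasets come along
zfs snapshot -r tank@migrate

# Stream the whole pool, snapshots included (-R), to the mule over ssh
zfs send -R tank@migrate | ssh desktop "zfs recv -Fu bigtank"

# On the mule: snapshots don't show up in a plain `zfs list`; ask for them explicitly
zfs list -t snapshot

# Afterwards, moving the pool physically back into the server is just
# `zpool export bigtank` on the mule and `zpool import bigtank` on the NAS.
```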
At the risk of starting a flamewar...
I migrated from FreeBSD with ZFS to BTRFS purely for that expand/replace problem. Slower, but on an average home network the disk speed isn't the limiting factor.
Being able to migrate disks/partitions online is some black magic. I once had a BTRFS root partition on a spinning disk, which I wanted to migrate to a smaller SSD. Added the SSD to the pool, rebalanced, removed the HDD from the pool. Then wiped the HDD and added it to the other HDD pool.
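In command terms, that whole dance is only three steps; a sketch with made-up device names, run against the mounted root filesystem:

```
# Add the SSD's partition to the (mounted) btrfs root filesystem
sudo btrfs device add /dev/sdb1 /

# Rebalance so data and metadata get spread across the current devices
sudo btrfs balance start /

# Remove the old HDD partition; btrfs migrates its chunks off before releasing it
sudo btrfs device remove /dev/sda2 /
```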
The added bonus of using mutt disk configs in a RAID 1 (6+4+2+1+1) is just fantastic. When one of those smaller drives dies, I pop open the case, replace it with a larger drive, boot, and rebalance. If I had a hot-swap chassis there wouldn't even be downtime.
Had another thought:
Keep in mind that they're largely the same silicon, just the Xeon has more L2/L3 cache, ECC support, and generally more cores. Not unlocked for overclocking, but only a madman would overclock a server. :)
Since this NAS is going to be 100% media, I'm really considering ditching all of the fancy RAID stuff and having separate volumes for each media type.
The second part... stressful. haha. So much of this stuff is a pain in the ass.
Not a bad idea. RAID is great for merging smaller drives so you don't have to juggle data between multiple smaller volumes, which is what I plan on doing for the 2x1TB Gen3 NVMes in my new build, but it's not worth the hassle for larger drives IMO, unless you really do need some redundancy.
p.s. Also worth mentioning for those in the storage market right now is that Amazon (.ca and .com) has a 36% off daily deal for the next 10 hours on 12TB WD Elements external drives... which are pretty damn easy to shuck - video. It's not a historic low price, but considering the inflated HD prices right now it's still a pretty good deal. I just picked up one for myself.
oh nice! I picked up another 4TB for $99 (WD40EFZX) from good ol' Canada Computers. The other drives are all WD40EFRX, but I think the ZX is essentially the same. I only really need it for the shuffling around anyway.
I'm picking this server up in about fifteen minutes, so we'll see how it goes.
From everything I have read, the 12TB Elements could have a bunch of different drives in it, and you never really know ahead of time which you will get. It's always a bit of a roll of the dice, so 🤞 for a newer 7200 rpm one.
Good luck with the new NAS! Before I can start putting my new build together, I'm still waiting on my motherboard to arrive, which got stuck at LAX for 4 days last week and has now been stuck in Vancouver for another 2 days, despite me paying $30 for Express shipping. :(
Once I do finally build it, I will make a /r/Battlestations style post about it here though, since I kinda went all out with a white case + AIO + PSU + tons of RGB (something I have never done before), and even a bunch of new cable management stuff too. So I also plan on completely redoing my old desk setup as well... and it should be pretty sweet when I'm done (which is why I am so excited!!). :P
You should do a similar write-up about your experience setting up the new NAS, IMO. :)
that's such a sweet setup. I'd love to have a smaller screen above my main like that.
Your new build is sick. I'd love to have something like that. I thought about building a proper gaming system, but I only play GTAV... and I really only drive around while I listen to podcasts. :)
Thanks! Hopefully it will look even sweeter once I'm done building the new PC, reorganizing my desk, and lighting it all. :)
Lately (before my old PC died) I was mostly driving around in Forza Horizon 5 while watching movies, shows, and youtube videos, so I'm in a similar boat. :P
And IMO you should think about giving dual monitors a try, even if you do only play GTAV. My 5 screens are ridiculous overkill, mostly made up of monitors I've slowly collected on the cheap over time and across several different PC builds. I would probably be perfectly fine scaling back to just 2 monitors, but I could honestly never go back to just 1 monitor ever again, since the 2nd is that much of a game changer. It's so much better for multitasking, and increases productivity dramatically.
p.s. If you're looking for a super cheap ultrawide, the one I have above my main 35" is an LG 25UM58-P 25" flat-panel IPS, which is currently only $220 (3 years ago I paid $170 for it during a sale). It's not the greatest monitor ever, but it's more than adequate for watching movies/shows/youtube, and for the price nothing else comes close to it, IMO.
right now I've got my MBP to my left as a second screen, and another on my right that's connected to a crappy beebox that runs foobar2000 and Kodi. Similar to this but with different speakers and more shit on the desk.
I'd love to have a vertical screen again. An ultrawide would be really handy. My main is a 1440p. I'd love to move to a bigger 4K or something, but I'm not sure my old late 2013 MBP would like it.
I picked up that HP today and it's chugging along nicely. Shuffling the 7TB around between the new drive and the other drives as they're emptied is going to be a total pain in the ass, but it'll finish eventually. Tomorrow I'm going to mess with some other transfer methods to see if I can get some better speeds out of the thing. Right now it's doing about 125MB/s or whatever with smaller files and into the 200s with larger ones. I figured it'd be faster... but a lot of this has me a bit out of my element.
Ah, okay, I mistook you for implying you didn't have a second monitor already. :P
If you're looking for accurate benchmarking of your drives, I use CrystalDiskMark... which you can then compare the results of to the WD specs for your models:
WD40EFZX = up to 175MB/s
WD40EFRX = up to 150MB/s
If you're getting significantly slower speeds than that, it could be due to a number of different reasons, but making sure you have new SATA III cables, instead of SATA I, II or even old/worn-down III cables, should probably be the first place you start. But TBH, it looks like your drives are operating well within their expected performance range. WD Reds, and NAS drives in general, are usually more about $/GB, and reliability, than speed.
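Since your box is on Ubuntu rather than Windows, CrystalDiskMark itself won't run there; for a rough equivalent you could grab a quick sequential-read number with hdparm or dd instead (just a sketch; substitute your actual device node):

```
# Quick sequential read benchmark of a single drive (cached + buffered reads)
sudo hdparm -tT /dev/sda

# Or a direct-I/O sequential read with dd (reads 4 GiB, bypassing the page cache)
sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct status=progress
```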
I built a NAS, so let me offer some specs for perspective and some explanation of why I chose my parts. For reference, I had originally planned to buy something like a Synology, but when I compared prices, I realized I could build something much nicer for only a bit more. The prices reflect the cost when I built this PC nearly a year ago.
Notice I haven't included the drives since there is some flexibility here. As people have mentioned, ideally you want CMR drives. The speeds are not especially important – 7200 rpm isn't necessary when your primary use case is serving media. For what it's worth, I use WD Red Plus drives despite the controversy surrounding WD for mislabeling NAS-ready drives. (But note that while the WD Red Plus drives are CMR, the similar-sounding WD Red drives are not.)
I would also recommend an SSD for an OS/cache drive.
For the cpu, I chose Intel for its QuickSync technology. Processors with QuickSync support have dedicated media encode/decode hardware built into the integrated graphics, so transcoding doesn't tie up the main cores. Obviously transcoding capabilities should be a priority when building a media server. Because my cpu supports QuickSync, a discrete gpu is unnecessary. But note that this cpu doesn't support ECC memory (nor the maximum clock speed of my ram, for that matter).
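If you ever do want to exercise QuickSync directly, a hardware transcode with ffmpeg looks roughly like this (just a sketch; it assumes an ffmpeg build with QSV support, and the filenames are made up):

```
# Decode H.264 and re-encode to HEVC on the QuickSync hardware instead of the CPU cores
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
       -c:v hevc_qsv -global_quality 24 -c:a copy output.mkv
```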
For the cpu cooler, I bought the cheapest thing I could find. That said, I absolutely loathed installing this cooler – I felt like I was going to split my motherboard in two. Maybe splurge here a bit if you don't want to deal with the frustration.
For the motherboard, I chose something with enough SATA ports for ~6 drives, just so I'd have room to upgrade. But I also wanted something with good IOMMU groups in case I wanted to use my NAS as a remote gaming rig too. This has yet to come to fruition. You can find something cheaper if you sacrifice the IOMMU groupings, which probably aren't important for you anyway.
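If you want to check the groupings on a board before committing to passthrough, the usual quick check is to walk sysfs (a generic sketch, nothing board-specific):

```
# Print each IOMMU group and the PCI devices inside it
# (shows nothing unless IOMMU/VT-d is enabled in firmware and the kernel)
for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```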
For the case, I basically went with the cheapest one I could find that could hold ~6 drives. It's kind of an eyesore, though, and it wasn't the most convenient case to work with. I'd recommend upgrading to a Fractal case if you can afford it.
Finally, for the memory and power supply, I was less picky, so I just went with generally recommended parts. I made sure the power supply would be beefy enough that I could add a mid-tier gpu in the future.
As an alternative to building a NAS from PC parts, you could of course buy a used rackmount server. I have less experience here (feel free to chip in, folks), but my understanding is that there are a few downsides that make this approach less desirable:
that's pretty slick. I probably should have built something, but the cost of it was just too high for what this box will be doing.
A proper NAS like this is so much better than the Synology (etc) ones. Having actual Linux is better in all ways.