8 votes

Would this be alright for a NAS?

Right now I've got a shitty WD EX4100 and everything was sort of running along nicely with docker and all, but today it rebooted and decided that it didn't want to do anything with docker anymore. I got the thing before I got into Linux and it's time to move on.

Someone locally is selling the following for $250CAD

  • Quad core Celeron @ 2GHz
  • 8GB RAM
  • Fractal Node 304 case
  • 2x WD 2TB Red 3.5" hard drives
  • 120GB Kingston SSD
  • 700 watt semi-modular power supply

All I run are the following:

  • nzbget
  • sonarr
  • qbittorrent (but I'll switch to a better one)
  • serve up content to my HTPC (running Kodi, so no transcoding or anything)

I don't need the drives that come with it. I'll be putting in 4x 4TB WD Reds. Right now the box is running Open Media Vault 6, so I'll give that a swing, otherwise it'll just be Ubuntu server.

How does this sound? I'm not opposed to spending some money on a new NAS, I just want something simple that I don't have to fuck around with too much.


I ended up going with the HP ProLiant

OS: Ubuntu 20.04.3 LTS x86_64
Host: ProLiant ML310e Gen8
Kernel: 5.11.0-43-generic
CPU: Intel Xeon E3-1230 V2 (8) @ 3.700GHz
GPU: 01:00.1 Matrox Electronics Systems Ltd. MGA G200EH
Memory: 32GB

It's pretty good so far. Thanks everybody!

19 comments

  1. [2]
    vord
    (edited )
    Link
    I'd probably go an alternate route, it seems a bit pricey for what you're going to do with it.

    Buy a used workstation. Companies unload these things all the time for <200, and they've got ECC RAM, which is the best for servers.

    The extra RAM is critical if you use ZFS, but even regular usage benefits from being able to cache more files. On my next upgrade I intend to set up a RAM disk as a temp drive for in-progress torrents.
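
    A sketch of what that RAM disk could look like on Linux, with a made-up mount point and size (anything written there lives in RAM and vanishes on reboot, which is fine for torrents that get moved to disk on completion):

        sudo mkdir -p /mnt/torrent-scratch
        sudo mount -t tmpfs -o size=4G tmpfs /mnt/torrent-scratch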

    5 votes
    1. tomf
      Link Parent
      yeah, I've thought about that. I'm not sure it has enough hard drive bays, though. How do you think that HP Proliant ML310e Gen8 I mentioned above would be?

      3 votes
  2. [2]
    asymptotically
    Link
    What CPU cooler does it come with? I made the error of putting a Noctua tower cooler in my Node 304, and it made it very difficult to install six hard drives because the SATA data/power cables take up the same space.

    Also it might be worth looking into something like a used HP MicroServer. Having the drive bays accessible from the front so that you can hot swap a failed drive saves a lot of time, and ECC memory just might end up saving your data.

    3 votes
    1. tomf
      Link Parent
      Good question about the cooler. Not sure. All they mention are the two intakes and one larger exhaust for cooling.

      Here are the only photos I have of it. It doesn't look like it's got a big tower cooler or anything.

      There is an HP ProLiant ML310e Gen8 posted for $200

      • keys included
      • Intel Xeon E3-1230 V2 4C8T - 32GB RAM
      • PSU has been replaced with EVGA 750
      • 4x 3.5” HDD Bay
      • 2x 5.25” HDD Bay included

      Do you think that'd be a better direction to go in? The price is definitely right.

      3 votes
  3. [13]
    whbboyd
    Link
    That's likely to be fine. Some potential concerns to consider include:

    • @asymptotically's concern about clearances in the case is legit. Compact cases often put a lot of unpublished limitations on the dimensions of things. My NAS is in a cheap ATX mid tower; it lives in my basement, so I don't care how big it is, and I don't have to worry about clearances, capacity, or hunting down unusual or specialized parts to make everything fit.
    • Check that the motherboard has enough SATA ports! A lot of mini-ITX boards only have four ports, which will leave you without a port for your OS drive if you load it up with four data drives.
    • The CPU is weak. For the workloads you've outlined, it'll probably work fine; but if you start trying to do much else with it, it will likely struggle. I'd try to find out what socket it is, and if there are reasonable upgrades you can drop into that socket.
    • The RAM amount is a little weak, too (though again, fine for what you're running), but this should be upgradable if needed.
    2 votes
    1. [12]
      tomf
      Link Parent
      thanks! I think I'll be passing on the one above. If it seems alright, I might go for that ProLiant Gen8 I mentioned in my other comment. I think it's got enough space and more than enough power for what I'm doing with it.

      This stuff stresses me out :)

      3 votes
      1. [11]
        whbboyd
        (edited )
        Link Parent
        If you're not space-constrained, the ProLiant will certainly give you a much better bang for your buck. The biggest downside I can think of is that that Xeon may idle at a much higher power usage than a more consumer-oriented CPU.

        (For what it's worth, the Z420 @vord linked would work fine for your purposes as well: you can mount 3.5" drives in 5.25" bays with cheap adapter brackets. The GPUs wouldn't do anything for you now, but might be fun to experiment with down the line.)

        System-building can be stressful, to be sure. You're much less likely to accidentally destroy something now than in the bad old AT days, but there are definitely still surprise incompatibilities that are easy to run across. It can be fun to improvise around issues, though! Here's a recent war story of mine, which might even be inspirational:


        As I've mentioned, I run a home server (it used to be just a NAS, but I've been putting more services on it). Relevant to this story, as of a month or so ago, it had 3x 1.5TB HDDs (in a ZFS RAID-Z pool), a basic power supply, a mid-tower ATX case with 3x 3.5" bays, 3x 2.5" bays, and 3x 5.25" bays, a tiny cheap-o video card to convince the motherboard to POST, and a bunch of other hardware that doesn't matter here, running FreeBSD 12 (for reasons). I was running out of storage and saw a good deal on hard drives on Newegg; so I somewhat off-the-cuff sprung for 5x 4TB HDDs.

        • I'm going to bullet the problems I encountered like this. This first one isn't really a problem, but more of a warning: now is an awful time to be buying hard drives. "Shingled magnetic recording" (or "SMR") technology enables significantly improved density at the cost of making some write patterns catastrophically bad. Critically for pooled storage, resilvering is one of the most pathological cases, which is… just really, really bad. Hard drive manufacturers have been quietly rolling out SMR technology because it's cheaper for a given capacity, and mitigating the write performance through tricks like big caches. There was a huge scandal around WD trying to hide this a few years ago. Anyway, the upshot is, do your research and make absolutely certain the drives you're buying for pooled storage are not SMR (the older technology, which you want, is called "conventional magnetic recording" or "CMR"). Drives being marketed "for NAS use" is not good enough.
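
        A quick way to check what a given drive actually is (device name hypothetical) is to pull the exact model string with smartctl and look it up against the CMR/SMR lists the manufacturers have since published:

            sudo smartctl -i /dev/sda    # the "Device Model:" line gives the exact model to look up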

        Due to limitations of ZFS, I can't expand my pool and replace the devices in-place; I need to set up the new drives as a new pool, migrate all the data, and then decommission the old pool.

        • Problem 1: the server doesn't have remotely enough bays or—more critically—SATA ports to connect nine drives (don't forget the OS drive!) simultaneously.

        Fortunately, ZFS provides a convenient tool called zfs-send which makes it possible to turn a zpool into a streaming image that you can send over the network, or store somewhere, or do whatever with. I don't have enough room to store the image I'm going to create; but my desktop has enough drive bays and connectors to load everything up, so I can use it as a drive mule, send the snapshot over the network to it, and then swap the drives back into my NAS.
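
        For reference, the basic shape of that transfer is something like this (pool and host names invented; -R rolls the datasets and their snapshots into a single stream):

            zfs snapshot -r tank@migrate
            zfs send -R tank@migrate | ssh desktop zfs receive -F newtank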

        So, I start doing that, and immediately run into a problem.

        • Problem 2: my desktop's video card is physically incompatible with mounting as many hard drives as I need to; it's too long and interferes with some of the drive bays.

        I can't just remove it, because it's a gaming PC and the rest of the system was built on the assumption you'd drop a big standalone video card in it and doesn't have integrated graphics (and won't boot without a GPU). Fortunately, the server has a tiny little video card because it's mostly scavenged from older revisions of this gaming desktop and has the same limitation! So, I can swap the video cards and mount all the new hard drives in my desktop. This setup is goofy (my server doesn't even have a monitor attached to use its temporary new hefty GPU on), but everything I need works.

        My desktop runs Debian, so it's straightforward for me to install zfs-dkms and initialize the pool. Seeing Size: 11T in df's output is pretty sweet! I do a couple of quick tests of zfs-send to try to get the invocation I want, then kick it off. I'm sending roughly 1.8TB over gigabit ethernet, so this takes a few hours. I go to bed.
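
        (Roughly like so, with device names as placeholders; raidz2 is what gives the pool the two-drive redundancy I lean on later:)

            sudo apt install zfs-dkms zfsutils-linux
            sudo zpool create -o ashift=12 newtank raidz2 \
                /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf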

        In the morning, the send is complete. I check on it and immediately find an issue.

        • Problem 3: the send didn't transfer all the snapshots.

        I want to keep my snapshots. There's nothing I know I need in them, but, well, that's the thing, isn't it. So I re-research the zfs-send command-line options, wipe the filesystem (the easiest way I found to do this was just to destroy and recreate the pool), and re-send.

        Once the data has been sent again, I still don't see the snapshots! Turns out with the version of openzfs in Debian, you have to add a flag for zfs list to show snapshots. I'm pretty sure the first send actually worked fine. Oops. Anyway, now I'm ready to swap all the drives around.
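
        (The flag in question, for anyone who trips over the same thing:)

            zfs list -t snapshot    # plain "zfs list" hides snapshots by default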

        Pulling all the drives out of my desktop results in a nasty scraped knuckle, but is otherwise smooth. Swapping the drives in the server is four-fifths smooth.

        • Problem 4: I need to power six drives (five data drives and the OS drive). The basic power supply in my server provides four SATA and two Molex plugs. Somehow—I'm seriously not sure how this is possible—I only have one Molex-to-SATA power adapter.

        I could have sworn I had more adapters than that, but evidently not. Fortunately, the new pool has two-drive redundancy, so I can just leave one of the drives disconnected and operate in a degraded state pretty safely until an adapter arrives. I'll just mount them all, and…

        • Problem 5: I need to mount two of the drives in 5.25" bays, but I only have one set of adapter brackets.

        Oops. I knew about that one, too. Oh well, I'll just leave it out of the case until that adapter arrives. (A fun side effect of these two problems, though not really an issue, is that I have to leave the OS drive loose in the case so a power connector can reach it. SSDs don't really care about this, but it's amusingly unpolished and in line with the way this has been going.)

        So, I boot the server, go to import the pool, and…

        • Problem 6: The version of ZFS in FreeBSD 12 is significantly older than the version in Debian Bullseye. The pool has feature flags the server OS doesn't know about. I can only import it read-only.
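
        (A read-only import, should you ever need one, looks like this; pool name hypothetical:)

            zpool import -o readonly=on tank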

        Guess it's time to upgrade to FreeBSD 13.

        This goes fine. The correct order of operations with upgrading the OS and packages isn't super clear, but whatever I do works well enough for me to boot. At some point around here, I discover a very orthogonal but fairly serious issue:

        • Problem 7: I use LDAP for user accounts on my systems. This is totally unnecessary, but handy for some things. The LDAP server runs on this same server, and I put the LDAP database on the ZFS storage pool. When I booted the server with the old pool out and the new pool not importable, LDAP looked at its empty, unmounted database directory and decided to come up with an empty database. sssd caches my account if LDAP is down, but it's now up with nothing in it, so my user account on my laptop disappears.

        This is a real problem. It's surmountable—I have local admin accounts on all my systems, and direct hardware access regardless—but really annoying, and it's preventing me from ssh-ing into the server for arcane reasons. I cobble together a connection by su-ing to the local admin account and copying the ssh keys from my regular user account. After the upgrade, importing the pool goes perfectly smoothly. I quickly mount LDAP's database, restart the daemon, restart sssd on my laptop, and check: my user account exists again. Whew.

        At this point, everything is up and running. Woohoo! I order adapter brackets and a Molex-to-SATA adapter. When they come, I mount the last drive. It is extremely tight to finagle it into the bay, but I manage to do so without needing to take anything apart. I can actually mount the OS drive, now, too.

        And that's the end! It was a surprisingly bumpy ride, but in the end, I got everything working, and the only additional stuff I had to buy were the two adapters (and I was able to get up and running without them).

        8 votes
        1. vord
          Link Parent
          At the risk of starting a flamewar...

          I migrated from FreeBSD with ZFS to BTRFS purely for that expand/replace problem. Slower, but on an average home network the disk speed isn't the limiting factor.

          Being able to migrate disks/partitions online is some black magic. I once had a BTRFS root partition on a spinning disk, which I wanted to migrate to a smaller SSD. Added the SSD to the pool, rebalanced, removed the HDD from the pool. Then wiped the HDD and added it to the other HDD pool.
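
          In command form the whole migration is only three steps, roughly like this (device names made up, assuming a filesystem mounted at /):

              sudo btrfs device add /dev/sdb /
              sudo btrfs balance start /
              sudo btrfs device remove /dev/sda /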

          The added bonus of using mutt disk configs in a RAID 1 (6+4+2+1+1) is just fantastic. When one of those smaller drives dies, I pop open the case, replace it with a larger drive, boot, and rebalance. If I had a hot-swap chassis there wouldn't even be downtime.
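
          The replacement dance is similarly short (devid and device names hypothetical):

              sudo btrfs replace start 3 /dev/sdf /mnt/pool
              sudo btrfs filesystem resize 3:max /mnt/pool    # grow onto the bigger drive
              sudo btrfs balance start /mnt/pool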

          4 votes
        2. vord
          Link Parent
          Had another thought:

          The biggest downside I can think of is that that Xeon may idle at a much higher power usage than a more consumer-oriented CPU.

          Keep in mind that they're largely the same silicon, just the Xeon has more L2/L3 cache, ECC support, and generally more cores. Not unlocked for overclocking, but only a madman would overclock a server. :)

          2 votes
        3. [8]
          tomf
          Link Parent
          Since this NAS is going to be 100% media, I'm really considering ditching all of the fancy RAID stuff and having separate volumes for each media type.

          The second part... stressful. haha. So much of this stuff is a pain in the ass.

          1 vote
          1. [7]
            cfabbro
            (edited )
            Link Parent
            Not a bad idea. RAID is great for merging smaller drives so you don't have to juggle data between multiple smaller volumes, which is what I plan on doing for the 2x1TB Gen3 NVMes in my new build, but it's not worth the hassle for larger drives IMO, unless you really do need some redundancy.

            p.s. Also worth mentioning for those in the storage market right now is that Amazon (.ca and .com) has a 36% off daily deal for the next 10 hours on 12TB WD Elements external drives... which are pretty damn easy to shuck - video. It's not a historic low price, but considering the inflated HD prices right now it's still a pretty good deal. I just picked up one for myself.

            3 votes
            1. [6]
              tomf
              Link Parent
              oh nice! I picked up another 4TB for $99 (WD40EFZX) from good ol' Canada Computers. The other drives are all WD40EFRX, but I think the ZX is essentially the same. I only really need it for the shuffling around anyway.

              I'm picking this server up in about fifteen minutes, so we'll see how it goes.

              2 votes
              1. [5]
                cfabbro
                (edited )
                Link Parent
                From everything I have read, the 12TB Elements could have a bunch of different drives in it, and you never really know ahead of time which you will get. It's always a bit of a roll of the dice, so 🤞 for a newer 7200 rpm one.

                Good luck with the new NAS! Before I can start putting my new build together, I'm still waiting on my motherboard to arrive, which got stuck at LAX for 4 days last week and has now been stuck in Vancouver for another 2 days, despite me paying $30 for Express shipping. :(

                Once I do finally build it, I will make a /r/Battlestations style post about it here though, since I kinda went all out with a white case + AIO + PSU + tons of RGB (something I have never done before), and even a bunch of new cable management stuff too. So I also plan on completely redoing my old desk setup as well... and it should be pretty sweet when I'm done (which is why I am so excited!!). :P

                You should do a similar write-up about your experience setting up the new NAS, IMO. :)

                2 votes
                1. [4]
                  tomf
                  Link Parent
                  that's such a sweet setup. I'd love to have a smaller screen above my main like that.

                  Your new build is sick. I'd love to have something like that. I thought about building a proper gaming system, but I only play GTAV... and I really only drive around while I listen to podcasts. :)

                  1 vote
                  1. [3]
                    cfabbro
                    (edited )
                    Link Parent
                    Thanks! Hopefully it will look even sweeter once I'm done building the new PC, reorganizing my desk, and lighting it all. :)

                    Lately (before my old PC died) I was mostly driving around in Forza Horizon 5 while watching movies, shows, and youtube videos, so I'm in a similar boat. :P

                    And IMO you should think about giving dual monitors a try, even if you do only play GTAV. My 5 screens are ridiculous overkill, mostly made up of monitors I have slowly collected on the cheap over time and across several different PC builds. But I would probably be perfectly fine scaling back to just 2 monitors. I could honestly never go back to just 1 monitor ever again though, since the 2nd is that much of a game changer. It's so much better for multitasking, and increases productivity dramatically.

                    p.s. If you're looking for a super cheap ultrawide, the one I have above my main 35" is an LG 25UM58-P 25" flat-panel IPS, which is currently only $220 (3 years ago I paid $170 for it during a sale). It's not the greatest monitor ever, but it's more than adequate for watching movies/shows/youtube, and for the price nothing else comes close, IMO.

                    1 vote
                    1. [2]
                      tomf
                      Link Parent
                      right now I've got my MBP to my left as a second screen then another on my right that's connected to a crappy beebox that runs foobar2000 and Kodi. Similar to this but with different speakers and more shit on the desk.

                      I'd love to have a vertical screen again. An ultrawide would be really handy. My main is a 1440p. I'd love to move to a bigger 4K or something, but I'm not sure my old late 2013 MBP would like it.

                      I picked up that HP today and it's chugging along nicely. Shuffling the 7TB around between the new drive and the other drives as they're emptied is going to be a total pain in the ass, but it'll finish eventually. Tomorrow I'm going to mess with some other transfer methods to see if I can get some better speeds out of the thing. Right now it's doing about 125MB/s or whatever with smaller files and into the 200s with larger ones. I figured it'd be faster... but a lot of this has me a bit out of my element.

                      1 vote
                      1. cfabbro
                        (edited )
                        Link Parent
                        Ah, okay, I mistook you for implying you didn't have a second monitor already. :P

                        I figured it'd be faster... but a lot of this has me a bit out of my element.

                        If you're looking for accurate benchmarking of your drives, I use CrystalDiskMark... which you can then compare the results of to the WD specs for your models:
                        WD40EFZX = up to 175MB/s
                        WD40EFRX = up to 150MB/s

                        If you're getting significantly slower speeds than that, it could be due to a number of different reasons, but making sure you have new SATA III cables, instead of SATA I, II or even old/worn-down III cables, should probably be the first place you start. But TBH, it looks like your drives are operating well within their expected performance range. WD Reds, and NAS drives in general, are usually more about $/GB, and reliability, than speed.
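
                        Since your server is on Ubuntu, a rough equivalent sanity check for raw sequential reads (device name hypothetical):

                            sudo hdparm -t /dev/sda    # buffered sequential reads, sampled over ~3 seconds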

                        1 vote
  4. [2]
    psi
    (edited )
    Link
    I built a NAS, so let me offer some specs for perspective and some explanation for why I chose my parts. For reference, I had originally planned to buy something like a Synology, but when I compared prices, I realized I could build something much nicer for only a bit more. The prices reflect the cost when I built this PC nearly a year ago.

    Parts (prices in USD):

    • CPU: Intel Core i5-9400 2.9 GHz 6-Core Processor ($166.00)
    • CPU Cooler: Cooler Master Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler ($34.99)
    • Motherboard: ASRock Z390 Pro4 ATX LGA1151 Motherboard ($119.99)
    • Memory: Corsair Vengeance LPX 16 GB (2 x 8 GB) DDR4-3200 CL16 Memory ($59.99)
    • Case: Cooler Master N400 ATX Mid Tower Case ($49.99)
    • Power Supply: Corsair CXM (2015) 450 W 80+ Bronze Certified Semi-modular ATX Power Supply ($69.98)

    Notice I haven't included the drives since there is some flexibility here. As people have mentioned, ideally you want CMR drives. The speeds are not especially important – 7200 rpm isn't necessary when your primary use case is serving media. For what it's worth, I use WD Red Plus drives despite the controversy surrounding WD for mislabeling NAS-ready drives. (But note that while the WD Red Plus drives are CMR, the similar-sounding WD Red drives are not.)

    I would also recommend an SSD for an OS/cache drive.

    For the cpu, I chose Intel for its QuickSync technology. Processors with QuickSync support have a dedicated hardware block for encoding/decoding/transcoding media. Obviously transcoding capabilities should be a priority when building a media server. Because my cpu supports QuickSync, a gpu is unnecessary. But note that this cpu doesn't support ECC memory (nor does it support the maximum clock speed of my ram, for that matter).
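
    As a rough illustration, a hardware transcode through QuickSync with ffmpeg looks something like this (filenames and bitrate are placeholders):

        ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
            -c:v h264_qsv -b:v 4M -c:a copy output.mkv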

    For the cpu cooler, I bought the cheapest thing I could find. That said, I absolutely loathed installing this cooler – I felt like I was going to split my motherboard in two. Maybe splurge here a bit if you don't want to deal with the frustration.

    For the motherboard, I chose something with enough slots for ~6 drives, just so I'd have room to upgrade. But I also wanted something with good IOMMU groups in case I wanted to use my NAS as a remote gaming rig too. This has yet to come to fruition. You can find something cheaper if you sacrifice the IOMMU groupings, which probably isn't important for you anyway.

    For the case, I basically went with the cheapest one I could find that's able to hold ~6 drives. It's kind of an eyesore, though, and it wasn't the most convenient case to work with. I'd recommend upgrading to a Fractal case if you can afford it.

    Finally, for the memory and power supply, I was less picky, so I just went with generally recommended parts. I made sure the power supply would be beefy enough that I could possibly add a mid-tier gpu in the future.


    As an alternative to building a NAS from PC parts, you could of course buy a used rack server. I have less experience here (feel free to chip in, folks), but my understanding is that there are a few downsides that make this approach less desirable:

    • an old server will likely idle with much higher power usage due to the CPUs being less powerful and less efficient
    • you will (likely?) need a dedicated gpu for transcoding
    • a rack server will be much noisier
    2 votes
    1. tomf
      Link Parent
      that's pretty slick. I probably should have built something, but the cost of it was just too high for what this box will be doing.

      A proper NAS like this is so much better than the Synology (etc) ones. Having actual Linux is better in all ways.

      1 vote