    ZFS is crud with large pools! Give me some options.


      Hey folks

      I'm building out a backup NAS to hold approximately 350TB of video. It's a business device. We already own the hardware: a Gigabyte S451 with twin RAIDed OS SSDs and 34 x 16TB SAS HDDs. I used TrueNAS Scale with ZFS as the filesystem because... it seemed like a good idea. What I didn't realise is that with dedupe and LZ4 compression on, it would seriously hinder the IO. Anyway, long story short, we filled it as a slave target and it's as slow as can be. It errors all the time. I upgraded to 48GB of RAM, as ZFS is memory intensive, but it's the IOPS that kill it. It's estimating 50 days for a scrub; it's nuts. The system is in RAID6 with all disks dedicated to this pool, which is correct in so far as I need all the disk available.
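      Before nuking anything, it's worth confirming what the pool actually has enabled. A minimal sketch, assuming a hypothetical pool/dataset called tank/backups (substitute your own names); note that turning dedupe off only affects newly written blocks:

      ```shell
      # Hypothetical names -- adjust to your pool/dataset layout.
      # Check what's currently enabled:
      zfs get dedup,compression tank/backups

      # Turn dedupe off going forward. NOTE: existing data keeps its
      # dedup table (DDT) entries until rewritten, so this alone won't
      # recover the lost IOPS on data already on disk.
      zfs set dedup=off tank/backups
      zfs set compression=off tank/backups

      # Watch pool health and the scrub estimate:
      zpool status -v tank
      ```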

      Now I know ZFS isn't the way for this device, so any ideas here? I'm tempted to go pure Debian with XFS or ext4 on software RAID. I'd rather it be simple and managed via a GUI for the rest of the team, as they're all Windows admins and scared of the CLI, but I suppose I can give them Webmin at a push.
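      For reference, the Debian + mdadm + XFS route mentioned above looks roughly like this. This is a sketch only; the device names are placeholders, so verify with lsblk before running anything:

      ```shell
      # Placeholder device names -- a 34-disk RAID6 via md softraid.
      mdadm --create /dev/md0 --level=6 --raid-devices=34 \
        /dev/sd[b-z] /dev/sda[a-i]

      # Format with XFS and mount:
      mkfs.xfs -L backup /dev/md0
      mkdir -p /mnt/backup
      mount /dev/md0 /mnt/backup

      # Persist the array and the mount across reboots:
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      echo 'LABEL=backup /mnt/backup xfs defaults,noatime 0 2' >> /etc/fstab
      ```

      One caveat with a single wide RAID6: rebuild times on 16TB disks are long, so many people split into multiple smaller arrays instead.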

      I'm about ready to destroy over 300TB of copied data and rebuild this, as it crashes too often to do anything with, and the Restic backup from the Master just falls over waiting for a response.

      Ideas on a postcard (or Tildes reply)...?

      UPDATE:

      After taking this thread into consideration, doing some of my own research, and using a ZFS calculator, here's what I'm planning. Bear in mind this is for an archive NAS:

      36 disks at 16TB:

      9 x 16TB per RAID-Z2 vdev
      4 vdevs = data pool
      Compression disabled, dedupe disabled.
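      The layout above could be created along these lines. A sketch only: the pool name and device paths are hypothetical, and in practice you'd use stable /dev/disk/by-id paths rather than sdX names:

      ```shell
      # Hypothetical pool name and placeholder devices: 4 x 9-disk RAID-Z2.
      zpool create tank \
        raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi \
        raidz2 sdj sdk sdl sdm sdn sdo sdp sdq sdr \
        raidz2 sds sdt sdu sdv sdw sdx sdy sdz sdaa \
        raidz2 sdab sdac sdad sdae sdaf sdag sdah sdai

      # Match the plan: no compression, no dedupe.
      zfs set compression=off tank
      zfs set dedup=off tank
      ```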

      Raw capacity would be 576TB, but after parity and slop space we're at 422TB usable. In practice, if we keep the pool about 80% full, I'm actually going to cry at 337TB total.
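      Those figures can be sanity-checked with simple arithmetic (decimal TB, ignoring the TB/TiB conversion and metadata overhead): each 9-disk RAID-Z2 vdev contributes 7 data disks, and the 337TB figure corresponds to keeping the ~422TB usable pool about 80% full:

      ```shell
      # 36 disks of 16TB; each 9-disk RAID-Z2 vdev has 7 data disks.
      raw=$((36 * 16))              # total raw capacity
      data=$((4 * (9 - 2) * 16))    # capacity after RAID-Z2 parity
      usable=422                    # after slop space, per the ZFS calculator
      at_80=$((usable * 80 / 100))  # keeping the pool ~80% full

      echo "raw=${raw}TB parity-adjusted=${data}TB 80%-full=${at_80}TB"
      # -> raw=576TB parity-adjusted=448TB 80%-full=337TB
      ```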

      At this moment, the NAS1 server is rocking 293TB of used space. Since I'm using Restic to do the backup of NAS1 to NAS2, I should see some savings, but I can already see that I will need to grow the shelf soon. I'll nuke NAS2, configure this and get the backup rolling ASAP.
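      For what it's worth, the Restic side of the plan might look like this. The repo path, host, and source directory are hypothetical; the point is that Restic dedupes (and, on repo v2, compresses) at the repository level, which is one more reason to leave ZFS dedupe off on the target:

      ```shell
      # Hypothetical: NAS2 exposes a repo over SFTP, NAS1 data is /mnt/data.
      export RESTIC_PASSWORD_FILE=/root/.restic-pass

      restic -r sftp:backup@nas2:/tank/restic-repo init
      restic -r sftp:backup@nas2:/tank/restic-repo backup /mnt/data

      # Check how much the repo-level dedupe actually saves:
      restic -r sftp:backup@nas2:/tank/restic-repo stats
      ```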

      My bigger concern now is that NAS1 is set up the same way as NAS2, but never had dedupe enabled. At some point we're going to hit a similar issue.

      Thank you for all of your help and input.
