Looking for feedback on a homelab design
I wanted some help with a homelab server I am in the beginning stages of designing. I am looking for a flexible and scalable media and cloud system for home use, and I thought this community would be a good place to source feedback and recommendations before taking any real next steps! I really want to check that I am approaching the architecture correctly and not making any bad assumptions. I am open to all feedback, so please let me know what you think!
I already run a simple home server with the typical homelab FOSS apps, such as Jellyfin, Navidrome, and Audiobookshelf, but I am also interested in migrating away from cloud storage using Nextcloud, Immich, etc. In an ideal world, this setup would also let me drop Windows on my main machine and use a Windows VM for business-related work that can't be done on Linux. I will likely be the one primarily using the services, but I could expect up to 10-20 users eventually.
High-level setup is two machines plus supporting hardware:
- Proxmox Server
- TrueNAS Scale server
- JBOD enclosure with either 90 or 45 bays
- 10G switch
This might be a stupid setup right off the bat, which is why I wanted to discuss it with you all! I have read a ton about running TrueNAS as a VM within Proxmox, but I just like the idea of different machines handling different tasks. The idea here would be to set up the TrueNAS server so it can be optimized for managing the storage pool and allow for easy growth, while the Proxmox server handles all the VMs and connecting users, with higher IO, etc.
TrueNAS System Specs:
- AMD Ryzen CPU and motherboard
- 64 or 128GB RAM
- Mirrored 500GB M.2 NVMe OS drives
- GPU if necessary, but hopefully not needed
- Dual 10GbE PCIe card if the motherboard doesn't already come with ports
- An HBA for the JBOD, something like the LSI SAS 9305-16e
- SLOG and L2ARC as necessary?
JBOD Enclosure:
- While I am interested in a 90-bay enclosure, I would only realistically be starting with two vdevs, which is why I think a 45-bay enclosure wouldn't be an issue.
- I'm tentatively planning an 11-wide RAIDZ2 vdev configuration (see the sketch after this list). This would hopefully scale to 8 vdevs with 2 hot spares in a 90-bay, or 4 vdevs with 1 hot spare in a 45-bay.
- All drives would be HDDs
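To make that concrete, here's roughly the pool layout I'm picturing in zpool terms - device names are placeholders (I'd use /dev/disk/by-id paths for real), so treat this as a sketch rather than a final design:

```
# Two 11-wide RAIDZ2 vdevs plus one hot spare - all names are placeholders.
zpool create tank \
  raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 disk11 \
  raidz2 disk12 disk13 disk14 disk15 disk16 disk17 disk18 disk19 disk20 disk21 disk22 \
  spare disk23

# Scaling later would just be adding another 11-wide vdev to the same pool:
zpool add tank raidz2 disk24 disk25 disk26 disk27 disk28 disk29 disk30 disk31 disk32 disk33 disk34
```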
Proxmox Server Specs:
I am less familiar with the specs I will need for a good Proxmox server, but here is what I am thinking.
- AMD EPYC CPU and motherboard if I can get my hands on a less expensive one. Otherwise I was thinking a higher-end AMD Ryzen CPU
- 128 or 256GB RAM
- Mirrored 500GB M.2 NVMe OS drives
- Somewhere between 2 and 8TB of SSD storage. Depending on the number of drives, I think this would be a single drive, a mirror, or RAIDZ1.
- This storage would hold all the VM configuration and storage, except for something like Nextcloud, where the main storage would go onto the TrueNAS mount.
- I would also use it for temporary storage, such as downloading a file before transferring it to the TrueNAS mount.
- A dedicated GPU, primarily for transcoding media streams, but also for testing and experimenting with different AI models.
- Dual 10GbE PCIe card
Questions:
- I know Proxmox can do ZFS right out of the box, so I know I don't need the TrueNAS server, but splitting it this way just seems more flexible. Is this a realistic setup, or would it just be better to let Proxmox do everything?
- Does anyone have experience creating NFS shares in TrueNAS for mounting in Proxmox? I would be interested in thoughts on performance and stability, among other insights.
- Do any of the system specs I listed seem out of line? Where and how do you think things should be scaled up or down?
- If I ever did expand to a second JBOD shelf, assuming the first one was full, would it be possible to create new vdevs that span across the shelves without losing data?
- Is SLOG and/or L2ARC necessary for this setup? What capacity and configuration would be best?
- What else have I missed?
Lastly, a quick blurb:
I have been building PCs for a while and undertook building a home server a few years ago. Learning Linux (the server is running Ubuntu), picking up Docker, and getting to know the FOSS community has been a joy! Part of this project is to learn along the way, but also to have a setup that I can build towards over time! Proxmox, TrueNAS, and ZFS would all be new to me, so I really see it as an opportunity to explore. I want a solid media and cloud server setup, while also giving myself the freedom to explore new operating systems and general hypervisor functionality.
I don't have much to offer in terms of the specific questions you asked, as I have not messed with TrueNAS/ZFS, but I have 5 machines running a Proxmox cluster at the moment.
If you wanted to bounce ideas off each other, or use one another as an off-site backup, let me know! I would not mind dedicating some space for you to replicate to within reason!
I've done a lot of work on it, and I have a symmetric 2-gigabit fiber connection.
I really appreciate it! I definitely will be reaching out as I get more into the Proxmox setup, but since I am still in the early stages of deciding how to set everything up, I am still a bit away from buying anything. I would definitely be down to set up some mutual off-site backups when I get around to that point, within reason of course.
Opinionated response:
Separate (hardware) NAS is always a good idea, in my opinion. Being able to access files if your primary server craps out is very nice. Especially if you're doing a lot of things you're unfamiliar with that could compromise the integrity of your systems.
Consider whether you truly need RAID. This is an unpopular opinion, but I have had nothing but issues with RAID setups. The most devastating was a power event that killed two separate software RAID1 volumes, and their respective drives, so I lost 4 actual hard drives as a result. Just consider that there are other ways of backing up files that truly count as backups (even rsync to another drive in a JBOD setup), other ways of versioning, etc.
Consider the M.2 setup and whether it's necessary. I used a solid M.2 drive (a moderately pricey Samsung model) but a perhaps questionable PCIe adapter. It overheated and I was sad.
Unsolicited advice:
While VMs are cool (and I'm currently setting one up myself to use as a devbox vs ssh'ing into separate systems on my network), containers can do so much. You mentioned docker experience so I won't ramble too much, but I use podman for just about everything and there are nearly zero cases I need a VM anymore.
Start from the bottom doing infrastructure as code with Ansible or another tool of your choice. It's a lot of overhead to get started, but I wouldn't want to kick off my containers any other way (it works like docker-compose on steroids), and setting up the previously mentioned devbox VM this way gives me confidence that I could recreate it from nothing in a few minutes should my house go up in flames.
With the concept of infrastructure as code, you may not need the overhead of something like Proxmox. I just interface directly with KVM from that Ansible script.
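As a rough sketch of what I mean, assuming Ansible with the containers.podman collection installed - the image tag, host group, and paths are just placeholders:

```
# Install the collection once: ansible-galaxy collection install containers.podman
cat > containers.yml <<'EOF'
- hosts: homelab
  tasks:
    - name: Run Jellyfin under podman
      containers.podman.podman_container:
        name: jellyfin
        # Pin the tag so a rebuild gives you the same image (placeholder version).
        image: docker.io/jellyfin/jellyfin:10.10.3
        state: started
        ports:
          - "8096:8096"
        volumes:
          - /srv/media:/media:ro
EOF

ansible-playbook -i inventory containers.yml
```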
I like VMs, and by extension Proxmox, as they offer a nice abstraction for backups. VM backup/restore procedures are consistent and very batteries included. IaC and podman are great, but then you have to write bespoke backup/restore procedures for each stack and often people make the blunder of A) not pinning their images B) not including the current on-disk images in the backup.
Are you thinking of non-ZFS filesystems when you're talking about RAID? I can't imagine it'd be particularly convenient to run a NAS with the shares limited to the bounds of single hard drives, but as soon as you're spreading data across multiple drives, you're in a situation where one drive failure loses everything unless you have RAID(Z) or mirroring set up. Proper offsite backups are a necessity, no question, but I'd consider some level of drive redundancy a necessity as well; otherwise you'll need to do a full restore (and deal with however many hours or days of data loss since the last backup) every time a drive fails.
I agree with you on infrastructure as code; I've never regretted the up-front time cost of having my configuration in one place where I can review/edit/diff it, although I'd want something state-aware like Terraform or Pulumi rather than Ansible. After putting in that up-front time, I like the software to guarantee that the system state matches what the config file says, and that the updated state after making changes matches in turn, rather than just using it as automation for a series of steps.
I am. I'm admittedly not as well versed in ZFS. I opted against it in my setup for a couple of reasons, mostly not wanting/having the RAM required to operate it optimally on my NAS.
I think it depends on your particular data set. My collection of "not personal media" is something I have no problem re-downloading if catastrophe strikes. I use the rather primitive mergerfs approach to split this collection across a couple of drives, so if one dies, I don't lose the entire collection at once. But as I said, data integrity is not a concern there.
It's nice because if one drive dies, I can replace it with whatever size, etc I want. Or even choose to not replace it and not have everything be down.
Software RAID solutions (and ZFS in particular) have become very popular because they solve a lot of the pain, headache, and reliability problems that come with hardware RAID controllers. It's gotten to the point where many are readily declaring hardware RAID dead. While it still sees use in many datacenter/enterprise deployments, it has become functionally nonexistent in the homelab scene.
It's worth noting that the memory requirements of ZFS are often grossly overstated. The "1GB per TB" rule you might be familiar with assumes more enterprisey workloads, like hosting big high-performance databases and/or dozens of concurrent SMB clients.
It's rare for home use cases to get that intense. I run my 60TB ZFS pool with only 8GB dedicated to ARC and that is likely still way more than it really needs to serve its purpose as a storage place for backups and my Jellyfin library.
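If you want to do the same on Linux, capping the ARC is just an OpenZFS module parameter - the 8GiB value here is only an example, not a recommendation:

```
# Persistent: cap ARC at 8 GiB (8 * 1024^3 bytes) at module load.
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf

# Immediate: apply to the running system without a reboot.
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```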
Well, the RAID I mentioned above that resulted in hardware damage was actually a Linux software RAID. It's very possible the same outcome would not have occurred with ZFS, but just providing extra context there.
(I've done hardware RAID prior and I would agree that it at least should be on the way out. So many issues.)
I vote against TrueNAS Scale after their Kubernetes rug pull. I personally moved to Unraid and it's... fine. But I'm not proud of it, and I'm actually rather annoyed by the UI-ness of it compared to a beastly 'inference-box' I built with plain-Jane Ubuntu Minimal, primarily because I embraced a simple gitops approach from the jump on that build.
Do you actually mean a 45-bay... for hard drives? I thought I was a mad lad with my Unraid box, a ten-bay case of which I use 5-6 slots.
FWIW, Btrfs plus parity drives is an interesting approach. Certainly simpler than the ZFS pools I ran on my previous SCALE NAS. I was too weak and fleshy to even figure out ZFS properly on my Ubuntu Minimal build.
I missed this, what did TrueNAS do with regards to Kubernetes?
https://truecharts.org/news/scale-deprecation/
https://www.truenas.com/docs/scale/24.10/gettingstarted/scalereleasenotes/
This turned out way longer than I intended, but I hope it's useful! There are a ton more details and nuances floating around in my head, so absolutely feel free to ask if there are specifics you want to discuss more.
I've had to do a kind of absurd deep dive into ZFS and the hardware/OS config/general NAS setup to run it recently, so happy to get as far into the details as you'd like! Something I've really found when searching is that a lot of the accepted wisdom is quite dated, even when it's being repeated in newer discussions. I've consciously gone against that in a few places, so if something doesn't seem to add up, just ask - it's 50/50 whether it's deliberate or whether I've just made a mistake.
Roughly how much data are you expecting to store? This seems like a lot, especially when 100TB usable is five or six 28TB drives (the currently optimal price-per-TB). You might want to go 16TB or so just to give yourself a bit more granularity for VDEV configuration, but even then you're pushing 200TB usable space in a 4U case alongside the NAS server itself, no JBOD needed.
For what it's worth, you want to be buying the recertified with warranty Exos drives - redundancy is one of the major purposes of a ZFS setup, and no individual user is buying enough drives to be a statistically significant sample anyway, so as long as you're covered against DOA and early failure it's really not worth worrying about the difference between new and refurb/recertified. I've even seen some people suggesting that recertified have a lower failure rate due to the extra rounds of testing, although I haven't seen enough data to corroborate that.
If you're thinking of more drives for better throughput, I'm going to make a potentially controversial point and say that (within reason) you shouldn't be worrying about the HDD speed on a modern NAS. SSD and RAM caching are multiple orders of magnitude faster, and cheap enough that it's just not worth trying to optimise the bits of the system that are always going to be limited by a physically moving arm and platter.
ZFS loves RAM, so more tends to be better, but you can always start at 64GB and see if you actually hit any bottlenecks from cache misses first.
You'll probably want to run the memory at lower than its rated speed, too. This isn't a problem - you will absolutely never notice a difference in memory speed when serving files unless you're also running $10k in networking hardware - but Ryzen only has two memory channels, so four DIMMs means running two per channel (unless you're planning on 64GB DIMMs, but last I checked they were way too expensive compared to 32GB), and that tends to get unstable if you don't knock the speeds down a bit.
You'll also want to consider the PCIe layout of your motherboard, and especially look out for one with bifurcation support, because that'll let you drop in a whole lot more NVMe drives as and when you need them.
There are really good bundle deals on eBay at the moment for a Supermicro H11SSL-i or H12SSL-i along with RAM and an Epyc Rome (7002 series) or Milan (7003 series) CPU. I bought one a month or so ago and it's been great so far - I'd say it's worth paying a little extra to go for Milan or newer, Rome is getting pretty dated at this point. You can also save a little by getting a Gigabyte board rather than Supermicro, but everything I could find said they're incredibly picky about PCIe devices, whereas the Supermicro ones just work (which I can attest to). Shipped from China, which is the only thing that might be an issue if you're in the US.
Separate NAS is always a good idea in my opinion. Proper separation of concerns and the ability to add/repurpose more machines later is more than worthwhile for a setup like this.
You could always run Proxmox on the NAS box as well, just with a single TrueNAS VM inside it, if you want the extra layer of abstraction for system backups etc. - I'd consider this if using TrueNAS, but I'm using NixOS so I'm less worried about losing system installs or OS config. If I were going down that route I'd still let TrueNAS handle the ZFS side, rather than giving it to Proxmox. But honestly as long as you have config backups for TrueNAS I'm probably overengineering things by even suggesting virtualising it.
If you're sharing sensibly sized files (few megabytes to tens of gigabytes) I'd expect this to be fine - large numbers of much smaller files are where you start getting problems. I've found that NFS pretty much just works, and has decent throughput, but does add enough latency that you'll notice it when you need to do something like recursively list a directory tree with a few tens of thousands of files and subdirectories.
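For what it's worth, registering the share on the Proxmox side is a one-liner, if I remember the flags right - storage name, IP, and export path here are all placeholders:

```
# Register a TrueNAS NFS export as a Proxmox storage backend.
pvesm add nfs tank-vmdata \
  --server 192.168.1.10 \
  --export /mnt/tank/vmdata \
  --content images,backup \
  --options vers=4.2
```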
L2ARC: yes. You don't want to be waiting for hard drives if you can reasonably avoid it, and you definitely don't want to be waiting for hard drives to look up metadata. Ideally you'll be hitting ARC rather than L2ARC more often, but even 64-128GB in comparison to HDD sizes means that things definitely will be pushed out relatively frequently, and 2TB NVMe really doesn't add much cost in the context of the hardware you're talking about here. If you find cache misses are a bottleneck once you get a feel for things, it's easy to add more NVMe drives later as long as you have the PCIe lanes for it.
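And L2ARC is non-destructive to add or remove, so there's no pressure to get it right up front - pool and device names below are placeholders:

```
# Attach an NVMe drive to the pool as L2ARC cache.
zpool add tank cache nvme0n1

# Changed your mind? Cache devices can be removed at any time.
zpool remove tank nvme0n1
```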
You'll see a lot of people online saying SLOG is unnecessary because it's only required for sync writes (async writes are acknowledged as soon as they're cached in RAM, and are flushed to persistent storage every 5 seconds or less by default, whereas sync writes are only acknowledged after they're in persistent storage). What they don't tend to mention is that NFS defaults to using sync writes for everything!
Your options here are:
- Use the async flag when mounting the NFS shares. In the worst case this can cause data loss in case of network disconnection, power outage, or system failure, because NFS on the client system will confirm writes before they've been written by the server.
- Add a dedicated SLOG device, so sync writes can be acknowledged as soon as they hit fast persistent storage.

But there is a major caveat on the last point: consumer SSDs don't actually write things persistently in the way most people expect. The faster ones have an internal DRAM cache, which is itself volatile until the drive controller can flush it to the NAND. For most consumer-grade drives, this means the latency on SLOG writes is a lot higher than you'd expect, because ZFS is waiting for the NAND write rather than the DRAM write: still a decent amount faster than a HDD, but nothing like the snappiness you get from a "normal" filesystem that just assumes the drive's DRAM cache is safe enough to consider successfully written.

For some drives, the controller will falsely report to ZFS that data was written when it's actually in the cache: this is much worse, because ZFS can deal with known data loss or corruption, but isn't built with an expectation of drives giving incorrect information. A power loss on the latter type of drive could theoretically leave ZFS itself in an inconsistent state.
If you're going for a SLOG drive, make sure it's an enterprise grade one with PLP (power loss protection) - those can correctly acknowledge writes even when they're in the DRAM cache, because they have an onboard capacitor that will allow the controller to flush the DRAM to NAND even in the case of total power loss, and without any interaction from the filesystem or OS. You'll see a lot of discussion about Optane drives, which were basically this setup with an unusually large (for the time) cache, but those haven't been in mainstream production for years and are now even hitting the end of extended enterprise support. Modern drives have large enough caches and fast enough flash that I don't think it's worth considering Optane, although if another manufacturer picked them up and made a modernised and supported version I'd be interested!
Micron and Kingston both make M.2 2280 NVMe drives with PLP, or you can look at used 2.5" U.2 enterprise drives and get an adapter to connect one to a PCIe or M.2 slot. Either way, you'll only need a few tens of GB for a SLOG drive, so pretty much anything on the market will be fine in terms of size - since SLOG is inherently a write-heavy workload, you can consider the extra space to be overprovisioning for wear levelling.
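Mechanically it's simple once you have the drive - pool and device names below are placeholders, and mirroring the SLOG is optional but cheap insurance:

```
# Attach a mirrored log vdev (SLOG) to an existing pool.
zpool add tank log mirror nvme1n1 nvme2n1

# SLOG only ever sees sync writes; check what a dataset is set to with:
zfs get sync tank/vmstorage
```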
This is such an excellent write up, thank you for taking the time to type it all out! I absolutely will need some help with the zfs setup so when I get to that stage I will certainly be reaching out.
I know that a 90-bay or 45-bay enclosure is a lot, and I am the first to say that I will likely never use the entire capacity, but I have been burning through ~40TB embarrassingly quickly and I thought it would just be a good exercise to think through what a true "future proof" system would look like. Also, the price difference between larger and smaller JBOD cases usually seems pretty negligible, probably since most of the cost of a system like this is the drives themselves, so it never struck me as practical to limit purchases to smaller bays.
I absolutely will be buying recertified or refurbished drives. I don’t see any reason to shop new.
The vdev configuration is less about increasing throughput and more about chopping up the drives in a way that makes sense - quite a bit of online literature essentially points away from making extremely wide vdevs. I had thought about doing 6-wide Z1 vdevs, but that alternative seems more risky even though the overall parity is roughly the same. The only real downside I see for my setup is that I need to buy 11 drives for each new vdev, which is quite a lot. I'm no expert here though, happy to hear your thoughts. I was using this tool to help visualize failure rates: https://jro.io/r2c2/
On RAM, that was exactly my thought. 64GB DIMMs are way too expensive for whatever reason so I was thinking of starting with two 32GBs and then expanding as necessary.
Thanks for the information on the Supermicro motherboards. I have definitely been thinking Rome would be a little too out of date, but I'm glad to hear others are thinking the same.
SLOG and L2ARC: This information is invaluable, thank you for the write up. Finding objective and nuanced information online about SLOG and L2ARC has been… challenging to say the least. It seems to be polarizing in a way that doesn’t make sense.
Very glad it was helpful! And I've definitely come across that same challenging feeling when searching for info, so I'm pleased to have had a reason to get it all written down in one place.
Totally fair on the JBOD side; I'm limited by rack space, and even more limited by floor space for another rack, so that's probably colouring my thoughts even when I'm trying to be more general. I think I had a bit of an impression that you were worried about overflowing a 45 bay shelf, but it doesn't sound like that's the case!
Similar on the VDEV layout, I think I'm naturally assuming large drives just because I'm always keeping compactness in mind, so when you say 11 drive VDEVs starting with two and scaling to four or eight, I hear half a petabyte to start, possibly growing to 2PB. That's a pretty serious installation even by large organisation standards (cough Korean government), and a solid few thousand euros/dollars/pounds if you need to add a VDEV, but if you're looking at old 4TB or 8TB SAS drives it's a whole different ball game. I'd probably still lean towards smaller VDEVs with bigger drives just because they're likely to be newer, but that's likely just bias creeping in on my side.
That failure visualisation site is cool, by the way, I hadn't come across it before! I think what I'd say there is just to keep in mind the externalities. You're not worried about the probability of failure per se, you're worried about the probability of multiple failure in the day or two it takes for a replacement drive to be delivered, and even with a "bad" VDEV layout the numbers are so low that you want to be looking at them alongside the chances of losing the server as a whole to a faulty PSU or burst water main or lightning strike on the power line. A single hot spare takes delivery time out of that equation and stacks the odds even more your way.
There's certainly no harm in thinking about drive failure rates, or in optimising against them to a degree, but (and I say this as someone very prone to bikeshedding, who needs to hear it myself!) if you get a multiple simultaneous drive failure it's more likely to be because a meteorite hit your server and you need to restore from offsite backups anyway.
Regarding SLOG...
The same writes are happening to long-term persistent storage (i.e. spinning drives) whether you have a SLOG or not, though? I'm not sure how it could make a difference to the noise and vibration.
It's a good question! Everything actually does get written twice for sync writes - first to the ZIL (ZFS intent log, this is the log you're separating out with a SLOG drive, i.e. a separate log), and then to the actual ZFS filesystem properly. Whether or not a SLOG drive is being used, the ZIL exists so it can be written as fast as the underlying storage device will allow, so that the sync write can be acknowledged and the program that's waiting on it can continue - it's basically a dumping ground for ZFS to put raw data into and confirm the write without having to stop and think about it, before then figuring out where it goes in the pool and writing it properly afterwards.
If the ZIL is on a spinning disk, the head has to flick back and forth constantly between the ZIL area of the disk and the actual zpool area of the disk, and because sync writes are latency sensitive it really is constant - it keeps getting pulled back to the ZIL millisecond-by-millisecond as new writes come in, while still trying to keep up with the overall pool writes (including the "real" writes for the data that was temporarily dumped in the ZIL) which pull the head back to a different area of the drive.
A SLOG drive removes this issue of a physical head being pulled back and forth on the same device, and on top of that means that the only writes going to the spinning disks at all are clean, ready-organised sets of blocks that get flushed every few seconds - as opposed to many thousands of latency sensitive writes that need to be acknowledged immediately even if they don't nicely line up with the underlying filesystem state. So you've got 1x the data written to spinning disks rather than 2x, which is a big win to start with, and then you've got something like 0.1x or 0.01x the number of write requests to get that same amount of data into place because they're being batched efficiently.
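If you ever want to see this in action on your own system, watching per-vdev stats makes the split obvious (pool name is a placeholder):

```
# Per-vdev I/O statistics, refreshed every second - with a SLOG attached,
# the log vdev shows up as its own line absorbing the sync write traffic.
zpool iostat -v tank 1
```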
And I can say from personal experience that it's really noticeable! I had an absolute "oh shit" moment when I installed a new batch of HAMR drives into a machine and it became so loud I could hear it in another room - I was worried they just ran that much louder than the non-HAMR ones I was replacing, and I was going to have to figure out some kind of janky soundproofing, but it turned out I'd just messed up the sync config. Without the constant stream of ZIL writes hitting them, the sound is barely noticeable even close to the machine, and if you do listen out for it it's a little "blip" every few seconds rather than a continuous (much louder) rattle.
Good info on SLOG and ZIL here, if you're interested: https://www.truenas.com/docs/references/zilandslog/
Honestly... I would take a step back and ask yourself if this is something you really want to do.
I've gone through ebbs and flows with homelab stuff, because some of what it does is useful (Home Assistant, mostly), but other things become a huge pain in the ass at inconvenient times.
Right now I run a kind of complicated setup with Proxmox hosting my pfSense firewall and a few other things that require me to pass network interfaces around, tag ports with different VLANs, and so on.
It's kinda grown that way to meet a bunch of different needs and it makes sense if you understand it, but if I don't pay attention to it for a few months and something suddenly breaks, I'm banging my head against the wall for hours trying to fix something.
The advice you get online about how to prevent stuff like that is often misguided too, and only works in a theoretical perfect world where your setup is exactly the same as the developers of thousands of little components envisioned when they were writing the components your lab relies on.
So my advice would be to take a step back and ask yourself if investing a lot of ongoing time and money into this project is really worth it to you. Like, is it something you'll continually enjoy doing even if a lightning strike knocks out a NIC, or a volume you accidentally weren't monitoring fills up and bricks your hypervisor, or a bug in some random component causes your whole setup to crumble and you have no idea why? Each of those things is a minimum of several hours to fix, and if your household relies on those services, it's several hours of not having them.
Personally, I'm very hot and cold about it. Sometimes I love messing with the stuff; other times, I wish I'd ripped all of it out, used a Netgear router, and just paid for Apple cloud storage.
I definitely respect this opinion. I have had my fair share of "driver issue causes complete system meltdown leading to multiple days of troubleshooting", but I will say this is mostly for fun, and at least for now, the data and services I'll be running aren't mission critical. For example, I completely rejected the idea of hosting Vaultwarden on my current server because I didn't want my entire password manager to go up in flames if I didn't configure something correctly.
If you're having trouble with NFS in terms of reliability, try comparing the speed with SSHFS or even rsync. Maybe you don't really need NFS.
I combine plocate and sftp/scp/rsync to search across many computers and it works well. I use it multiple times per day:
https://github.com/chapmanjacobd/computer/blob/main/bin/locate_remote_mv.py
SSHFS is also fast enough once you figure out the right config (setting max_conns to something between 8 and 40 is really important! Essentially, it's how many files you expect to access at the same time - but don't set it to something ridiculous like 400 either, as I imagine the overhead can get big).
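For reference, a mount along the lines I mean - host and paths are placeholders, and max_conns needs a reasonably recent sshfs:

```
# Mount a remote directory over SSH with multiple connections for parallel access.
sshfs -o max_conns=16 -o reconnect user@nas:/tank/media /mnt/media

# Unmount when done.
fusermount3 -u /mnt/media
```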
My initial thought is that your setup here sounds like overkill. For context: I have almost all of the systems you mention running on a Beelink Mini S12. Plex can handle transcoding multiple 4K video files to 1080p on the fly after I passed the Intel iGPU through to the container in Proxmox. The only thing I don't have that you mentioned is TrueNAS. Instead, I set up Samba and NFS to share a USB hard drive on my Raspberry Pi 4B router running OpenWrt. I don't recommend mimicking that last part, but it illustrates that for your use case you are going far above what you need.
I would highly recommend going Intel over AMD for any system that will be doing transcoding. You can save the video card for the AI models. I personally use an Nvidia Tesla P4 for that, which I got on eBay for ~$100. I don't know if you can tell, but I'm very frugal when it comes to my homelab, perhaps to the point of absurdity. I haven't actually built a new PC in over 10 years.
If you're okay with spending the money and having a truly future proof setup, then you've definitely got that.
This is what I see at the clients I work with. I cut my teeth managing VMware environments and deploying backups. It was typically:
VMware Hosts > NAS/SAN (depending on who/size) < Veeam
I haven't finished mine, but I'm looking at an identical setup to use my Raspberry Pi NAS to host VMs, etc., over NFS.
The only issue is I don't think you can live-migrate (vMotion-style) between Proxmox hosts with the free version, but if you only have one host it's nbd.
The host specs seem fine. I got myself a NUC clone to run my stuff on, but I'm running light loads, hoping to play with tools my team is working to support at work (Kubernetes, increasingly Ansible, etc.).