What are you self-hosting currently?
I recently discovered Paperless-ngx and immediately fell in love. I must now decide whether to host it on my VPS (risky with personal documents), on a Pi at home, or finally invest in a proper home server (something cheap but with a bit more power than a Pi 4). It can totally be run on a Pi, but performance may not be as good.
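For reference, the whole thing is a fairly small Docker Compose stack; a minimal sketch of the SQLite flavour, assuming Docker is already on the box (folder layout and port are arbitrary choices, loosely following the project's example compose files):

```
mkdir -p ~/paperless && cd ~/paperless
cat > docker-compose.yml <<'EOF'
services:
  broker:
    image: docker.io/library/redis:7
    restart: unless-stopped
  webserver:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    restart: unless-stopped
    depends_on:
      - broker
    ports:
      - "8000:8000"
    environment:
      # no PAPERLESS_DBHOST set, so it falls back to SQLite
      PAPERLESS_REDIS: redis://broker:6379
    volumes:
      - ./data:/usr/src/paperless/data
      - ./media:/usr/src/paperless/media
      - ./consume:/usr/src/paperless/consume
EOF
docker compose up -d   # web UI on http://<host>:8000
```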
Does Tildes have a big self-hosted community? What are you self-hosting currently, and what do you enjoy about it?
I run a fairly low-power homelab with 4 ESXi hosts (on various NUCs), a Kubernetes cluster on 4x RPi4s, and a few other odds and ends.
The NUCs run a handful of VMs, some with bare installs and others just split up as Docker hosts. They generally host:
The Kube cluster is 1x master / 3x worker nodes with Longhorn storage backing and a Traefik proxy:
Misc rpi3b+
All data is hosted on a FreeNAS install w/ 42TB of primary storage in raidz2 and 6TB of mirrored backup storage, used for everything currently running. Various personal machines auto-backup to the backups share, along with various other services which take snapshots as well. I have a 4-drive Synology used as an offline backup w/ 42TB in SHR1, onto which I periodically sync everything off the TrueNAS instance. Nightly I have Borgmatic sync the majority of the backups share into my B2 backup buckets.
I also self-host my own mastodon instance, but not on my home network.
As to the why of all of the above? I like tinkering, and I've found each of the things I host important to different parts of my life. I enjoy the ability to control what I'm dependent on.
I followed the home lab subreddit for a while and yet you just introduced me to several new projects! Thank you!
I've never run into any DNS issues internal to the k8s cluster. Both Pi-holes are configured using Cloudflare as the upstream via DoH. My network advertises both the primary (RPi3) and secondary (k8s). All devices, aside from the k8s nodes, are forced through the primary Pi-hole for DNS requests if they refuse to use either of the advertised ones.
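For anyone wanting to replicate the DoH upstream part, the usual shape is to run cloudflared as a local DNS-over-HTTPS forwarder and point Pi-hole at it; a rough sketch (the port is just the conventional choice, adjust to taste):

```
# run cloudflared as a local DoH forwarder on an unused port
cloudflared proxy-dns --port 5053 \
  --upstream https://1.1.1.1/dns-query \
  --upstream https://1.0.0.1/dns-query

# then, in the Pi-hole admin UI, set the custom upstream DNS server to 127.0.0.1#5053
```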
That does make sense and is probably something I've just avoided, because my intention was always to have the Pi-hole on the k8s cluster as a backup. My k8s cluster started on RPi3B+s, so my usage of it was always light and I left critical-path things out of it. Now that it's on RPi4s I've put heavier stuff on there, but nothing as critical as DNS where I don't have redundancies built in.
I replied to your earlier post; it sounds like you might have run into a similar problem where the service routing iptables rules were inadvertently commandeering your DNS traffic and routing it to a PiHole container that wasn't running.
Ultimately I kept having weird problems knock the whole network offline with it in K3s, so yeah, I'm with you on that. Dedicated VM, or maybe I'll get an actual Pi for it.
I ended up giving up on PiHole in K8s, so I'm curious about this. In my case I tried to isolate the worker nodes by having them get their DNS from the router, whereas all my other devices got theirs from the PiHole, to prevent a circular dependency of needing DNS to get DNS, but I hit a snag. The Linux distributions I was using relied on systemd-resolved for DNS, which listens on 127.0.0.53:53; however, the network fabric K3s uses by default (Flannel) configured iptables in such a way as to consume all port 53 traffic. So basically, the PiHole service interrupted my normal OS DNS resolution. I don't remember the exact details, it's been a few months, but I had to finagle some iptables rule edits after K3s/Flannel came up to make it stop doing that. I hadn't gotten around to persisting those changes, and after a power outage caused everything to deadlock and left me with effectively no internet until I manually untangled it, I sort of rage-quit PiHole in K8s.

I wonder if there is a homelab community of sorts on here. I use a tiny mini micro cluster running Proxmox and, for now, just HA, Pi-hole, Plex, and the *arrs. I plan to expand to a proper NAS when I move.
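For anyone who hits the same port-53 capture described above, a rough diagnostic sketch, assuming systemd-resolved on the nodes (the interface name and upstream resolver are placeholders):

```
# see which kube/flannel-generated NAT rules are grabbing DNS traffic
sudo iptables-save -t nat | grep -w 53

# check what the node itself is actually using for resolution
resolvectl status

# as a stopgap, point the node's own lookups at an explicit upstream
# (persist it via DNS= in /etc/systemd/resolved.conf if it helps)
sudo resolvectl dns eth0 1.1.1.1
```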
My NAS is a Raspberry Pi 4 with a 1 TB SSD connected to it. I am consistently amazed at how much I can throw at it. As of now, it's running several Reddit bots and a plethora of *arr services, along with Transmission and a Jellyfin server. It sweats a bit when a lot is going on with Transmission, but it does just fine with Jellyfin (obviously no transcoding).
The Pi 4 is pretty fantastic. I'm running three FoundryVTT instances off mine and the server only really struggles when I'm making huge changes with the modules I'm running.
How are you managing storage with this setup? I kinda want to make the cluster use ceph or something also but doing mini PCs kinda prevents that.
My storage is kind of a mess. I’m in the middle of a move and plan to revamp it to a proper NAS that each node is mapped to. Right now I’m just using individual SATA HDDs for plex with no good redundancy or backing up.
Ah, yeah, I want to run Plex on the cluster, but then you need a NAS to store it all, which kinda removes the high availability. I think it's still the way to go, but maybe set up the configs for the applications to be stored in Ceph or something.
Nice, that sounds close to the route I am thinking of: a NAS for big Plex things and some storage service on the cluster for smaller things like their configs. I don't like the thought of the NAS not being highly available, but it is a home lab. I am also planning to buy a Synology NAS with redundant power, so it should be pretty optimal I'd imagine.
Currently hosting the following on my Unraid server:
I also have a few VMs running for various tasks/apps that I found easier to run in Linux than in a Docker container.
I'm one of the mad folk who run a Kubernetes cluster at home (there are dozens of us!). I use it for all kinds of stuff like the standard *arr stack, my password manager (vaultwarden), home-assistant, and a bunch of little tools and bots that I made.
You running multiple physical nodes?
Not yet, my setup is a little bit gross at the moment - one physical box running Proxmox, with multiple VMs on top for Kube. Definitely hoping to move to multiple physical nodes ASAP though! I've already got rack space reserved for them ;)
Hell yeah. I’m actually starting to look at buying my first house and every place I look at I’m having to consider where the future server room is going to be. Lol
I don't have a separate space to put a rack in, but I ended up getting (very) lucky on a secondhand apc netshelter cx. It's soundproofed and looks pretty much like a normal cabinet (though bulky), and it's the only way I was able to get away with putting it in a corner of the living room 😅
Are you using your *arr stack with Jellyfin or Plex, or something else?
I currently have both Emby and Jellyfin running, though Emby is the one that sees the most use. I need to work out a few kinks but I'm hoping to move to Jellyfin entirely.
I'm fairly new to homelab stuff.
I self-host a Unifi Network Server on a NUC.
Learned about the Plex Media Server and recently got that on there as well.
Will probably build a NAS with sufficient storage once I build a new PC soon!
Didn't realize you can self-host UniFi, that's awesome! I just picked up a Dream Machine SE and love the UniFi interface. Running it with a 4TB drive and 4 Ubiquiti cameras.
I have a Hetzner auction box in DE[1] which has a bunch of stuff:
Authelia - Auth for all self hosted services with 2FA
Caddy - Simple reverse proxy with minimal config
qBittorrent - BitTorrent client
Plex - Media server
tautulli - Plex monitoring
Sonarr - Media management
Radarr - Media management
Jackett - Add sources not supported out-of-the-box by Sonarr and Radarr
Overseerr - Media request management
Homepage - Dashboard for all the services + some quick links
Attic - Private + public Nix binary cache (useful for my devices and CI pipelines for things that aren't cached by default in Nixpkgs)
Notado - Content-first social bookmarking and highlighting service (I maintain and host this project)
Kullish - Cross-forum comment aggregation service (I maintain and host this project)
I maintain and manage all of this with a NixOS flake as of earlier this year, which has made things so much easier compared to the previous fragile container-based setup I had!
[1]: Because Hetzner peering is so bad on the US West Coast, I route Plex through Cloudflare (with caching disabled via a rule so as to not violate the TOS):
This also has the added advantage of not requiring me to open up the Plex ports on the firewall.
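A sketch of what the Caddy side of that can look like (the hostname is a placeholder; Plex listens on its default port 32400, and the cache-disabling rule itself lives in the Cloudflare dashboard rather than in Caddy):

```
# append a site block for Plex and reload Caddy
sudo tee -a /etc/caddy/Caddyfile >/dev/null <<'EOF'
plex.example.com {
    reverse_proxy 127.0.0.1:32400
}
EOF
sudo systemctl reload caddy
```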
Nothing as cool as other people here, but I've been running a Plex server on my computer for a while to watch my favourite shows and movies; so much more convenient than using streaming services.
Maybe I'm getting old, but I'd be uncomfortable trusting important personal documents to such a system.
Offsite hosting is, as you say, risky for identity theft. Hosting within your premises is one accidental "delete" click away from losing everything (I get that backups are a thing, but they'd also have to be within your premises, and checked regularly for recoverability, which nobody does).
And then if some organisation demands the "original" document, you don't have it any more.
I think I'll stick with my wads of paper for now.
Most mainline backup tools these days (that I'm aware of) come with encryption built-in and enabled by default. That should take care of the risk in offsite storage, right?
Plus, you should be using encryption at rest on your own VPS anyways. It’s not hard to set up a LUKS partition…
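For the curious, the LUKS part really is only a handful of commands; a minimal sketch (device node, mapper name and mount point are placeholders, and luksFormat destroys whatever is on the device):

```
# one-time setup: encrypt the block device and put a filesystem on it
sudo cryptsetup luksFormat /dev/sdX          # WARNING: wipes /dev/sdX
sudo cryptsetup open /dev/sdX docs           # prompts for the passphrase
sudo mkfs.ext4 /dev/mapper/docs
sudo mkdir -p /srv/paperless
sudo mount /dev/mapper/docs /srv/paperless

# to lock it again:
#   umount /srv/paperless && cryptsetup close docs
```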
I've recently upgraded my little Raspberry Pi to a full-on Ubuntu server. I've been creating an entirely bespoke system from scratch using PHP, which I access via WireGuard. I need to do a full breakdown of it at some point. It serves everything from TV shows, music and movies, and tracks calories, weight, card collections and numerous other random things.
I've got a couple different computers running several things, everything is tailnetted together and I have a VPS that acts as my public IP and funnels traffic to my home. Honestly, I love Tailscale
My main stack looks like:
Ubuntu 22.04 LTS Server box that lives in my basement closet
OpenSUSE Tumbleweed running on the Framework laptop 11th gen
Debian 11 box
HTPC running Windows 10
Love Paperless, I originally started using it before -ng and -ngx were a thing, but thank the deity of your choice that it's been taken up for continued development with those branches and not left abandoned. Paperless was actually the first service I ever self-hosted, and it led me to Docker for the first time.

This is Vergil, photo about 2 years outdated. It's a Supermicro 846E16-R1200B chassis with a 846EL1 backplane. It runs headless Debian, my Linux OS of choice over the last 15ish years. The OS runs on a 1 TB NVMe drive; that was an experience, as I had to modify the BIOS for my Supermicro motherboard because it can't boot from NVMe by default. The storage array is 2x ZFS raidz2 pools of 6x 12 TB drives, for a total of 110 TB of disk space. I'm at about 65% capacity right now.
Vergil hosts a number of things, mostly in docker:
I also rent a small Linode I've had for like, 12+ years now, that has virtually no resources but sits on a massive internet carrier's backbone, so speeds are insane (Typically around 5 GB/s down, 2 GB/s up). That one runs:
I also have a neat little Ubiquiti network stack I've polished up over time.
This is how it started
This is how it's going
I custom-designed that 2U Pi panel and had it 3D printed. Those are two Pis running PiHole (in Docker), so I can have it highly available. I use a script I wrote to force captive DNS to the PiHoles, even if a client does not want to use them (Google Home devices, for example, which have hard-coded DNS).
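The captive-DNS trick is usually a couple of NAT rules on the router/gateway; a rough sketch of the idea (the interface and Pi-hole address are placeholders, not the poster's actual script):

```
# rewrite any LAN DNS query not already headed to the Pi-hole so it lands there anyway
# (also exclude the Pi-holes themselves as a source if they use plain-DNS upstreams)
iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 ! -d 192.168.1.2 \
  -j DNAT --to-destination 192.168.1.2:53
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 ! -d 192.168.1.2 \
  -j DNAT --to-destination 192.168.1.2:53
```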
It's ever a work in progress, but it's come a long way, and I'm relatively happy with it!
Woah, never heard of Paperless before but this is exactly what I need, thank you!
vSphere cluster on 3 HP Mini EliteDesks:
Standalone Lenovo TS140:
Synology DS1821+:
Misc:
I use my home server for backing up files, and a simple Jupyterhub setup for computations. I'm using an old desktop which no longer runs the latest games but has way more power than a Pi. Honestly, in retrospect it would have been better to use a remote hosting service. Dealing with dynamic DNS is a pain for updating encryption certificates, and many ISPs don't even support port forwarding anymore since they are using CGNAT. I ended up using Mullvad to forward a port through their VPN, but they are dropping support for that in a month, so I will need a better solution.
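One way around the port-forwarding/CGNAT problem for certificates specifically is DNS-01 validation, which never needs an inbound connection; a sketch with acme.sh against a Cloudflare-managed zone (domain, token and account ID are placeholders, and it assumes the DNS zone sits with a provider that has an API):

```
# credentials for the DNS provider's API (Cloudflare shown here)
export CF_Token="<api-token>"
export CF_Account_ID="<account-id>"

# issue/renew via a TXT record instead of an inbound HTTP challenge
acme.sh --issue --dns dns_cf -d home.example.com
```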
The main thing stopping me from going down that route is the rising cost of electricity. I think to leave my old desktop running 24/7 would cost me something in the order of £15 per month, which is considerably more than I pay for my VPS.
I'm running a DS218+. It maintains my Plex and HomeAssistant servers. I don't do too much with it otherwise but I've been enjoying having it.
Actually been debating upgrading to something with more bays at some point in the future, maybe whenever we move next.
I moved from a DS218+ to a DS1522+ with no regrets. I love that I have the option to run virtual servers, plus the increased speed and the ability to upgrade the RAM, which I couldn't do with the 218+.
ds918+ here w/ a plex server, some usenet/torrent services and also used for important photo backups. I could (and should) probably use it for way more but plex and photo backup alone are worth it for me.
An HP Tiny PC running Unraid, with Ubooquity (e-reader), Jellyfin (media), SearXNG (search engine proxy / aggregator) running on it currently. Bender Dashboard as a webpage to get to these apps. Currently brainstorming other stuff to run on it. Pi-hole is a possibility, I've run that before on other systems, though currently NextDNS is fine for me instead. The apps/docker containers run off of a small Optane drive and the data resides on a 2TB SATA SSD.
Micro Dell PC running DietPi x86 for a super fast/easy setup as a print server. I love the DietPi distro. It's super lightweight, and the built-in TUI and CLI commands give you all sorts of things you can install and do, including Pi-hole; it's my preferred distro to run Pi-hole on.
Dell/Wyse 5020 Extended Thin Client running OPNsense - this is not live at the moment, but I'm planning to replace my Ubiquiti ER-X with this as my own custom router. Way more options/configuration available and much more hardware power. Still sips very little energy, has 4 ports, etc.
I run a lot out of my Synology NAS.
We've got all our documents on the cloud now. Secured. I have really high speed internet now, so I don't struggle with accessibility outside the home.
I was able to ditch Google photos and host all of our photos and automatically back everything up on the phones and whatnot.
I have a server for all of our Calibre ebooks. Slick interface and easy to send books to my Paperwhite or simply read on my phone.
I run a Bedrock server for family and friends. Yep, we just updated to 1.20. Let's go!
Several other things hosted locally, but I also have a bunch of stuff hosted by a company I've used for over 10 years. I seamlessly interconnect stuff from the hosted servers to my NAS.
It can be a lot of work to run things on your own, but it beats the hell out of relying on these third parties that steal your data.
I think I'm going to work on migrating away from Evernote to something locally hosted next.
I like to control my own data, did you notice? 😊
Plex on an old NUC, and I used to run Pi-hole on an RPi4.
I’d love to get some kind of proxy for logging web traffic but never really looked into what’s best and how much resources are required. I think squid on Linux would be the way to go. Would like to get into more invasive inspection of the traffic as well (for fun and to keep an eye on the kids).
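Squid should be light on resources for home-LAN traffic volumes; a minimal sketch of a logging-only forward proxy config (the subnet and paths are placeholders, and clients need to be pointed at port 3128 explicitly unless you add interception):

```
# a bare-bones /etc/squid/squid.conf for access logging
sudo tee /etc/squid/squid.conf >/dev/null <<'EOF'
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
http_port 3128
access_log /var/log/squid/access.log squid
EOF
sudo systemctl restart squid
```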
I keep instances of Stable Diffusion, an LLM, and whatever side project I'm working on running on my home server most of the time.
Not much compared to a lot of the people in the "self-hosted community", but it's plenty for me haha.
I've been running Plex on my living room media/gaming PC for a few years.
Just decided today to pull the trigger and order some parts to build a DIY NAS. Planning on installing Unraid OS and Plex on there. Might use this thread for ideas for what else to run on there.
Instead of paying for RSS digests I'm using https://github.com/piqoni/matcha (an executable binary on a cron) to generate a daily digest in markdown of all my RSS feeds.
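In case it helps anyone copy the setup, the cron side is a one-liner (the binary path and schedule are arbitrary; matcha reads its feed list and output location from its own config file):

```
# run the digest generator every morning at 07:00
(crontab -l 2>/dev/null; echo '0 7 * * * /usr/local/bin/matcha') | crontab -
```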
A CDash server!
Technically it's running on a little VPS on our cluster at work that someone else maintains, but I've been keeping it alive. In my "free" time I've been using it to stand up an automated ticket status tracker (e.g., file a ticket with a reproducer, register it, have it run on nightly builds and report whether it's passing yet). I piggyback off an existing Jenkins server, and launch test runs via Slurm to our cluster.
I've just picked up a Dell R720XD to start my homelab journey, but I'm having to move suddenly so I haven't gotten a chance to set anything up fully. I've got ESXi on it at the minute with a Windows Server 2019 VM for AD and a few other things, plus an Ubuntu Server VM for PiHole, which also handles internal DNS resolution for the various services. The plan, when I get time, is to set up the *arr services with Deluge in containers for media, though I'll probably run Jellyfin on my laptop pointed at an SMB share to benefit from its 3070 for transcoding. I also want to use something like Paperless-ngx, but have considered Mayan EDMS as an alternative.

Longer term I'm also planning a database server to use as a test bench to spitball and practice various data pipeline ideas, as I work as a data analyst and want to improve my skills in engineering, and I also want to learn ML engineering so I can use these for work. Additionally, I'm planning to test out various open source BI tools to see how they compare to the proprietary tool at work, so that I can look to get my department the ability to self-serve direct from our data mart without incurring extra costs. If any works well, the hope is I can get this set up on the data mart server to give the teams access to the data in a controlled manner without needing any SQL knowledge.
I'm also going to get a dedicated NAS setup which I plan to build using a case I've seen on AliExpress that can hold up to 15 hotswap 3.5 inch HDDs. OS will be TrueNAS Scale and I'll run Jellyfin on there when complete as I'll use a more modern CPU so it can do the transcoding instead of my laptop.
As far as backups go, I'm considering a tape-based backup solution, but I haven't had any experience with enterprise hardware outside of the R720XD, so any feedback on the backup option would be greatly appreciated. As I understand it, tape is the recommended option, but I'm open to any alternatives that may be cheaper, as a tape drive is pretty pricey from an initial check.
I'm currently hosting some services on a small mini PC (Celeron N3350, 4GB RAM) with CentOS Stream, a small upgrade from my previous Pi 4 setup due to proper hardware decoding. Overall it is a janky setup where storage is limited to 64GB of eMMC and a 2TB 2.5'' USB HDD, but it takes up no space, which is currently at a premium for me.
Everything below is running in rootless podman, except for acme.sh, wireguard and samba.
Unfortunately, it is somewhat brittle, as I'm hitting this network bug, which throws me into a 502 Bad Gateway page every now and then. Furthermore, I picked up Podman because of podman auto-update, but that doesn't seem to work well with Podman's depends-on feature either.

As far as things go, I want to set up Home Assistant again, some container registry for my images, and a git forge, but I need to work out the kinks of this setup and hopefully make it reproducible.
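For reference, the basic shape of podman auto-update, for anyone who hasn't tried it (the container name and image are arbitrary examples; this assumes the older rootless generate-systemd workflow rather than Quadlet):

```
# create a container labelled for registry-based auto-updates (image must be fully qualified)
podman create --name web \
  --label io.containers.autoupdate=registry \
  docker.io/library/nginx:latest

# wrap it in a user systemd unit and let systemd manage it, then enable the update timer
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name web > ~/.config/systemd/user/container-web.service
systemctl --user daemon-reload
systemctl --user enable --now container-web.service podman-auto-update.timer
```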
Yaay I can contribute now that I got an invite!
My home lab is currently offline due to me just moving and not having the time to put it back up but when it was up I experimented with DIY HPC/parallelization a whole lot. It came in handy to have a number crunching system as I played around with some simulations. The greatest part is that I got most of the parts for virtually no cost due to them being scrap servers, old IBM x3550 M3's which were originally used as SAN volume controllers. I've now got a total of 7 of them and some other ancient scrap server acting as a head node. 4 more kitted out x3560's are waiting for me so I'll need to put up the lab eventually but currently it is exam season so it might be a while.
If you're interested in getting actual enterprise equipment then I'd really recommend just writing to suppliers/providers in your area if they have any scrap laying around in their warehouses. For example these x3560's I got for 10 euro per piece. Pretty much the worst they can say is "no". With that said I would also like to mention that if you intend to have such equipment on at all times, get ready for a laaaarge electricity bill. My lab was off when not in use but could be turned on via IPMI and smart power sockets remotely. Oh and they act like electric furnaces essentially. Others in this topic posted a solution that I too think is perfectly suited and that would be a NUC. More power than an RPi and still quiet and power efficient.
Currently plan on running a simple web server via nginx from my RPi3. Might set it up to host files as well. So really boring at the moment :)
Mine is currently deployed as a cybersecurity learning platform for myself.
Rocking an AMD FX-6350-based system with XCP-ng virtualizing an Active Directory domain controller, one AD-joined server, and an ELK stack server to propagate logs to. My desktop also has a couple of AD-joined virtualized workstations.
And a Raspberry pi with wireguard set up so I can access the lab from anywhere.
For anyone interested in AD / cybersecurity, I highly recommend setting up your own AD environment. It teaches you so much more than I could have ever imagined.
I have a small homelab at home that I mainly use to experiment and learn new technologies and networking concepts.
My physical compute resources consist mostly of refurbished and second-hand machines:
Currently self-hosting:
Recently I've managed to get an ASN and an IPv4 /23 allocated to me by my RIR, so it's been really fun learning BGP and figuring out how to announce part of my IP range into my homelab.
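Out of interest for anyone eyeing the same thing, announcing a prefix is surprisingly little config once the session details are agreed with the upstream; a BIRD 2 sketch where the ASN, prefix and neighbor are all placeholders:

```
sudo tee /etc/bird/bird.conf >/dev/null <<'EOF'
router id 192.0.2.10;

protocol device {
}

# originate the assigned range (placeholder prefix)
protocol static announce_v4 {
    ipv4;
    route 198.51.100.0/23 blackhole;
}

# session to the upstream/transit provider (placeholder ASNs and address)
protocol bgp upstream {
    local as 64500;
    neighbor 192.0.2.1 as 64501;
    ipv4 {
        import none;
        export where proto = "announce_v4";
    };
}
EOF
sudo birdc configure
```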
Looking to start playing with voip and hosting xmpp next.
Nothing fancy, just a cheap Windows NUC running:
R530 with a T1000 for transcoding jobs as the Xeons from that era didn’t have QSV.
Running Unraid with the *arrs, Plex, Jellyfin, calibre, audiobookshelf, checkrr, homebridge, mylar3, Nextcloud, overseerr, Plex-auto-languages, Plex-meta-manager, portainerCI, recyclarr, scrutiny, tautulli, tdarr, Jackett, nzbhydra2, prowlarr, flaresolverr and a couple of other backups and assorted dockers and VMs.
Books and audiobooks have their own Readarr instances.
Plex has everything set up so I barely ever need to touch it or the settings on the screens.
Tdarr makes sure everything is in H.265.
The overlap with indexer managers is because some sites play nice with some and others with others.
I also run a QNAP that is my backup machine; everything backs up to it, and then it uploads those backups to the cloud to complete my 3-2-1 plan. The backups are kept in both bare-metal and file formats, so I can pull whole drives for restore or single files.
I also run two DNS servers, one simple with just reverse lookup zones for local network items, the other with ad-blocking/family filtering so I can A/B if something isn’t working right.
With my PoE switch and rest of my network rack it actually isn’t too bad on power. The 13th gen Dell servers definitely aren’t as power hungry as their older brothers.
I’m very likely switching to a NAS in the near future however as the server is getting long in the tooth.
If I were to change anything at this point, maybe I’d start with a different hypervisor/docker host, but after tinkering with some others the learning curve for unraid is way less steep, and I’m not ashamed to admit I just don’t feel like being big brain sometimes.
A good friend of mine swears by it.
I have a new NAS coming in the next few weeks that I’m using as a project box, with yours and his recommendations I’ll spin that up first and build around that to see if it fits the bill. Thanks!
My dad has a Synology NAS (not sure which model sorry) that we use for photo and video backups. I host a Jellyfin server on my PC and just have a basic HDD for storing movies, TV shows, and home videos as well. Haven't really done anything fancy. Might move the Jellyfin server to a dedicated PC in the future (M1 mac mini maybe?) but currently, it works fine as is.
I have a Pi Zero W with multiple sensors attached that monitors ambient temp/humidity and my aquarium temp, then grabs outside temp/humidity/barometric pressure from two public APIs. I also have a Pi Zero 2 W that runs fluidd/Klipper for my 3D printer, but I guess that doesn't really count.
I honestly want more Pis to run more stuff. I think it's the perfect platform. I used to run Linux on whatever my oldest hardware was that's two steps from the trash can as my home "server", but a Pi is about 3 trillion times more efficient, lol.
I just started to self-host Plex + Overseerr + Sonarr + Radarr + Prowlarr + FlareSolverr + qBittorrent on an RPi4; I will probably move the hosting to an old Mac mini or a NUC.
I'm accessing it remotely with WireGuard (I have a VPS that works as the WireGuard server for my home peers and external server peers). It works like a charm!
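A sketch of the hub side of that arrangement, for anyone curious (keys, addresses and port are placeholders; each home or external peer gets its own [Peer] block):

```
# /etc/wireguard/wg0.conf on the VPS acting as the hub
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

# home server peer
[Peer]
PublicKey  = <home-server-public-key>
AllowedIPs = 10.8.0.2/32
EOF
sudo wg-quick up wg0
```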
Currently really quite a boring setup
Grafana and Prometheus in Docker so that I can develop dashboards
Did have all the *arrs and Plex, but now that we have IPTV that Docker setup has been shut down for a while.
Currently self web-hosting, as I have been for 25+ years. I just can't stand forking over money to a tool like Webflow or Squarespace or Shopify even IF the platforms are good. So I'm running a mail server, web server and FTP, and it's just living on a little 1U box somewhere running Debian.
At home, I'm running Plex as I'm an obligate pirate and data hoarder, as well as my own wee arcade setup on Batocera/EmuElec (haven't landed on a winner yet). I consider that a "server" as it's not in the same rooms I play in, and it's on my network to do stuff like scrape metadata and such. So... server-ish.
I have a homemade NAS with RAID5. It serves as an in-home thing - hosts Jellyfin, TVHeadend (homemade over-the-air to IPTV), home directories of family members, a DNS server (for the LAN currently), a print and "scanner" server, etc. I'm planning on asking my ISP to forward some ports on my internal IP address to go public - at least OpenVPN and DNS, better yet an e-mail server, webserver, SSH and maybe other stuff, mainly for my own use, not for the general public. The NAS is running 24/7 already, so why not use it as a self-hosting server? :-)
I already have one such server (running RAID1 on an Intel Atom N270, drawing around 20W of power) with my own public IP; I used it as DNS, e-mail, webserver and OpenVPN too, and also as off-site backup for family photos. There's gonna be an ISP change on the site though and I will lose the public IP, so I have to adapt. This server started just as a curiosity (to set up my own services, like software training) like 10 years ago and is still running by chance.
I run my own backup & file server with Unraid. I don't really host any containers on it though, and certainly nothing public. Public hosting requires ongoing maintenance and it's too much of a security headache for me personally.
I used to be big into self-hosting, but have moved more and more stuff to cloud. I still have a FreeNAS server at home with 16TB of replicated storage that feeds Plex for me. I have various jails and plugins that run random stuff, like a service that synchronizes Google Keep's lists to Todoist after IFTTT stopped working.
I have a second repurposed mini desktop I built about 8 years ago to run Kodi that now runs my Ubiquiti network controller and NVR for security cameras at my house.
Other than that, my website and personal file storage have moved to Google Cloud. I used to host ownCloud for synchronized storage but moved that to Google Drive.