What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
My wife asked me what kind of investment it would take to make our home server more reliable and more performant. I couldn't believe it. So I sourced an old(er) desktop destined for the trash at work to set up a new Proxmox server on, and am now looking at ways to build a cheap NAS with TrueNAS. I'm looking at keeping new purchases under $600 CAD or so, since I'll likely need to buy some large-capacity hard drives (and maybe a case for those drives).
I'm very excited about this project.
If electricity is cheap for you, I would buy something that supports SAS drives like this, and then fill it with cheap SAS drives like this (16TB for $40).
But if electricity is expensive... don't underestimate the extra yearly costs. Spending a bit more for a more energy-efficient array often pays for itself after a few years. I have an old 12-bay R730xd which uses around 300 watts while transcoding media; running 24/7, that's an extra $400 per year in electricity, and I pay only $0.14 per kWh.
Ever consider a Raspberry Pi? The upfront cost is low and the running cost is also low.
My current home server is a Pi 4 with a mirrored two-drive RAID and a separate drive for the OS. It's a NAS, it's our Kodi media server, OwnTone server, CalDAV server, and some other random services. It is basic and probably not very fast, but it is doing really well for our needs.
How did you get the drives set up? I currently have a modest old desktop running all of our server needs (Jellyfin, Owncast, Stump, SMB, more services as I think of them), but it's just a 128GB SSD and a 4TB HDD set up with LVM. I'm currently thinking about setting up a NAS to expand storage.
I made a small fish shell function yesterday which I'd like to share:
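Roughly, a sketch of what such a function can look like (this body is a guess at the idea, not the exact original; it assumes fish 3.6+ for abbr --function, and shuf from coreutils for picking the random line):

```fish
function abbr_random_line --description "abbreviation that expands to a random line from a file"
    set -l name $argv[1]
    set -l file $argv[2]
    # Define a helper that prints one random line from the file;
    # fish calls it each time the abbreviation is expanded.
    function _abbr_expand_$name --inherit-variable file
        shuf -n 1 $file
    end
    abbr --add $name --function _abbr_expand_$name
end
```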
This allows me to put a text file somewhere filled with interesting commands to run. Then I can type a short command and it will substitute the commandline with one of the lines from that file.
For example, I have
abbr_random_line wt ~/watch
in my fish config. When I type wt and press space, it will expand to a random line from the ~/watch file.

Been playing with adding small controllers to Home Assistant to use LEDs to display the status of things.
Mostly because I have a few Pi Picos that have been on the "find a project" bench for a while now, so I was trying to make use of parts on hand instead of ordering exactly what a tutorial/blog laid out. Got one wired to an AirLift wifi module, but it was enough of a pain trying to get it to actually connect to the wifi that I switched over to a Pico W, which was incredibly straightforward to get connected all the way to Home Assistant. Went to get the LED strip I'd seen in the pile, and it was already wired up to a Zooz Z-Wave module that I'd apparently set up and never decided what to do with.
So now there's a nice little light strip above the washer/dryer that goes red if they're turned on during peak electrical pricing and can otherwise just add directed light to the room when the main light comes on or be a nightlight or whatever. Which is neat.
But now I'm back to square one on "what can I use these Picos for". Could maybe light a bookcase, I suppose, but that seems like a waste of an addressable LED strip and the capability of the little controllers. Maybe somehow get it set up so each shelf can be lit up individually? That actually sounds neat.
I tried -- and failed -- to set up Kubernetes on my home server. I was running a few things in Docker/Portainer before, but I felt like the server was getting too "messy".
The platonic ideal would be to have a server that can be wiped and re-constituted easily if need be. Ansible would probably have worked to accomplish the goal of having a reproducible system, but I had worked with Ansible in the past and wanted to try something new.
So, Kubernetes. Google-scale on a single machine... 🤦 (though I planned on adding more machines later on to get a proper quorum of three). Why, oh why, did I think this was a good idea?
Talos Linux is a distribution that runs nothing but Kubernetes. You can't even SSH into the server because there are no system tools installed -- everything runs through the Kubernetes API. There are a ton of good tutorials on YouTube from the creators of Talos, and I got relatively far with my first attempt: everything needed for running containers worked (modulo a detour where there was not enough entropy in the system to derive encryption keys for the hard disk).
But I couldn't manage to get a port forwarded to an application, specifically ports 80 and 443. Theoretically I could have installed something called MetalLB or just lived with the fact that NodePorts are generally assigned in the 30000+ range, but at that point the idiocy of the whole endeavor caught up with me.
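To illustrate the limitation (a sketch with made-up names): a Service of type NodePort can only bind ports in the default 30000-32767 range, so plain 80/443 isn't reachable without something like MetalLB in front:

```yaml
# Sketch: a NodePort Service (names hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
      nodePort: 30080   # must fall in the default 30000-32767 range
```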
However, my research triggered the YouTube algorithm, and I got served videos about something called Incus. Incus is a Container/VM manager that allows very easy creation of docker containers, system containers (containing an entire operating system except for the kernel) and full virtual machines.
So I wiped Kubernetes, threw Ubuntu Server on the box, and installed Incus. It took a bit of fiddling until everything was set up to my liking, but the docs are thorough, so it was mostly smooth sailing.
With a bit of setup you can use <container>.incus DNS names to resolve the containers from the server itself, so that different containers can talk to each other over a network bridge. I used that to my advantage, installing a Debian container with Caddy as a reverse proxy, and now all the software on there is accessible via HTTPS (using a self-signed cert).

It is very nice that you can use different Linux distributions from the image library without having to go through a manual installation. I installed Forgejo in a NixOS container, because on NixOS that's like 10 lines of configuration to set it up, whereas most other distributions would have required manual setup. Heck, even a docker-compose for Forgejo is more complicated to set up.
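As an illustration, the reverse-proxy part of a setup like that might look roughly like this Caddyfile (the hostname and container name here are assumptions, as is Forgejo's default port 3000; "tls internal" makes Caddy issue a self-signed-style cert from its internal CA):

```
git.home.lan {
    tls internal                     # cert from Caddy's internal CA
    reverse_proxy forgejo.incus:3000 # .incus DNS resolves the container
}
```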
So yeah. Kubernetes was a stupid idea, but it led to Incus, which has worked out fantastically so far. You can snapshot containers and VMs and copy or migrate them to other hosts as well. In theory I can wipe the server, transfer backed-up snapshots to it, and be up and running again if things ever get too messy.
I'm running Kubernetes on my self-hosted servers, and I think you made the right decision. Kubernetes solves some problems in a very elegant way, but it introduces a bunch of new problems and things you need to deal with. And if you don't have a big need to solve the first set of problems, the second, new set is gonna make your life much harder for very little gain. I guess I'm happy I pushed through and learned a bunch of things, but I don't think I would do it again.
It would have been nice if Kubernetes had worked out; it does bring many good things to the table, even if it is likely overkill for such a small setup. I was so curious whether I could make it work that I didn't really stop to think whether it actually makes sense for me to do so.

But I likely wouldn't have learned about Incus if I hadn't tried Kubernetes, so the time was not wasted, even if the decision to invest it feels a bit stupid in retrospect.
Work continues on building a musical keyboard. I got a one-key prototype working last week and now I'm working on multiple keys.
Dipping my toes into ansible. I have tons of experience with puppet and terraform, but somehow never had the opportunity to work with ansible.
I am creating some roles related to server hardening. Things are going smoothly though I am struggling to pick good "sizes" for my roles.
Should I have a bunch of small roles like "noexec on shared memory mount" and "configure unattended upgrades", or should I have one role that handles all the hardening?
I am assuming I should go for the former, but that would mean that in each playbook I need to specify the individual hardening roles. And if I then add more, I need to make sure not to forget adding them to my playbooks.
I could make a playbook for hardening and include that in other playbooks, or I could make a hardening role that has all the small roles as dependencies. But both of these options seem frowned upon from what I could find for best-practices.
I think you could group by dependency, like a debian-hardening role that includes the Debian-specific unattended-updates and so on. Then another one for systemd-hardening, mountpoints, network, and then any application-specific roles should be their own thing.

Thanks for the feedback! It helps me get a better feeling for the required level of granularity.
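For what it's worth, one way to express that grouping is role dependencies in an umbrella role's meta/main.yml -- a sketch with made-up role names:

```yaml
# roles/debian-hardening/meta/main.yml (hypothetical role names)
# Pulling the umbrella role into a playbook runs all of these first.
dependencies:
  - role: unattended-upgrades
  - role: noexec-shm
  - role: mountpoints
```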
Nothing special, but something I need: a userscript (userscript.js) that adds two buttons to imdb.com/title/.../reference to look up the movie on Letterboxd and also on reddit (via truefilm and criterion, by way of Google).

It's pretty easy to add to if you wanted to have it go to your favorite Linux ISO search.
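The URL-building part of a script like that might look something like this (function names and URL patterns here are my own sketch, not taken from the actual script):

```javascript
// Build a Letterboxd search URL from a movie title.
function letterboxdSearchUrl(title) {
  return "https://letterboxd.com/search/" + encodeURIComponent(title) + "/";
}

// Build a Google search restricted to r/TrueFilm and r/criterion.
function redditSearchUrl(title, year) {
  const query = `${title} ${year} site:reddit.com/r/TrueFilm OR site:reddit.com/r/criterion`;
  return "https://www.google.com/search?q=" + encodeURIComponent(query);
}

// In the userscript itself, these URLs would be attached to injected
// <a> elements on the IMDb reference page, e.g.:
//   const a = document.createElement("a");
//   a.href = letterboxdSearchUrl(movieTitle);
```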
https://i.imgur.com/NPNUeZV.png
edit: I also added a way to send the IMDB ID, Title, and Year to a Google Sheet via POST -- pretty handy -- (demo)