What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
I got my first 3D printer, a BambuLab A1 Mini Combo, the other day. I've been printing things non-stop. My favorite thing is printing parts for the printer. A bucket for collecting waste filament, a scraper handle, a small drawer attachment, handles for the allen wrenches, etc. I like the idea of owning something that both creates and solves its own problems.
Beyond that, it's a lot of novelty prints -- a Benchy, a turret from Portal, some fidget toys, a D&D mini of a rat (a gift for a fellow player), and other little goofy things.
I'm learning a lot...like multicolor prints are a pain in the ass unless you plan things out intelligently. When I made the rat mini, it didn't have a baseplate, so I fiddled around in Bambu Studio to add one. The mini was one color, the stand another. And I sank the rat a few layers into the stand, which meant that for a few layers the printer had to swap colors multiple times, and it took forever.
I really want one. But the truth is, I really have no idea what I would print and can therefore not justify the price. I just think they are extremely cool.
That's why I held off on getting one for years. Then I got into flying and building FPV drones, and they're incredibly useful for that. You can't realistically print a decent drone because 3D printer filament isn't strong, stiff, or light enough, but you can print tons of parts for them, and those parts often get destroyed when you crash (which is a lot, depending on the type of flying you do), so I'm usually printing new stuff at least once a week.
I actually kind of hate the little sculptures that most people who own 3D printers end up printing constantly, because I can't stand clutter and tchotchkes, but I do end up printing little functional parts quite a bit for stuff around the house (custom holders for tools to go on the wall, bins for my toolbox, headphone holders, some mounting brackets for a power strip, some brackets to hold up some LED tubes I had, among other things).
I'd say unless you have a specific hobby that would be well supported by 3D printed parts, or you like little sculptures and want to make 3D printing a hobby in and of itself, they're probably not worth it for most people.
I initially thought this too (and of course YMMV). I had one small niche idea I wanted to print, and that propelled me to get into budget-level 3D printing to start. Now that I have a 3D printer, I've found all sorts of useful things to print (brackets, holders, covers, stands, etc.) - lots of small fiddly things that I would otherwise overpay for somewhere else. It's probably already paid back a lot of the cost.
I find it interesting how it's completely transformed how I look at the world - that sounds overblown, but I mean it genuinely. I now see everything I use/own/live with through a new mental map, a lens of "is there something 3D printable that would improve this?", and I end up finding all sorts of neat little things I can custom-make.
Super niche example: I noticed that the way my filters lay against the evaporator coil of my HVAC meant they were uncomfortably close to the drain pan below, and also risked slipping during install and falling down beside my blower/air handler in a gap between it and the wall. So I printed some small clips that clip onto the drain pan, without impeding any drainage, and simply provide a small platform and a catch for the bottom of the filters to rest against.
This has prevented them from incidentally picking up any extra moisture from the drain pan and also makes the install more secure and less likely to slip. These clips are so small that they used probably 1/200th or less of a roll of filament (which costs anywhere from $15-25 for the entire roll), meaning they cost just cents to make myself. Even if I could buy some kind of clip online and "make it work", it would not be custom fit, and even if cheap it would cost astronomically more than cents per clip.
It's a lot of little things like that^ that have made this whole process wonderful: finding all these ways to very slightly improve or fix small annoyances with things, or custom-"manufacture" my own parts that just don't exist for sale anywhere.
I got a P1S at home. I travel most of the year though, so I've been thinking of getting an A1 Mini to take on the road.
Not sure how feasible that or other portable printers really are. But the idea intrigues me.
If you had a good carrying case for it, I think the A1 mini would be a good portable option.
I played in my first online DnD session (first DnD session since 1995, for those keeping track) and I was blown away. I got myself a license for Foundry VTT, built a Linux server, and started working on setting it up. It's a little more "involved" than I expected, but I'm learning bit by bit.
How did you play your first session, assuming you are setting up FVTT after the fact?
FVTT is a bottomless hole of plugins, so watch out. Conventional wisdom is to just play with the barebones for a bit and slowly add plugins that make sense, rather than going through all the options and adding them before you need them.
I was invited to a virtual con and it was roll20. I wasn't a fan, and I didn't like the idea of a subscription service. One time purchase seemed better to me.
Yes, I'm finding out that there are a LOT of modules and they are confusing af. The real challenge has been the battlemaps and stuff. I feel like there's a bewildering array of monthly subscription options to get maps. Yes, the irony is thick.
Anyway, I'm inching forward bit by bit. All of my experience is 1 and 2e from the 90s, plus some more recent Monster of the Week. That being said, Pirate Borg (and all the borgs) speak to my soul, as does Shadowdark.
FoundryVTT is great, as someone else who also used Roll20 for a while and made the switch. The modules can be overwhelming but it's so nice to just be able to tweak stuff to fit your situation.
Were you self-hosting or cloud-hosting, out of curiosity? I ran it off my own machine for a while but felt icky not having HTTPS set up properly. Ended up following a guide to set it all up on a little free Oracle cloud instance and that's worked pretty flawlessly since. https://foundryvtt.wiki/en/setup/hosting/always-free-oracle is what I followed if it's of interest.
I don't know if it's still an issue, but a few years ago when I was more into FVTT there would be posts about those Oracle instances suddenly closing/being deleted and inaccessible without warning. Basically, "you get what you pay for". Did you see those conversations, and were they recent at all? Obviously you haven't had that happen, but do you make backups?
There are two different types of Oracle Cloud accounts. There are "always free" accounts that can access only the always-free resources. If you try to access more, you are blocked. The number of compute nodes for always-free accounts is also limited, so even once you have an account like this, it is very difficult to actually use. Oracle makes it quite hard to get these accounts.

Then there are regular paid billing accounts. Getting these is much simpler (although more complex than it should be). With this kind of account, you pay for what you use and don't have any artificial limits. You can use the ample compute resources, so you don't have to fight for the specific "always free" nodes. Most importantly, you can still use up to the always-free resource limits without a charge (although their craptastic website interface often doesn't label things as always free). If you go over the limits, it charges your billing method.

All of the issues I have heard about are on the "always free" style accounts, not paid billing accounts. I've had a paid billing account for many years now and have only paid Oracle a handful of dollars (when I would intentionally use something beyond the limits). As long as you use a full billing account, you shouldn't have any issues with the Oracle free tier.
So when I first set it up I remember reading some warnings that on the free tier they might terminate instances which were sufficiently idle, to free up resources for others. It might have been something like that you were hearing about?
There was advice on keeping the instance small enough that just the background work of keeping Foundry ticking over with no users connected was enough to keep it 'busy' enough for those thresholds.
I've got regularly scheduled backups of the instance disk happening within Oracle itself, admittedly I don't have a secondary backup off their platform so if they nuked my account I'd be out of luck.
Hmm, yeah, maybe I should have done a free Oracle instance. I'm running it on a DigitalOcean $10/month droplet and I'm pretty happy with it. I was able to get Let's Encrypt all set up and it's running super smooth.
As I've told my wife, the real challenge remains the same.
Finding players.
I've been out of the hobby for roughly 30 years and my career kept me living out of a suitcase and traveling for the better part of those three decades.
Peopling is hard, yo. ;)
I mean if you've got something working nicely that's what matters.
And yeah I'm with you on the social aspect being a challenge. I've got a few friends interested in the hobby that I've been playing with which is lucky because I'm not sure that I'd otherwise go through the stress of putting myself out there to find a group that I gelled with. But then there's some fairly disparate tastes within my group so it can be a bit of a balancing act making sure everyone is enjoying themselves.
Well if you ever need another or want to talk ttrpgs my dms are open. 😁
Look at me peopling!
I read a blog post about using Podman and Systemd to manage containers on a server. I can't stop thinking about it. I am going to set this up with Debian and probably start running some smaller applications in my homelab like Penpot and Stump.
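For reference, a quadlet is just a small unit file that Podman's systemd generator turns into a regular service. A minimal sketch of a rootless one -- the image name, port, and volume below are placeholders, not Penpot's or Stump's actual settings:

```ini
# ~/.config/containers/systemd/myapp.container
# (hypothetical app; swap in the real image, port, and volumes)
[Unit]
Description=Example web app managed as a Podman quadlet

[Container]
Image=docker.io/library/myapp:latest
PublishPort=8080:8080
Volume=myapp-data:/data
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, this shows up as `myapp.service` and can be started and inspected like any other unit.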
Go for it! Also, it's way easier on Fedora. Plus cockpit is nice.
Seconding Fedora Server for podman management with Cockpit. I love it.
Wow I never even thought about this. Right now I run a proxmox server with many VMs and my goal is to move to one or two VMs and orchestrate containers in those.
I do containers on the host OS but they should be very performant in VMs too. I also use Ansible for spinning things up, and I wish I would have adopted that sooner.
Same here....this is a very interesting idea!
A couple of things:
My DnD party needed a better way to schedule our sessions, which for the upcoming campaign will be dynamic (let's find a day once every two weeks) rather than static (let's meet every Thursday). I tried out a few tools: StrawPoll, When2meet, Rallly. They all could have been suitable, but with some work. On top of that, I wanted an API so I could wrap a script around it to send Discord notifications about who still needs to list their availability and when a date was decided, auto-add new polls, and so on, but I just couldn't find a tool that quite did everything I needed.

So I had an LLM make me a tool, purpose-built to what I want. Enter PartyPlanner! Serves our purposes great.
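For anyone wanting to build something similar, the Discord side is just a webhook POST. A rough sketch of a reminder script -- the PartyPlanner endpoint and response shape here are hypothetical placeholders; only the Discord webhook payload format is the real thing:

```python
#!/usr/bin/env python3
"""Nag the party about missing availability via a Discord webhook.

The PartyPlanner API URL and response fields below are hypothetical;
the Discord webhook payload ({"content": ...}) is the real format.
"""
import requests

PARTYPLANNER_URL = "https://partyplanner.example.com/api/polls/current"  # hypothetical
DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/XXX/YYY"         # your webhook

def main() -> None:
    # Hypothetical: ask the scheduling tool who still hasn't filled in availability.
    poll = requests.get(PARTYPLANNER_URL, timeout=10).json()
    missing = poll.get("missing_respondents", [])

    if missing:
        message = "Still waiting on availability from: " + ", ".join(missing)
    else:
        message = f"Everyone has responded! Next session: {poll.get('chosen_date', 'TBD')}"

    # Real Discord webhook format: POST a JSON body with a "content" field.
    requests.post(DISCORD_WEBHOOK_URL, json={"content": message}, timeout=10).raise_for_status()

if __name__ == "__main__":
    main()
```

Dropped into a cron job or a systemd timer, that covers the "who hasn't responded yet" ping without anyone having to remember to chase people.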
I also spun up a DocMost container for our party to use, for notes. And damn is it a nice wiki. Much better system for taking and sharing notes than the shared Google Doc we were using before. If anyone needs a place to whip up some documentation, highly recommend. Great ACLs, editing features, linking features, sharing features. Way more solid than when I tried out MediaWiki.
When updating some things related to my IRC client, I was reminded of the old program pisg, which allowed people to generate statistics in HTML format from IRC logs. I decided to see if there was anything newer/better for a few of the channels I'm in, and found SuperSeriousStats. Great, but I detest running PHP on bare metal. Peeking at the issues, the dev specifically said he didn't want to bother maintaining an image/containerizing the application. So I did.

I just went through a comedy of errors repairing my Zojirushi rice cooker. It's out of warranty and there's only one repair center in Canada with a steep diagnosis fee.
Basically a design flaw allowed a small amount of liquid to leak onto a daughter board containing the clock battery and some serial lines and power for two sensors. This caused severe corrosion which degraded a pin in the FFC connector to the point that it fell out entirely.
After some lengthy troubleshooting and diagnosis and confirmation of said diagnosis, I set about ordering repair parts. I thought I found the right receptacle only to solder it in and realize the damn thing is upside down. So all my connections were reversed. Good thing I checked continuity!
I looked everywhere for the mirror version of the replacement. I even checked Chinese and Japanese suppliers, there's not even a search term for the orientation so I can only conclude Zojirushi manufactured them in house to frustrate anybody trying to make repairs.
So my next logical thought was "well I'll just replace the other side, cancelling out the change". I disassemble more of the rice cooker and extract the main board only to discover that the receptacle on that board is the exact same orientation as the replacement, only at a right angle.
Now I'm thinking about building an adapter, a crossover cable of sorts. I did buy extra parts thankfully. I considered a perf board adapter with the connectors on opposite sides, reversing the pin order, then two cables to complete things. But all my perfboard on hand was too big of a pitch.
In the end, I very very carefully spent about thirty minutes gently bending the pins with fine tweezers into the correct configuration and gently coercing it into place. It's not perfect but considering the part apparently doesn't exist, it's the best I can do without printing my own replacement board.
Tonight we had our first pot of rice in a week and it was fantastic! So that's how $2 and a lot of time saved my fancy rice cooker.
My Zojirushi rice cooker pot had a scratch. Zojirushi Thailand's website listed several spare parts. However, the pot is listed as sold out. We called them and they say they'll contact us once it is in stock as they needed to import the pot. Two months later, it is still not in stock saying shipment delayed. Three months later it is back in stock and we ordered it.
The box says "Made in Thailand"
Obviously, the box was made in Thailand while the pot was imported from space :D /s
Zojirushi makes great products, but the repairability and parts availability leaves much to be desired. I imagine outside of Japan there's just not enough of a market to warrant keeping a good stock of repair/replacement parts. During my diagnosis phase there were lots of YouTube videos of Thai and Vietnamese uncles repairing them, so it feels like even throughout SE Asia there's just not the market, or the official channels are just too expensive.
Slight tangent: I've always used a "dumb" rice cooker (which is actually quite clever) for rice, steamed buns, and many other dishes. Is there an advantage to using a "smart" electronic rice cooker?
I've always found the dumb ones to be too inconsistent. They always produce good rice but rarely great rice. Being able to keep rice warm in the smarter one and reheat as well is a killer feature imo. Microwaved or stove reheated rice is just never quite right and I can only fry so much rice.
I agree that this one is maybe overkill - do I really need AI adjusting cooking temperature and pressure on the fly? Probably not, but the induction cooking is quite nice for even cooking and the low pressure does make the rice a bit fluffier.
I can say that I've used every function on mine, with the exception of "quick", ironically. It is quite nice that as long as you follow the very basic directions, whatever you make comes out great every time.
To my friends that aren't daily rice eaters, I usually advise them to get either a dumb one or a cheaper Zojirushi/Tiger. Or teach them to use a pot, if they don't know. All rice cooking methods are valid, except I still can't quite accept the Indian style of boiling basmati in water like pasta. I mean, it obviously works, but it's unsettling lol
I've been using NixOS for a few months now and I want to clean up my configs a bit, switch out some packages with flakes for more reliable version control, and start using Home Manager. I haven't seen any guides that show what an all-inclusive setup looks like, or how people handle stuff like secrets and backing up other derivative configs, which is part of what has held me back from making this jump. Ideally I would also keep my configs in actual version control. If anyone knows of any really good guides to accomplish this, please share them with me. Right now I have a single large config that keeps growing steadily as I add new functionality to my system.
My latest project is cleaning up my music collection a little bit. I have a lot of albums that are stored as a single audio file along with a cue file, but most players don't support this format, so I'm finally biting the bullet and slicing those up. The primary tools for this are cuetools and shntool, but I also found out there's a GUI called Flacon. It feels like a good opportunity to try vibe coding to get help writing the scripts, and hopefully accomplish this task in a single sitting.
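For anyone attempting the same cleanup, the core of such a script is usually just shelling out to shntool and cuetools once per album. A rough sketch, assuming each album directory holds exactly one .cue next to one big .flac, and that `shnsplit` and `cuetag` (sometimes installed as `cuetag.sh`) are on PATH -- test it on a copy of your library first:

```python
#!/usr/bin/env python3
"""Split single-file album rips (FLAC + CUE) into per-track FLACs.

Sketch only: assumes one .cue and one album-length .flac per directory.
"""
import subprocess
import sys
from pathlib import Path

def split_album(album_dir: Path) -> None:
    cues = list(album_dir.glob("*.cue"))
    flacs = [f for f in album_dir.glob("*.flac") if not f.stem[:2].isdigit()]
    if len(cues) != 1 or len(flacs) != 1:
        print(f"skipping {album_dir}: expected exactly one .cue and one album .flac")
        return
    cue, flac = cues[0], flacs[0]

    # Slice the big FLAC into "NN - Title.flac" files in the same directory.
    subprocess.run(
        ["shnsplit", "-f", cue.name, "-o", "flac", "-t", "%n - %t", flac.name],
        cwd=album_dir, check=True,
    )
    # Copy the tags from the cue sheet onto the freshly split tracks.
    tracks = sorted(p.name for p in album_dir.glob("[0-9][0-9] - *.flac"))
    subprocess.run(["cuetag", cue.name, *tracks], cwd=album_dir, check=True)

if __name__ == "__main__":
    for d in Path(sys.argv[1]).iterdir():
        if d.is_dir():
            split_album(d)
```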
Broadly, take a look at the flake configs scattered across GitHub. For secret management, sops-nix is probably the way to go.
I like flacon!
There's also unflac if you want to just point it at a bunch of CUE files and let it rip!
I wrote a script to help merge folders which share the same name, excluding some folder names like "Disc 01", etc. It merges into the folder which has the shallowest depth. You can add more exclusions with -E name (add a path separator, i.e. name/, to exclude all subfolders too).
I think the funnest part was creating the list of default folders to ignore. After excluding some patterns it's surprising how few common folder names there are. Seems like shallow folder hierarchy makes a lot of sense as long as it stays performant.
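Not the actual script, but for anyone curious, the core of that kind of merge is pretty small. A rough sketch of the grouping-and-merge idea -- the default exclusions and the shallowest-target rule are approximations of what's described above, and it doesn't handle name collisions, so run it on a copy first:

```python
#!/usr/bin/env python3
"""Merge same-named folders into the shallowest one, skipping disc-style names.

Illustrative sketch of the approach described above, not the original script.
"""
import argparse
import shutil
from collections import defaultdict
from pathlib import Path

DEFAULT_EXCLUDES = {"disc 01", "disc 02", "cd1", "cd2", "artwork", "scans"}

def merge_same_named(root: Path, excludes: set[str]) -> None:
    groups: dict[str, list[Path]] = defaultdict(list)
    for folder in root.rglob("*"):
        if folder.is_dir() and folder.name.lower() not in excludes:
            groups[folder.name].append(folder)

    for folders in groups.values():
        if len(folders) < 2:
            continue
        # Merge everything into the folder closest to the root.
        target, *others = sorted(folders, key=lambda p: len(p.parts))
        for other in others:
            for item in other.iterdir():
                shutil.move(str(item), str(target / item.name))
            other.rmdir()
            print(f"merged {other} -> {target}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("root", type=Path)
    parser.add_argument("-E", "--exclude", action="append", default=[],
                        help="additional folder names to ignore")
    args = parser.parse_args()
    merge_same_named(args.root, DEFAULT_EXCLUDES | {e.lower() for e in args.exclude})
```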
I'm having fun implementing my personal links server with Shelley and exe.dev. I now have a bookmarklet that I can use to select paragraphs off a web page and save them as blockquotes in a Link's summary field. Also a copy button that puts them in the clipboard in the format I like for pasting into a Tildes post. No more converting to Markdown by hand :-) Lots of other stuff, too, like drafts and tags, but it's a work in progress.
A few days ago I asked the AI to write a design doc for a tricky feature (including a mockup) and found that the process works well; it misunderstood what I wanted, so I told it to fix the design doc. So, my personal hobby project now has a Process. We're at 10 completed design docs and one in progress.
One thing I can do with a design doc is have multiple AIs do design review. Here's my prompt:
So it gives me a list, and if I like the suggestions (and usually I do), I tell it to go edit the design doc itself.
It seems like GPT-5.1-Codex in particular is pretty good at finding subtle flaws. Furthermore, I ran the same prompt again and it found more stuff to fix. It's like you can just press a button and it will keep giving you more suggestions. But at some point I'm just like "let's leave that up to the implementer to decide."
My repeat-test library is coming in pretty handy. I did a release to add a couple features. It now has a quick reference aimed at getting coding agents up to speed with writing property tests, though I suppose humans could use it too.
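As a generic illustration of what a property test is (this uses Python's Hypothesis library rather than repeat-test's own API), the idea is to state an invariant and let the framework generate the inputs:

```python
"""A minimal property-based test: JSON serialisation should round-trip.

Generic illustration only -- Hypothesis here, not repeat-test. Run with pytest.
"""
import json
from hypothesis import given, strategies as st

@given(st.lists(st.text()))
def test_json_roundtrip(items: list[str]) -> None:
    # Property: for arbitrary generated lists of strings,
    # decoding the encoded value gives back the original.
    assert json.loads(json.dumps(items)) == items
```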
I have too many projects and too little time, with two long games (soon three, as a Tilder told me Hitman is adding cross-progression) I intend to play in Q1.
This week I added user verification spoofing to KeePassDX, for which I already posted a PR. I also added AAGUID spoofing, but that will need to be tested. The thing with passkeys is that websites can ask a passkey provider (e.g. a password manager) to perform "user verification". Most user-friendly providers, like the OS built-ins, will perform a fingerprint check or Windows Hello, so many sites interpret this as two-factor verification (hardware-protected key + whatever is used to unlock the key). However, it doesn't make sense to unlock an already unlocked password manager. Also, why does software on our computers listen to random instructions from the internet anyway? So, KeePassXC had a huge drama where they said they will not implement it, and a KeePass committee member said you might be blocked.
KeePassDX's author thinks they can find a better common ground, so in their implementation you can either be 100% spec-compliant, where a passkey request will require you to unlock an already-unlocked database, or you can disable verification, in which case, if the website "requires" it, the password manager will block you from progressing at all. I think it is against the open source spirit to implement such user-hostile features, so I added a new "spoofing" option: if you disable verification, it will still say verification was performed.
Of course the author is not happy to merge it in, so I was planning to add my own CI and release a binary. But yeah, maybe sometime.
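For context on what's actually being "spoofed" here: user verification is just one bit in the authenticator data that the provider returns, and a relying party checks it roughly like this. This is a sketch of the flag layout from the WebAuthn spec, not KeePassDX's code:

```python
"""Where the "user verified" signal lives in a WebAuthn assertion.

Sketch of the authenticator-data layout from the WebAuthn spec; this is what a
relying party inspects, not KeePassDX's implementation.
"""
import struct

FLAG_UP = 0x01  # bit 0: user present
FLAG_UV = 0x04  # bit 2: user verified -- the bit a provider sets (or spoofs)
FLAG_AT = 0x40  # bit 6: attested credential data included
FLAG_ED = 0x80  # bit 7: extension data included

def parse_authenticator_data(auth_data: bytes) -> dict:
    # Layout: 32-byte rpIdHash | 1-byte flags | 4-byte big-endian signCount | ...
    rp_id_hash = auth_data[:32]
    flags = auth_data[32]
    (sign_count,) = struct.unpack(">I", auth_data[33:37])
    return {
        "rp_id_hash": rp_id_hash.hex(),
        "user_present": bool(flags & FLAG_UP),
        "user_verified": bool(flags & FLAG_UV),
        "sign_count": sign_count,
    }

# A site that "requires" user verification simply rejects assertions where
# parse_authenticator_data(...)["user_verified"] is False.
```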
Another project two years in the making is the port of libthai to Rust that I was doing around the new year vacation. libthai is a dependency of Pango, which is a dependency of GTK, used by Linux apps. It depends on libdatrie from the same author, which I successfully ported to Rust (I didn't like the Rust-side API too much, but at least it's working...). However, Rust doesn't have cdylib versioning, and I believe that blocked some application compilation, which I have no clue how to debug. As for porting libthai, most of it is pretty easy. I tried comparing Claude Agent (in JetBrains) output to mine, and I think most of it could be ported by an LLM (I did not use an LLM in the actual code, only for consulting and code review).
The hardest part of the port is the Thai word segmentation algorithm. The code itself is quite hard to read, with pointer arithmetic instead of arrays. I asked the original author and he says some of it is because that was the only way to write optimized code that didn't rely on compiler optimizations, as back in the day some compilers were more primitive. From my understanding, the code uses two linked lists that act as a priority queue and a recycle pool. Porting that code 1:1 is doable, as I have done that, but it was impossible to get rid of all `unsafe` without changing the data structure. Rust's `std::collections::LinkedList` is unusable in the real world - you can't even remove an object in the middle that you already hold, which is the whole point of using a linked list over a Vec. And changing to another data structure (like a Vec or a heap) requires understanding the algorithm in full, which also calls into the datrie data structure.

Sometimes I ask myself why not just make this, or Pango, a glue layer over Unicode's ICU, which I believe Qt does use. I searched around, and it seems that in the early 2000s IBM did ask Pango about contributing such a patch, but did not receive a reply. libthai's author told me that it might be because Pango's original design was already well thought out to support complicated languages, and people back then viewed Pango as corporate-backed open source.
A few weeks ago I was doing the port in a maid cafe. Not exactly the best place to code, since the point of such a cafe is to interact with the cast members. They asked what I was coding, and I realized they're literally staring at the output of word-breaking algorithms all the time, yet no layperson would realize it is such a complicated piece of code. And yet, you either use Thai word breaking written by non-Thai people (ICU, OS native, etc.), or libthai, which is maintained by a single person in his late 40s, who wrote it almost 30 years ago in a team that was dissolved over 20 years ago.
I've been using Debian since August, after 17 months using Aurora, basically a heavily customized Fedora Atomic image. I have a Minisforum V3, and I spent hours troubleshooting my sound hardware on Debian, while I was able to fix it quickly with Bazzite, using GNOME.
I'm aiming to get back onto Bazzite, but am going to test the hell out of the environment before I switch my computers back over. I loved Aurora; the only issue I had was a Linux graphics bug that I fixed by rolling back for a few weeks until the fix hit Fedora and then the image. So I'm applying a more refined approach to the transition:
2.a Mega.io (via repo)
2.b NordVPN
2.c Vivaldi (could flatpak it, but why not deploy?)
2.d virt-manager/KVM
I have a couple of Github repos set up for me to customize these, so once builds are running it's fairly trivial to start adding pre-configured packages, and repos should not be too difficult. I had tried this previously but wound up using a Fedora distrobox to manage Mega, but it lacked file manager integration.
The primary goal I have is to eliminate the few papercuts I had, and learn more about the build system so I can get exactly what I want.
I also procured a Samsung Galaxy Tab S7 to put LineageOS on. Using Linux I had to get a little creative with the tools (shout-out to Thor), but it's up and running, basically ready to sit as a dedicated reading device on an arm over my bed for books I upload to Google Play Books.
Another goal for this week is to get a robust container-based system of some sort started using podman quadlets on my Debian server, since I know everything I need to do to make it happen (I did a similar implementation at work using bash scripts, but quadlets are far better). Gonna start with a router-based NAT to a Pi-hole container, then something that I can break the DB out of, like a Luanti server. I need to get intimately familiar with Kubernetes for work, so I'm working my way up the container stack to get close to the underlying/related technologies at various layers/stages.
I've started the process of setting up a low-cost VPS to act as an ad blocker for my phone connections. I'm deciding between providers, VPN vs. proxy, and other things to think about. I think I've settled on running unbound + AdGuard on my virtual server, and I'm leaning towards a VPN approach with WireGuard. My biggest concern with choosing a VPN over a proxy is battery usage, but it sounds like WG is decent in this respect. I understand this setup won't catch everything, but it should make mobile browsing and some app usage a bit more tolerable.
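For reference, the phone side of a setup like that is just a small WireGuard config where the DNS line is what points every lookup at the AdGuard/unbound instance. A sketch with placeholder keys and addresses:

```ini
# Phone-side WireGuard config (placeholder keys and addresses).
[Interface]
PrivateKey = <phone-private-key>
Address = 10.8.0.2/32
# Point DNS at the VPS so AdGuard (backed by unbound) filters every lookup.
DNS = 10.8.0.1

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# 0.0.0.0/0 routes all traffic through the tunnel; using just 10.8.0.1/32
# instead tunnels only the DNS traffic, which saves some battery.
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```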