What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
I have been working on a tool that takes manifold surface meshes and converts them into F-Reps with unit gradients, which can be parsed by Fidget. At a higher level: I am taking 3D objects and turning them into pure mathematical equations. I found and fixed a bug in Fidget's JIT evaluator as a result of this, which was satisfying, as it was tricky to find. The representation could be more efficient: right now each triangle is represented as a unit-gradient function, which means that for manifold meshes each triangle edge is encoded in the result twice, and each vertex at least three times.
Future work for this technique will involve using an oracle function that does raycasting to determine if an evaluation point is inside or outside the surface, and will allow for arbitrary remeshing of triangle soup with Manifold Dual Contouring (with the intent to rely on the Fidget or libfive implementation).
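To make the representation concrete, here is a minimal numeric sketch (not my actual tool, which emits an expression for Fidget to parse rather than evaluating anything in Python): each triangle contributes an unsigned distance field, which has unit gradient almost everywhere, and the whole mesh is the min() over all triangles. The duplication mentioned above falls out of this directly, since a shared edge is encoded by both adjacent triangles and a vertex by every incident one.

```python
# Illustrative sketch only; names are hypothetical.
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), following Ericson's
    'Real-Time Collision Detection' region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab.dot(ap), ac.dot(ap)
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region a
    bp = p - b
    d3, d4 = ab.dot(bp), ac.dot(bp)
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # edge region ab
    cp = p - c
    d5, d6 = ab.dot(cp), ac.dot(cp)
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b)                     # edge region bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # face interior

def mesh_distance(p, vertices, faces):
    """Unit-gradient field for the whole mesh: min over per-triangle
    distances. Note the redundancy: a shared edge is evaluated by both
    adjacent triangles, a vertex by every incident triangle."""
    return min(
        np.linalg.norm(p - closest_point_on_triangle(
            p, vertices[i], vertices[j], vertices[k]))
        for i, j, k in faces
    )
```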
I've been working on a custom Bluetooth speaker. I've been getting into woodworking in the past couple of years, and this gave me a way to combine two hobbies. I had some really nice scrap hardwood pieces (I think it's walnut) I got at my local construction junk shop, so I'm building the case out of that. It will be in the shape of an old-timey wooden radio (those big ones people had in the '30s or whatever), though much, much smaller.
The internals will be based on a Raspberry Pi Zero running a Python daemon to handle Bluetooth connections. I may give it a way to put the wifi adapter into AP mode and then provide a web based configuration UI, as well, but that's a stretch goal and I'm not even sure if it will be useful.
I've got the software and electronics mostly finished (except for the aforementioned web stuff), although I haven't decided on physical controls yet. Right now the software assumes a rotary encoder, but I may need to add some code to handle a potentiometer instead (because then I can get the on/off behavior in the knob as well).
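For illustration, the two control options might look something like this with gpiozero (a sketch, not my daemon; the pin numbers and the MCP3008 wiring are hypothetical, and a bare Pi needs an external ADC like the MCP3008 to read a potentiometer at all):

```python
from signal import pause

from gpiozero import MCP3008, RotaryEncoder

# Option 1: rotary encoder, relative steps mapped to volume events
encoder = RotaryEncoder(a=17, b=18)  # hypothetical BCM pins
encoder.when_rotated_clockwise = lambda: print("volume up")
encoder.when_rotated_counter_clockwise = lambda: print("volume down")

# Option 2: potentiometer read through an MCP3008 ADC; an absolute
# position, so the bottom of the travel can double as "off"
pot = MCP3008(channel=0)

def poll_pot():
    # the daemon's main loop would call this periodically
    level = pot.value  # 0.0 .. 1.0
    if level < 0.02:
        print("off")
    else:
        print(f"volume {level:.0%}")

pause()  # keep the process alive for the encoder callbacks
```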
I'm moving slowly on the case because I am still very new to this kind of precision woodworking and I don't want to screw it up. I formed the rounded top yesterday using my benchtop belt sander, and it turned out pretty well. The hardest part is going to be routing out the inside to make space for the electronics.
After taking a break with mild burnout symptoms, I'm currently setting up a seedbox inside a VM on my Proxmox machine, though I'm getting somewhat frustrated by the "exercise left to the reader" attitude that the rtorrent developers seem to have towards documentation. I'm not sure what else I might use instead of rtorrent, though; it's all I've known, and I've never done anything particularly complex with it.
I just need something simple and performant that will sit alone in a Linux VM and download/seed torrents, with some limited automation support (scheduling, events) so that I can move files on torrent completion, delete torrents on ratio limits, etc. Something CLI-based that I can run inside a screen session would be perfect.
If I find the energy, I also want to set up either something containered, or inside a VM, which runs relatively graphically hefty programs & a Sunshine instance, that I can connect to using Moonlight or similar tools. The ideal would be a system which runs the graphical program inside some form of kiosk desktop environment, and can spin-up and spin-down when needed / not needed. I have absolutely no idea exactly which angle I want to approach that problem from, tools-wise, though. I know that I would have to get SR-IOV working on the Intel Pro B50 first, which means playing with the opt-in experimental kernel in Proxmox as I need kernel 6.18 or higher.
I tried it out for a few months and to be honest... not too impressed. Something like qbittorrent-nox (that is, no X11, no GUI) or transmission-cli is a lot more responsive and scalable. I've slowly built up a collection of qbit scripts that make managing thousands of torrents across different servers a breeze (...once I remember the script name!):
library torrents-info does a lot of the heavy lifting and it's my main interface into qbit. It uses GNU Parallel, fish functions, and lb torrents; also see allocate_torrents.
NB: if you use these you'll need rb_libtorrent-python3 on Fedora; in Arch the correct Python bindings are already included in the main libtorrent-rasterbar package (other distros might differ again, but installing via the package manager will most likely be easy once you locate the correct one). There are also these scripts, which I mostly only use in an automated way (if you search for the script name in my computer repo you might find a reference in daily.fish or weekly.fish, etc):
https://github.com/chapmanjacobd/computer/blob/main/bin/ (see the qbt_* and torrent_* scripts)
Let me know if you see anything confusing, or if you have a specific problem, and I can try to explain how I would deal with it.
There are a lot of other qbittorrent CLI options too, so you aren't limited to my scripts... I just have less experience with them. But if you don't like what I have, I'd still encourage a deeper look at qbittorrent!
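To make the automation you mentioned upthread concrete (move files on completion, delete on ratio), here's a rough Python sketch against qbittorrent's Web API using the qbittorrent-api package. It's an illustration rather than one of my fish scripts, and the host, credentials, and paths are placeholders:

```python
import shutil
from pathlib import Path

import qbittorrentapi

client = qbittorrentapi.Client(
    host="localhost:8080", username="admin", password="changeme"
)

DONE_DIR = Path("/srv/media/done")  # hypothetical destination
RATIO_LIMIT = 2.0

for t in client.torrents_info(status_filter="completed"):
    # copy the payload out once complete (the blunt version; qbit's own
    # "move on completion" setting is the alternative)
    src = Path(t.content_path)
    dest = DONE_DIR / src.name
    if not dest.exists():
        if src.is_dir():
            shutil.copytree(src, dest)
        else:
            shutil.copy2(src, dest)

    # delete torrents that have hit the ratio limit
    if t.ratio >= RATIO_LIMIT:
        client.torrents_delete(delete_files=True, torrent_hashes=t.hash)
```

Run from cron or a systemd timer, it stays idempotent thanks to the dest.exists() check.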
I appreciate the thoughts, and I hope I don't come across as too fussy for my own good in my response.
I've just had a look at qbittorrent-nox, and while it seems like it might be more functional (certainly better documented), I'm loath to lose the TUI that I am used to. It feels to me like qbittorrent was built client-first, not with a server/client architecture in mind, and so few bittorrent projects provide a modern, clean TUI.
As for transmission-cli, that's another interesting option! Both transmission and deluge were part of my considerations initially. I'm still not certain about either; there are a number of front-ends for transmission which are either unmaintained or actively in "maintenance mode".
I'm quite happy with the idea of a solution in which a specific user is set up with a bashrc snippet that jails them inside a comprehensive TUI allowing simple control of active torrents. I'm going to continue to experiment, and perhaps I'll settle on a solution. Thanks for the input!
I ended up setting up half of the seedbox, and then getting distracted by writing my own Dockerfile and accompanying configuration for running llama.cpp, built with SYCL support for the Intel GPU, on my rack server. That works, although it doesn't feel particularly polished, and I might look at other quantisations of Gemma-4-26B-A4B than the one I am currently using, as well as alternative front-ends to the default one that llama-server provides.
I also finally got around to spending a little time on improving the coherency of my various domains. My primary and secondary domains both have self-hosted services running on a range of subdomains, plus custom REST endpoints for which no fallback web pages existed. My primary site now catches requests to each of the REST-only subdomains that fall outside the [URL]/api/v1/ scope and redirects them to an info page, as well as acting as a friendly failure-and-auto-refresh page for when the above-mentioned self-hosted services are unavailable or offline for maintenance. I'm really happy with how clean the solution feels, and how minimal and uncomplicated it is. Now, when clients try to access the base URL for a subdomain which only serves REST, they get a friendly heads-up and a prompt to navigate to the correct path, and when upstream services are down, they get a friendly heads-up which doesn't even require interaction to "try again".
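Conceptually, the catch-all boils down to something like this minimal Flask sketch for an API-only subdomain (an illustration rather than my actual setup; the info-page URL is a placeholder):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    # real API routes would be registered before this catch-all;
    # anything that reaches here is outside the /api/v1/ scope
    if not request.path.startswith("/api/v1/"):
        return redirect("https://example.com/info", code=302)  # placeholder
    return {"error": "unknown endpoint"}, 404
```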
It's nothing big, but for me it is. I managed to get Let's Encrypt working with HAproxy. It's at the point where auto-renewal of certificates should work without my intervention, and the site uses a secure connection.
I use this for my Immich server when I want to share photos with someone not in my VPN. HAproxy is on a server of mine (with a public IP and DNS pointed at it) that is geographically elsewhere than the Immich one. They are connected through a VPN. So the actual setup is: 1. You access the link I sent you; 2. HAproxy catches the request and redirects it internally, inside the VPN, to the Immich server; 3. You receive the page with TLS terminated by HAproxy (internal VPN communication is over HTTP).
For some people this is daily routine, a minute of their time. For me the whole setup is quite an achievement.
Another project starts tomorrow: I will be integrating an IKEA Uppatvind air purifier into Home Assistant. The purifier is a basic dumb one; I will be adding an ESP microcontroller with WiFi and using ESPHome to communicate with it. There is a GitHub page with a full description and manual (not mine) that I will be using for this.
Hey, this is awesome and no simple feat! I know when I first got into the weeds it was quite overwhelming, so it's really impressive that you figured it out and developed a better intuitive understanding of it. I've used HAproxy and Let's Encrypt in many capacities (professionally and personally) for a good five years now, so if you have any questions, happy to help!
Thanks for the offer! Well, it seems I figured it out completely (for my usage), so I probably don't need any help, at least not right now. I might ping you later if the certificate renewal doesn't work. But I really believe everything is set up correctly: the file for Let's Encrypt is accessible from the internet and will be autogenerated per the HAproxy guidelines.
There are only two things that can crash on me: 1. The cronjob not running at the right time; 2. My "script" not working. The script only calls out to certbot, combines the certificates into one, and restarts HAproxy; I believe it won't fail.
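For what it's worth, the whole script is conceptually just this (a sketch rather than my exact file; the domain and paths are placeholders):

```python
#!/usr/bin/env python3
import subprocess
from pathlib import Path

DOMAIN = "photos.example.com"  # placeholder
LIVE = Path(f"/etc/letsencrypt/live/{DOMAIN}")
HAPROXY_PEM = Path(f"/etc/haproxy/certs/{DOMAIN}.pem")

# 1. renew anything close to expiry (certbot is a no-op otherwise)
subprocess.run(["certbot", "renew", "--quiet"], check=True)

# 2. combine full chain and private key into the single PEM HAproxy expects
HAPROXY_PEM.write_bytes(
    (LIVE / "fullchain.pem").read_bytes() + (LIVE / "privkey.pem").read_bytes()
)

# 3. reload rather than restart, so existing connections aren't dropped
subprocess.run(["systemctl", "reload", "haproxy"], check=True)
```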
Relatively minor, but I've recently put together a mini PC to use as an OPNsense router. I deployed it this weekend and have actually noticed a slight increase in peak download/upload speeds, which I presume is down to its more capable CPU compared to the one in my previous wireless router (now solely being used as a WiFi AP).
I set it up both as a measure for "future proofing" security updates compared to the prior hardware, and as a way to hopefully learn more about networking.
I'm several years into using OPNsense now (after a few years with pfsense) and very happy with both the reliability and the amount of learning it's supported me through.
I did this a couple of years ago myself with a Wyse/Dell thin client with a four-port NIC card installed and have been very happy with OPNSense. Gigabit speeds with tons of headroom performance-wise. Great uptime and no problems. I have a Unifi AP plugged into one of the ports for wireless and mesh it with a second one.
Will never go back to consumer routers, and it was an upgrade from a Ubiquiti ER-X I was having trouble with.
Taking the Go class on Coursera. Python is fantastic, but the appeal of deploying a single binary is strong.
Honestly, this makes Go my absolute favourite language for anything more than a short lived script. Statically linked binaries in a type safe garbage collected language is just magic.
Not my favourite syntax and it breaks me that we can't have (proper) enums or ADTs. But it wins so strongly elsewhere.
Yes, I'm loving it so far. The mental model reduces so much cognitive overhead, and it's great for infra. I'm having that "why didn't I learn this earlier?" moment. Built-in tests and a formatter? Count me in.
I built an API client just last week, and it would have been a perfect use case for Go.
On the syntax, it's definitely throwing me off, especially with variables, but I'll pick that up with enough muscle memory. I think I just need to get over the anxiety from my old C class when I see * and struct lol
“It’s okay, there’s no malloc here.” Haha
Overall I find the architecture really nice for those of us who don’t like complexity.
I looked at Rust, but it seemed far outside my competency with the memory management, and Go seems to be everywhere in infra, so it's a great fit.
Still tinkering with my personal links website. I decided that it would be handy to be able to import images for charts.
I ended up converting the original DOOM source into my lisp language ghoul. It runs, not smoothly, but it runs the first level. I render the level, animate sprites, do collision detection, can open doors, shoot monsters, pick up stuff and get damaged. It was fun to see it actually working. Then I used it to do some performance improvements to the interpreter so it is at least playable! All the computation is in ghoul and I only needed a small go library for rendering a buffer to screen and handling keyboard inputs and sound. The tooling I've made makes it fairly simple to wrap arbitrary go code. It has some limitations, but it mostly just works. I used it to wrap most of the go stdlib so I can throw together a quick http server with lisp code which is kind of neat!
Well a week or two ago I posted about game development and if anyone had any tips or resources, the responses I got were incredibly useful! Definitely a lot to consider, so thank you to everyone who replied.
However, I've decided to do things the hard way, ignore some of the higher-level advice, and dive straight into making the game I want, which will have RPG aspects. Hahaha. I'm pretty confident in my direction and tend to pick things up really quickly.
But I'm also happy to report that I have already programmed a working Match-3 prototype game from the ground up with all the right connections set up for integrating an RPG system into it.
I used the match-3 portion of the game as a learning tool to learn how Godot works, and now that I feel like I have a good foundation, I paused active development to do some over-all roadmapping and planning out the systems.
Currently on my first draft of the GDD; it's a bit wordy, so I'll be condensing it and focusing it more on the prototype rather than the overall game. I'm also using my graphic design experience to make it visually interesting.
Also, completely unexpectedly, I met someone who does chiptune music for indie games, and he cobbled up a 30-second sample for me based on the vibe I'm going for in the game.
Here's an artwork mockup I made with the music overlaid on top. Kind of a rough draft of the vibe I'm going for.
Once I get the GDD done, I'll keep working on the prototype and try to nail down a visual style document.
One of the reasons I want a really good looking GDD is I really want to get the license for the Dungeon Crawler Carl IP, and so I want to make it look like I know wtf I'm doing. haha
But that also means I'm looking for collaborators and fellow fans of Dungeon Crawler Carl to work with if anyone is interested. A programmer or someone familiar with RPG attributes, leveling, and stat systems would be incredibly useful right about now and save me a lot of time from learning as I go. I wouldn't mind having an asset artist either.
I am planning on trying to get the license to use the Dungeon Crawler Carl IP, and I am currently planning for this to be a paid game, so revenue sharing is on the table. However, if I can't secure the rights to DCC, it's still a great game on its own that brings something new that I haven't quite seen done like this. To be clear, the game concept itself isn't required to be tied to the DCC property; it just fits the theme, and DCC would be a great IP match for this game.
And if anyone is interested in taking a look at my GDD so far or trying out the prototype once that's done, let me know.
Outside of that, I just spun up a pwnagotchi on a spare Raspberry Pi Zero 2W, so I'm playing around with that.
The pwnagotchi project looks very interesting, and adorable! Can you tell me a little more about what exactly you can achieve with it? I assume the legality of such a device is... dubious?
I'm no expert, so if someone wants to explain it better than I can, feel free. haha And it's all for personal educational and security auditing purposes only, of course. haha
I'm also just jumping into it, but there are a lot more addons and plugins you can add for it to do other stuff.
But it basically performs deauthentication attacks to kick devices off nearby WiFi networks, sniffs the WPA handshakes as the devices reconnect, and collects those handshakes for later cracking; if one is cracked, then you have the password to that WiFi network.
For me, I was just curious about it, and I also want to test my home and work networks for any vulnerabilities, i.e. whether I can crack my home or work WiFi. And before anyone says anything, I ran it by my work's dedicated IT guy, and he's as curious as I am about what it'll sniff out.
I've had it sitting on my desk for about 3 hours and I already have 8 handshakes from 2 different networks, but I didn't bring a data cable, so I can't yet actually connect to it to pull the handshakes off.
I don't know what I'll do if I actually crack someone's actual WiFi password; I might try to reach out and let them know to update their security? I think most people just think it's neat and collect them like Pokémon.
That's so cool! And it tunes its parameters using reinforcement learning to improve the number of handshakes it captures, or the number of devices it "pwns"? I might try setting one up; I have a bunch of spare Pi Zeros lying around...
Trying (and failing MANY times) to set up wireguard between a home server and a VPS. When configuring the firewall for wireguard and forwarding UDP packets on a specific port to the peer (the home server), I somehow keep donking it up such that my SSH connection is killed and I have to wipe the VPS and start over. Usually trial and error is good enough to dredge through something I don't care about thoroughly learning (firewalls), but this is a punishing process. I just want to host TeamSpeak on my own hardware, with the VPS exposing it to the internet, so I don't have to open any ports on my router!
I’ve also been writing a lot of zig, making a better version of a tool at work on my own time. I usually struggle with project ideas, and I’m quite happy to have something to hack at off and on instead of wondering what to do. I’m excited to name it something stupid so I can savor the eye rolls of my coworkers forcing themselves to use it because it’s better than what we’re using now. Unfortunately it’s heavily leaning on std.posix, which is due to be nuked in zig’s upcoming 0.16 tag. I’m telling myself it’ll be ok and better for the language in the long run, but I’m dreading the looming migration.
I also had trouble understanding wireguard... you might have luck with wg-easy or headscale. I went with tailscale because it is easy to set up, but I think headscale is almost as easy:
https://github.com/wg-easy/wg-easy
https://headscale.net/stable/setup/install/official/
Doesn’t tailscale/headscale require the end user to use the tailscale client to connect to the server? I.e. if my friend wants to join my teamspeak server they’ll need to have tailscale?
It should be similar to wireguard. That is, it depends how you set it up. If you want to map a specific port to the open internet it's a similar situation either way.
The difference is just connecting between your server and your home computer.
Tailscale does have some products (some are free) for exposing services to the open internet without your own server (https://tailscale.com/docs/features/tailscale-funnel), but you don't need to use them. You can use Tailscale like wireguard, because it is running wireguard under the hood, but with hole punching and other features which make it easier and more robust in many scenarios, like not having a static IP address.
You could also do that final part, VPS to public port, via a reverse proxy like Caddy, HAProxy, or Nginx. I think... maybe it is slightly more complicated:
https://old.reddit.com/r/teamspeak3/comments/fm39qx/nginx_reverse_proxy_for_teamspeak/
https://old.reddit.com/r/Tailscale/comments/1cskoya/port_forwarding_public_vps_local_machine/
Well, for low-latency stuff you probably want as few programs in the data path as possible, so it sounds like wg-easy will be a better fit.
Still working on my book tracker. I ended up having enough trouble with duckdb/ducklake that I decided to rip it out and replace it with my original thought, which is a sqlite solution. I'm using litestream and application-level user pinning to solve persistence and concurrent-access woes. For the size of data and the data access patterns I have in mind, the cost of downloading a sqlite DB from object storage isn't a big issue, but it certainly feels dirty, and I definitely fear the edge-case bugs that lead to a user's sqlite DB getting overwritten. I might have to enable object versioning (if my host even has that available); I already have some defenses written in code, and I'm trying to write tests that'll help me catch the edge-case/timing possibilities. Though my e2e test harness is already starting to feel larger than my actual application.
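The kind of defense I mean is roughly a generation counter carried inside the DB itself, so a stale copy can be detected before it clobbers a newer one. This is an illustrative sketch, not my actual schema; the names are made up:

```python
import sqlite3

def bump_generation(conn: sqlite3.Connection) -> int:
    """Increment a monotonic counter stored in the DB on every write cycle."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value INTEGER)"
    )
    conn.execute(
        "INSERT INTO meta (key, value) VALUES ('generation', 1) "
        "ON CONFLICT(key) DO UPDATE SET value = value + 1"
    )
    conn.commit()
    return conn.execute(
        "SELECT value FROM meta WHERE key = 'generation'"
    ).fetchone()[0]

def safe_to_upload(local_gen: int, remote_gen: int) -> bool:
    # remote_gen would come from metadata on the copy in object storage;
    # refuse to publish a DB whose generation is behind what's stored
    return local_gen > remote_gen
```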
Reworked my personal site so that it uses github pages for hosting, but the content is fetched dynamically from my joplin notes at build time, so I can get the best of both worlds — free github pages hosting, but private content. It feels magical, honestly.
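The build step boils down to something like this sketch, assuming Joplin's local Data API (the Web Clipper service on its default port 41184) is reachable on the build machine and its token is in JOPLIN_TOKEN; the output directory is a placeholder:

```python
import os
from pathlib import Path

import requests

TOKEN = os.environ["JOPLIN_TOKEN"]
API = "http://localhost:41184"
OUT = Path("content")  # hypothetical source dir for the site generator
OUT.mkdir(exist_ok=True)

page = 1
while True:
    resp = requests.get(
        f"{API}/notes",
        params={"token": TOKEN, "fields": "id,title,body", "page": page},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    for note in data["items"]:
        # Joplin stores note bodies as Markdown already
        (OUT / f"{note['id']}.md").write_text(
            f"# {note['title']}\n\n{note['body']}"
        )
    if not data["has_more"]:
        break
    page += 1
```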
Inspired by @Juan over here, I just vibe-coded this PNG-to-emoji-art converter this afternoon. Truly we are living in the future.
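The core of such a converter is tiny; here's a sketch of one way it can work (mine may differ): downscale the PNG, then map each pixel's brightness onto a small emoji ramp. The ramp choice is arbitrary.

```python
from PIL import Image

RAMP = ["⬛", "🟫", "🟥", "🟧", "🟨", "⬜"]  # dark -> light

def png_to_emoji(path: str, width: int = 32) -> str:
    img = Image.open(path).convert("L")  # grayscale
    # emoji render roughly twice as tall as wide, so halve the row count
    height = max(1, int(width * img.height / img.width / 2))
    img = img.resize((width, height))
    rows = []
    for y in range(height):
        rows.append("".join(
            RAMP[img.getpixel((x, y)) * len(RAMP) // 256]
            for x in range(width)
        ))
    return "\n".join(rows)

print(png_to_emoji("input.png"))
```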
I finally wrote out some scripts to simplify home finance reporting. The overall workflow involves data in nocodb (which is a layer over some postgresql tables and views) and reporting on that data with metabase. That has been in place for a while.
The new part of this is manipulating the csv exports from various banks / cc providers and uploading the data via nocodb's API. This is working great, except one provider has a bug right now where the date column is blank in their exports. Not much I can do with that, so hoping it'll be resolved in a bit, as the alternative is calling them and having it probably go nowhere. (Or scraping PDFs, but I reeeeeaaaally don't want to do that if I can help it.)
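The upload step is conceptually something like this sketch, assuming NocoDB's v2 REST API with an xc-token; the host, table ID, token, and column mapping are all placeholders:

```python
import csv

import requests

NOCODB = "https://nocodb.example.com"  # placeholder host
TABLE_ID = "mtablexxxxxxx"             # placeholder table id
HEADERS = {"xc-token": "REDACTED"}

# normalize a bank's CSV columns to the NocoDB table's field names
with open("bank_export.csv", newline="") as f:
    rows = [
        {"Date": r["date"], "Amount": r["amount"], "Payee": r["description"]}
        for r in csv.DictReader(f)
    ]

# NocoDB v2 accepts a list of records in one POST for bulk insert
resp = requests.post(
    f"{NOCODB}/api/v2/tables/{TABLE_ID}/records",
    headers=HEADERS,
    json=rows,
    timeout=30,
)
resp.raise_for_status()
```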