What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
I've been writing my own note-taking tool for the past few days.
There's a ton of existing options in that space, from Obsidian to org-mode, but I never managed to stick with one. So I thought "I'm a programmer, why don't I make one myself?".
So now I have a Rust tool called `notelog` (imaginative name, I know!), which I can simply call from the commandline and it'll either take the note directly from the commandline args, read STDIN if provided, or open $EDITOR to let me write a longer note.

The hardest part was getting all the error handling implemented -- there's so much that can go wrong even in such a simple program, and there's still a ton to do, though I'm now at a point where you'd have to actively try to break things for them to go south (like, by setting the $EDITOR env variable to `rm -rf /` or something).

Each note is written to a Markdown file and stored in an automatically-created folder hierarchy based on the current year and month. The files all have YAML frontmatter, so the entire thing is extensible.
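A note on disk ends up looking something like this (a simplified sketch -- the exact field names may differ), filed under a path like `2025/05/`:

```markdown
---
# illustrative sketch -- the actual frontmatter fields may differ
created: 2025-05-17T14:32:00+02:00
tags: [foo, bar]
---

The note text itself goes below the frontmatter, as plain Markdown.
```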
As a proof of concept, the current version can store a list of tags for each note, but I plan on adding other things like "status: todo/done/delegated/canceled/etc" later on. Right now it also just stores the note; if I want to find it again, I have to use `grep`.

I ended up using Generative AI to auto-generate almost all 1500 lines of it (using the AugmentCode extension for VSCode), though the process required some handholding on my part -- the AI won't refactor code on its own, and it doesn't have good 'taste' yet on when to factor out or inline functionality, or refactor in general. Or maybe that's just a skill issue with my prompting technique.
Still, it's quite amazing that technology has progressed to the point where you can write a 50 line spec in English and get a 450 line Rust file out that does everything you outlined in the spec -- mere minutes after handoff. I had to tell it to break the generated code up into smaller modules, but I basically had the initial prototype working using that single (though thorough) prompt. The AI does a lot of legwork once it is running -- writing the code, writing tests, running tests, debugging errors, looking up documentation, cleaning up things the linter didn't like, even 'speaking' JSON-RPC to `notelog` in order to debug an issue with the MCP server I asked it to build for the tool.

The MCP server allows me to use NoteLog from inside my IDE or using any other MCP client. Whenever I feel like it, I can just '/log some note with tags +foo and +bar' or have the LLM do it for me ("summarize the points we just discussed into a notelog note").
Some people say that AGI is right around the corner, and I'm starting to believe they're right. You can just hand off tasks to the AI agent, do something else for 5 or 10 minutes, and expect it to have implemented the entire feature you've requested when you get back. (I do run the IDE in a virtual machine though, just in case it does decide to `rm -rf` everything.) Heck, you can even ask it to come up with a draft spec for new functionality based on vague ideas, and then review the result, and eventually just tell it to "make it so". It won't be too long until something like it can do a whole day's work with a single guiding prompt.
It's not all rainbows and sunshine though. I'm a bit ambivalent about how capable the darn thing is -- you just start to trust it, start glossing over the text it outputs (there's sooo much of it, though each completed task ends with a summary of what it has done), and I find myself getting less vigilant over time.
In two later instances, the agent silently did (minor) things that were against the prompt/spec I formulated. In retrospect, the spec was wrong on these points, but ideally it shouldn't have deviated from it. At one point it got into a doom loop of trying to read the source code of a dependency because of an inscrutable compiler error; that turned out to have been caused by it doing a web search and finding outdated documentation. It also likes to do LLM things ("I left that legacy function in there because it might be useful later on") once in a while.
Still, I'll probably continue to use GenAI for programming. Programming is fun, but after doing that for 20+ years, I'm starting to feel that getting results fast is even more fun than just coding. Using the agent is an enormous speed-up over doing everything myself, especially writing tests or debugging compiler errors -- but at least for now, I still abhor the idea of vibe coding and truly leaving everything up to the AI agent.
The final downside of the whole GenAI programming shtick I noticed is that sometimes you become so preoccupied with whether you can write some program quickly, that you don't stop to think whether you should write that program at all...
(Edit: Removed a stray sentence that I forgot to cut out)
Story of my life. Actual code is always like 30% of the overall project size. And I’m always finding new ways to break things!
I enjoyed your write-up of using AI to help with the coding. I experimented this week with using an LLM (Claude) to help me write a bit-more-than-basic bash script to submit computing jobs on the cluster I use for research. It basically set me up with a nice little two-step pipeline of scripts I can use to submit jobs, have log files generated to monitor the jobs, and create a nice little organized file structure for saving the outputs. It's all things I could have cobbled together myself, but it did so in a few minutes instead of the afternoon it would have taken me on my own to brush up and get it working smoothly. So I'm definitely interested in finding other ways to fit it into my workflow. It was the first time I've really felt it saved me a pain in the ass.

The main thing I'm interested in now is interfacing it more directly with my workflow. I'm interested in what you said you did here, using it from within your IDE. Is it able to actually create files, or do you have it just summarize your conversation and then separately you copy that into a notelog file? I'd be curious to learn more about your setup.
Thanks! I tried to make it more entertaining than usual, and I wasn't sure whether that worked or just ended up being awkward.
The Notelog program I wrote (co-authored?) with the AI Agent exposes a Model Context Protocol server -- a program that the LLM can start and directly talk to in order to get things done.
In my case I just tell the LLM to log a note or summarize the conversation (which LLMs are good at), and then it calls Notelog with the note or summary. Notelog looks at the title of the summary and creates an appropriately named file in my notes directory before writing the text to the file, so I don't have to copy anything myself.
If you are on Windows or Mac, you can experiment with Claude Desktop (Tutorial) to get a feel for what is possible. The filesystem server they set up in that tutorial might already allow you to have Claude create files useful for your workflow.
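The wiring on the client side is just a bit of JSON telling it how to launch the server; for the filesystem server from that tutorial it's roughly something like this (the directory path is a placeholder for whatever you want to expose):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/notes"]
    }
  }
}
```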
The tool is now in a state that is actually somewhat useful, so I decided to just go ahead and release the source code, in case anyone is curious.
A few remarks:
So I use fancontrol to (surprise!) control the fans in my computer (some of which happen to be zip-tied to my GPU at the moment... long story).
fancontrol is easy-ish to set up. You just run `sudo pwmconfig` in a terminal, and that guides you through the whole process. At the end, you get a lovely config file that looks like this:
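(Abridged here, with placeholder curve values and placeholder paths for everything except the GPU -- but the shape is the same:)

```
# /etc/fancontrol -- abridged sketch, not the actual generated file
INTERVAL=10
DEVNAME=hwmon2=amdgpu hwmon3=k10temp hwmon4=nct6795
DEVPATH=hwmon2=devices/pci0000:00/0000:00:03.1/0000:27:00.0/0000:28:00.0/0000:29:00.0 hwmon3=devices/pci0000:00/0000:00:18.3 hwmon4=devices/platform/nct6775.656
FCTEMPS=hwmon4/pwm2=hwmon2/temp2_input
FCFANS=hwmon4/pwm2=hwmon4/fan2_input
MINTEMP=hwmon4/pwm2=40
MAXTEMP=hwmon4/pwm2=80
MINSTART=hwmon4/pwm2=150
MINSTOP=hwmon4/pwm2=60
```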
I'll spare you from having to understand what every part of the config file is, and just say that the important parts are:

- `DEVNAME` declares the names of the devices that fancontrol is interacting with. So `amdgpu` is my graphics card, `k10temp` is my processor, and `nct6795` is my motherboard. Notice that each device name is given a `hwmon` number. `amdgpu` is assigned to `hwmon2`, for example.
- `DEVPATH` declares the absolute paths of the devices mentioned above. So `amdgpu` can be found at the path `/sys/devices/pci0000:00/0000:00:03.1/0000:27:00.0/0000:28:00.0/0000:29:00.0`.
- Each `hwmon` number is actually a symlink to its absolute path. So if I run `ls -al /sys/class/hwmon` I will see that `hwmon2` is a symlink to `amdgpu`'s absolute path: `../../devices/pci0000:00/0000:00:03.1/0000:27:00.0/0000:28:00.0/0000:29:00.0/hwmon/hwmon2/`
Cool, so when we run `sudo pwmconfig`, that command figures out what devices respond to a PWM signal. It then determines the name, absolute path, and `hwmon` number of each of those devices, and writes that information to the lovely fancontrol config file above.

fancontrol then uses that config file to figure out what temperature sensors, on what devices, it needs to monitor, and where to find the files that report those temperatures. It also knows what fans to bind to what temperature sensors, what speeds to run those fans at, and where to find the files that determine fan inputs.
(Oh right, if you didn't know, basically all of your hardware on Linux writes and reads data from text files. So if I want to know how hot my GPU's hotspot is, I can run `cat /sys/class/hwmon/hwmon2/temp2_input` in a terminal and it will print out the temperature. Similarly, I can control my GPU's fan speed by writing a value (0-255) into its `pwm` file, such as with the command `echo 128 > /sys/class/hwmon/hwmon2/pwm1`. Remember: "everything is a file".)

Unfortunately, fancontrol has one glaring issue:
The `hwmon` numbers change all the time.

So when I set up fancontrol for the first time, `amdgpu` is assigned to `hwmon2`. When I reboot my computer though, it might be assigned to `hwmon3`. fancontrol itself has no way of dealing with this, so it just dies at boot. The "solution" is to re-run `sudo pwmconfig` and go through the entire setup again. Or, I can manually edit fancontrol's config file to replace every incorrect instance of `hwmonX` with `hwmonY`.

Both of those solutions suck, so I wrote my own. Behold this beauty:
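(Well, a sketch of it anyway -- the real script has the actual device paths and fan-curve values, but the idea is:)

```bash
#!/usr/bin/env bash
# Sketch of the approach, not the exact script: figure out which hwmonX each
# device got this boot, then regenerate the fancontrol config with those numbers.
set -euo pipefail

# Devices I care about, keyed by the name they report in /sys/class/hwmon/*/name,
# mapped to their stable device paths (only the amdgpu path is real here; the
# other two are placeholders).
declare -A DEVPATHS=(
  [amdgpu]="devices/pci0000:00/0000:00:03.1/0000:27:00.0/0000:28:00.0/0000:29:00.0"
  [k10temp]="devices/pci0000:00/0000:00:18.3"
  [nct6795]="devices/platform/nct6775.656"
)

# Resolve the hwmon number each device was assigned on this boot.
declare -A HWMON=()
for hw in /sys/class/hwmon/hwmon*; do
  name=$(<"$hw/name")
  if [[ -n "${DEVPATHS[$name]+x}" ]]; then
    HWMON[$name]=$(basename "$hw")   # e.g. "hwmon2" or "hwmon3", depending on the boot
  fi
done

gpu=${HWMON[amdgpu]} cpu=${HWMON[k10temp]} mb=${HWMON[nct6795]}

# Write a fresh config using the resolved numbers (the fan-curve values are made up).
mkdir -p "$HOME/.config/fancontrol"
cat > "$HOME/.config/fancontrol/fancontrol" <<EOF
INTERVAL=10
DEVNAME=$gpu=amdgpu $cpu=k10temp $mb=nct6795
DEVPATH=$gpu=${DEVPATHS[amdgpu]} $cpu=${DEVPATHS[k10temp]} $mb=${DEVPATHS[nct6795]}
FCTEMPS=$mb/pwm2=$gpu/temp2_input
FCFANS=$mb/pwm2=$mb/fan2_input
MINTEMP=$mb/pwm2=40
MAXTEMP=$mb/pwm2=80
MINSTART=$mb/pwm2=150
MINSTOP=$mb/pwm2=60
EOF
```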
In this script, I declare the names and absolute paths of the devices I want fancontrol to control. The script then automatically determines their hwmon numbers and writes a new fancontrol config file to `$HOME/.config/fancontrol/fancontrol`.

Of course, fancontrol's config file actually lives in `/etc/fancontrol`, so I have to symlink it from my home directory to the correct path before restarting fancontrol's systemd service. All of that is handled with the `,set_fan_control_config` function in my `.zshrc`:
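(Again sketched rather than verbatim -- the generator script's name below is made up:)

```zsh
# In ~/.zshrc -- sketch, not the exact function
,set_fan_control_config() {
    # regenerate the config with the current hwmon numbers (script name is hypothetical)
    "$HOME/.config/fancontrol/regenerate.sh" || return

    # point /etc/fancontrol at the freshly written file, then restart the service
    sudo ln -sf "$HOME/.config/fancontrol/fancontrol" /etc/fancontrol
    sudo systemctl restart fancontrol.service
}
```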
I still haven't automated this. So if I boot my computer and the `hwmon` numbers are wrecked, I still have to run `,set_fan_control_config` manually. I'm pretty sure I could automate calling that function by writing my own systemd unit file that checks if fancontrol failed on boot and runs the setup again if it did, etc. I've never done that before though, and this is ~~cursed~~ good enough for now.

It's probably more accurately a learning thing than a real project, but I've been getting deep into the weeds of mesh shaders and voxel rendering. Mesh shaders are pretty cool and the hardware seems to support them fine, but not all parts of the toolchain do. For Vulkan they're also pretty light on the documentation front, so they take quite a bit of piecing together. Maybe the most "fun" gap I've found is that reading per-primitive data (something that seems super cool) is impossible in Slang (the shader language I'm using) and not really supported by HLSL either (possible, but exceptionally hacky). In my opinion this is because the spec for them took the wrong approach. Every other case has the input strides of data automatically bound to the outputs at those locations in the prior step, but this case wants the strides baked in, unlike anything else (e.g. vertex shaders don't have a distinction between per-vertex and per-instance data, because that gets bound during pipeline creation). This also makes it harder to reuse fragment shaders than it should be.
I've also ended up way deeper down the rabbit hole of investigating various approaches to voxel rendering than is sane, especially given that I don't have anything that actually needs optimizing yet. I've also rewritten that part a bunch of times now, including doing and then abandoning greedy meshing entirely, because I didn't like how inelegant the solutions to t-junctions were.
I am quite liking mesh shaders though. I've not used task shaders yet, but that's next up. If anyone has experience with these things, mesh and task specifically, I'd be interested in learning more from someone with a clue. Stumbling through very fragmented and incomplete documentation is not exactly the ideal approach, especially because some of the bits of reference code that I've found aren't even spec compliant (and as a result don't work on my hardware).
So a year ago I posted a comment to a similar thread, indicating that I had started working on a custom app for Bonnaroo using React Native and what I could learn from ChatGPT.

Well, I was able to create a working version. And it worked extremely well. So this year I decided to go nearly all-in on continuing development, including adding features I thought of last year but didn't have time to implement, overhauling the repository on GitHub, and getting acquainted with releases and with Apple's App Store Connect and their TestFlight system.

The result? Archlight, an app for streamlining your Bonnaroo experience. The 2025 version is extremely close to being finished, needing only tweaks to the app and notification icons ahead of Bonnaroo releasing updated map and vendor information.

Next year I'm brainstorming UX decisions around supporting other festivals besides Bonnaroo, and including smartwatch support as well. It's crazy to see how this passion project has evolved compared to some of the first development screenshots I have from the first month of creating it.
Just a mini thing from me. I'm attending (just as a visitor, not a speaker) my first Maker Faire in Prague this weekend.

Since I'm going by train, I wanted to take my Steam Deck with me and play on the way.

But I also wanted to showcase my 3D printing/modelling skills, so I'm taking my LTT Tech Sack with the over-the-shoulder strap I made for it. The Tech Sack is my daily carry for a bunch of things - LTT stubby screwdriver, a bunch of USB cables, powerbank, ID card, credit card, money, earbuds... You get the idea.
So I went and made a strap adapter to clip my Steam Deck to my already existing 40mm wide strap for the LTT Tech Sack. This adapter can clip the Steam Deck to anything up to 40mm wide and around 3-3.5mm thick.
A few years ago I wrote a Python script to select the secret santas that my family does. I've been making it a little better each year, but it's always been something I run locally. I've started the process of creating a web app around it, as a bit of a learning exercise for me and as a way for people to update wishlists so their secret santa can get ideas.

I'm not a developer and end up having to research and figure things out every step of the way. I'm using FastAPI with SQLite for a small database. I can get it to run locally in dev mode so it has auto-reloading. I can build a Docker container that runs the app with the SQLite database on a volume, but then it doesn't have auto-reload while I'm making code changes.

I'm looking for some guidance on how to set up my local development environment while still having the right pieces in place to eventually deploy to a Digital Ocean droplet. I've looked into VSCode dev containers, docker compose with watch, docker compose with -f for merging files, scripts, and environment variables. None of them quite seem the right fit for what I think I want. Anyone have any suggestions, or ideas on how to rethink my approach to this? Thanks.
I haven't used FastAPI before, but uv might be helpful for your use case. It integrates with FastAPI and supports building Docker containers from it as well. Basically, you would use uv to manage the dependencies and dev environment, and when you're ready to deploy it on the droplet, you'd build a Docker image to run it from.
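A rough sketch of what that could look like, assuming the FastAPI app object is `app` in `main.py`, dependencies (including uvicorn) are declared in `pyproject.toml`/`uv.lock`, and nothing project-specific beyond that:

```dockerfile
# Rough sketch of a uv-managed image -- file names, module name, and port are assumptions
FROM python:3.12-slim
RUN pip install uv
WORKDIR /app

# Install dependencies from the lockfile first so they cache as their own layer
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project --no-dev

# Then copy in the application itself and install it into the environment
COPY . .
RUN uv sync --frozen --no-dev

# Assumes the FastAPI app object is `app` inside main.py
CMD ["uv", "run", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```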
Containerization is highly opinionated, with many different approaches. Below is just my standard approach for personal projects. Enterprise configurations can get hairy depending on a ton of factors that I don't think should apply to you.
The first thing is that I just accept that I'm not going to use the same container for both dev and deployment. I may make a common base image they share, or just abstract out the dependency bits in a way that lets me reuse them between different base containers. But dev containers are just that: for dev. When I want to build something to deploy, that's a separate command or pipeline, depending a bit on how deep into CI/CD I feel like getting on that project. In general, I don't try to do auto-reload stuff on the containers themselves. I'll make it so my dev container can run the app in watch mode, but that's it.
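For a FastAPI + SQLite project, the dev side of that can be as simple as a bind mount plus uvicorn's own reloader. A sketch (the service name, paths, port, and module name are all assumptions, not anything from your project):

```yaml
# compose.dev.yml -- dev-only sketch
services:
  app:
    build: .
    # override the production command with a reloading dev server
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    volumes:
      - ./:/app          # bind-mount the source so --reload picks up edits (assumes WORKDIR /app)
      - ./data:/data     # keep the SQLite file outside the container image
    ports:
      - "8000:8000"
```

Then something like `docker compose -f compose.dev.yml up` for day-to-day work, while a separate plain `docker build` (or CI job) produces the image you actually deploy to the droplet.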
Where possible I try to fully package my app into a single complete item that is distro-agnostic. How possible this is ultimately ends up depending on the toolchains I'm using, but I like my "prod" container to just be something like "alpine + copied-in folder + run executable from folder". It also decouples me from needing to containerize in cases where it's cheaper not to. For example, in AWS, containers can be a fair bit more expensive than a bare EC2 instance that just downloads a zip, unzips it, cds inside, and runs `./run`.

Another reason I wouldn't suggest trying to set up watch + compose together is that you're likely to fill your hard drive fast. While it is possible to deal with, the default behavior of docker builds kind of litters your machine with piles of intermediate layers. Having every edit generate more layers due to watch mode triggering more builds would eventually add up. I use podman, my preferred alternative to docker, quite a lot because I do all of my development in containers to avoid globally installing random deps, and every few months I purge everything to reclaim what usually seems to be hundreds of gigabytes of space. The vast majority of that space is just a ton of stale layers from things like updating deps in dev containers.
I don't know much about Digital Ocean in particular though. Most of my professional experience is related to AWS. If I were answering entirely from a professional perspective, one that happens to be focused on getting the customer to AWS, the answer would actually be just not to use containers at all. If my previous team were tasked with an HLD for your project, it would probably be APIGW + Lambda + DDB backed by CFN via CDK. I'm not suggesting at all that you do that, though.