What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
I'm pretty proud of this nodejs loader I built, dynohot. If you've ever used `import.meta.hot` in Vite or Webpack, it's like that but directly in nodejs. My team is using it to hot reload our development GraphQL server, for example. It's crazy to me that, for more than a decade, the best we've had is nodemon, which just sends a SIGHUP to your process whenever a file changes. Now we can reload a subtree of the module graph and its dependents to instantly update only the parts of the application that changed. This is something I've wanted to build for years but never got around to, because I thought "someone else will surely do it" and no one ever did.
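To make that concrete, here's the kind of self-accepting module this enables. This is a minimal sketch in the Vite style of the API; exact dynohot semantics and the registration flag may differ, so check the README:

```js
// counter.mjs: a module that opts into hot replacement.
// Sketch in the Vite-style import.meta.hot convention that dynohot mirrors.
export let count = 0;
export const increment = () => { count += 1; };

if (import.meta.hot) {
  // Accept updates to this module in place: when the file changes,
  // the new version is linked in and importers keep running.
  import.meta.hot.accept();
}
```

Run it under the loader (something like `node --loader dynohot main.mjs`, depending on your node version) and edits land without a process restart.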
It was fun getting deep into the JavaScript specification since I had to implement the linkage algorithm in a novel way to make relinkage of a running application possible.
I've found it tough to get the word out, though. I really do consider it a high-quality module that would be helpful to a lot of teams, but I don't have a strong Twitter presence or anything.
I finally managed to implement drawing tablet support with pen pressure in my infinite canvas drawing application. https://apos.itch.io/mitten
It's not perfect, since I had to disable dotnet's trimmer, which tripled the app's size. I've been meaning to support tablets since 2021 but kept running into various issues. The next step will be to support the feature on macOS and Linux too.
I don't have a project to share currently, just want to say that I love reading these.
Also, I can definitely see rich business owners coming here to steal ideas 🙂
I can't remember if I said this, but I got my website audiobookcovers.com to a completed state. I don't think I will ever fully stop working on it, but I am satisfied with it right now and am not actively working on it. The final stack is a handful of microservices and a Postgres database hosted on the smallest Hetzner virtual server.
My next project is hopefully something I can make some money with. Commerce7 is a common point of sale in the wine industry, with powerful APIs and an app ecosystem. I am hoping to create a set of tools to sell to wineries that improve and extend Commerce7 in some way. If there are possible legal or address issues with an order, Commerce7 will mark it as "quarantined". These orders have to be manually reviewed by an employee before they can be sent for fulfillment, and someone has to remember to check for them. If one gets missed, the customer will not get their wine, which leads to the customer calling months later and being rightfully pissed.
Commerce7 has a thing called "tasks". They can be assigned to a specific employee with a due date, and that employee will get email notifications and a message on the POS website until they address the order. I built an API that consumes C7 webhooks for every order that is created and checks whether it is quarantined. If it is, it uses the API to create a task assigned to a specific user. I built a very janky version of this over a year ago using Power Automate to receive the webhooks; my new version is much more robust and flexible.
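The core of the flow is small. As a sketch only: the endpoint paths, payload fields, and the "Quarantined" status value below are hypothetical stand-ins, not the real Commerce7 API.

```js
// Sketch of the webhook flow. Field names, endpoints, and the
// "Quarantined" status value are placeholders for the real C7 API.
async function handleOrderWebhook(payload, c7Token) {
  const order = payload.order;
  if (order.status !== 'Quarantined') return;

  // Create a task assigned to the employee who handles quarantines.
  await fetch('https://api.commerce7.example/v1/task', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${c7Token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      title: `Quarantined order ${order.number} needs review`,
      assignedTo: 'quarantine-handler-user-id',
      dueDate: new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString(),
    }),
  });
}
```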
I tried building it on AWS Lambda and RDS Postgres, but that ended up being much too heavyweight a solution, and I will admit I don't know enough about AWS yet to really administer it. I rebuilt it using Cloudflare Pages, Workers, and D1. I cannot believe how powerful and flexible Cloudflare Workers are. As long as you are doing something that works within its limitations, I would much rather work with Workers than any other platform. As a tip for anyone getting into the Cloudflare stack: deploy your workers as Cloudflare Pages, even if you are not doing static site hosting on that worker. The Pages flavor of Workers seems newer and is more powerful: it handles file-based routing without you having to roll your own router or find one that works with Workers.
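Concretely, the file-based routing means a handler gets its route purely from its location in the `functions/` directory. The `onRequestPost` export is the Pages Functions convention; the path, D1 binding name, and table here are just from my hypothetical setup:

```js
// functions/webhooks/order.js
// Served at POST /webhooks/order purely by file location.
// "DB" is whatever D1 binding name you configure; the table is made up.
export async function onRequestPost({ request, env }) {
  const payload = await request.json();

  // Persist the raw event before doing anything else with it.
  await env.DB.prepare(
    'INSERT INTO webhook_events (body, received_at) VALUES (?, ?)'
  )
    .bind(JSON.stringify(payload), new Date().toISOString())
    .run();

  return new Response('ok', { status: 200 });
}
```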
In this process, I also found the itty selection of node packages, which are absolutely spectacular. I used itty-router for a bit before I switched to Pages' built-in routing, and itty-fetcher is everything I want out of axios, with a much lighter package and Worker compatibility.
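Usage looks roughly like this (from memory, so check the itty-fetcher docs for exact options; the base URL and paths are illustrative):

```js
import { fetcher } from 'itty-fetcher';

// itty-fetcher parses JSON and throws on non-2xx responses for you,
// which is most of what I was reaching for axios to do.
const api = fetcher({ base: 'https://api.commerce7.example/v1' });

const orderId = '12345';
const order = await api.get(`/order/${orderId}`);
await api.post('/task', { title: `Review quarantined order ${order.number}` });
```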
It has been a fun process, and I am hoping it can be a success financially. With how cheap Cloudflare Workers and D1 are, it will only cost $5 to run even with a large number of wineries using it. If I can create a handful of these types of services that are similarly cheap to run, and get a handful of wineries subscribing, I can hopefully make this side gig somewhat profitable.
Just wanted to say thanks for audiobookcovers.com. Started consolidating my stuff using AudioBookshelf on my home server a bit ago and it's been a great resource. It's crazy how hard it is finding some covers for the audiobook instead of the epub sometimes.
My side project YOShInOn (an RSS reader) just got its own side project code-named FraXiNUs.
I cut and pasted a lot of code from YOShInOn to make something that is a bookmark manager, web crawler, personal data lake, and image tagging application. I've loaded about 55,000 images into it since last Saturday. I'm planning to make a data structure and UI for tags, where a tag can be something like "this is a picture of a person" with a positive polarity, a negative polarity ("this is not a picture of a person"), or an indeterminate polarity ("the AI model wants you to look at this one").
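Something like this shape, as a sketch (field names are provisional, not final):

```js
// Sketch of the tag data structure; everything here is provisional.
// polarity: 'positive'      -> "this is a picture of a person"
//           'negative'      -> "this is not a picture of a person"
//           'indeterminate' -> "the model wants a human to look at this one"
const tag = {
  image: 'images/12345',  // document key of the tagged image
  label: 'person',
  polarity: 'positive',   // 'positive' | 'negative' | 'indeterminate'
  source: 'human',        // or 'model', for AI-suggested tags
  createdAt: new Date().toISOString(),
};
```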
I've always wanted to add bookmark manager and web crawling capabilities to YOShInOn, but I also don't want to mess up a system that is already working just fine. Once I understand the problem, I think I'll either cut and paste a lot of code back into YOShInOn, or import the contents of YOShInOn into FraXiNUs, or something like that.
One thing I am enjoying is that AQL, the query language for ArangoDB (https://docs.arangodb.com/3.12/aql/), is really a good target for code generation because it lets you do so much with subqueries.
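For example, a generator that emits subquery-based filters stays readable. This is a sketch with made-up collection and field names:

```js
// Sketch: generating an AQL query with a subquery from JS.
// Collection and field names here are made up for illustration.
function imagesWithTag(label, polarity) {
  return {
    query: `
      FOR img IN images
        LET matches = (
          FOR t IN image_tags
            FILTER t.image == img._id
               AND t.label == @label
               AND t.polarity == @polarity
            RETURN t
        )
        FILTER LENGTH(matches) > 0
        RETURN img`,
    bindVars: { label, polarity },
  };
}
```

Because a subquery is just parenthesized AQL, a generator can nest filters arbitrarily without rewriting the outer query.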
Over the past few evenings, I’ve been getting “back” into R, after only working with it at a surface level in university quite a while ago, in order to help analyze my dad’s recent addition to his house: a photovoltaic installation.
One very surprising and interesting finding: depending on the time of year, the north-facing modules make up almost 50% of total production, which is something we as non-experts had absolutely never expected!
It’s still a rather short script, but it has been (mostly) fun to write and expand.
All that’s really left to do now is some QOL work (e.g. passing options to the script for PDF/PNG output, which timespans to plot or exclude, etc.), but the elephant in the room is an issue neither of us can fix: data sourcing.
The current workflow involves being logged in on the installation company’s website, manually selecting the date(s) to export, waiting for the server to very slowly generate an Excel (not CSV!) file, and downloading it. Whenever we tried to export datasets spanning more than about two weeks, it would hang and, after a couple of minutes, give up entirely.
Ideally this could be an automated process, but alas, it isn’t. Then again, the use case isn’t production-grade; it originated out of curiosity-driven experimentation. At least the raw data is available in some format, and digitally…