What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
For the past 1-2 years, I've been working on and off on a Chrome browser extension to help me manage my tabs and bookmarks. According to the time-tracking app I use, I've spent around 1228 hours of dedicated, focused work (excluding breaks) on this project. That includes the time I spent at the start reading a book on JavaScript, because I was not familiar enough with it.
At first, I was just doing this for myself because I could not find a tabs & bookmarks management extension that met my needs. My plan was to reuse an existing open source project and modify it freestyle, just to have a quick and dirty tool that got the job done. But I soon realized that was not going to work, and that I needed to actually learn the programming language the project was written in. I ended up starting a new project from scratch.
For quite a while, as I said, I was just doing this for myself, but one day it clicked that what I was building could actually be useful to other people. There were plenty of such browser extensions available, and some of them had more than 100'000 users, even though one of them seemed to have been abandoned by its developer. Being a part-time minimum-wage worker with quite a low income for where I live, I saw this as an opportunity to start a business and started dreaming that it could become successful. I told myself: there is clearly a need for such a product, hundreds of thousands of people already use similar ones, some of which are literally abandoned, so why not launch your own? Who knows, if you end up with a large, worldwide user base, then even if only a small fraction of them become paying customers, you would still earn much more than you currently do at your part-time minimum-wage job.
I thought, the infrastructure is already there, the internet is already there, payment service providers are already there, people all have computers, it's a digital product, you need no warehouse, you need no large upfront investment, people have a real need for such a product, what are you waiting for? This felt like an awesome opportunity that I would not have had had I lived only a few decades ago.
So this became the focus of my life. I would go to my minimum-wage job during the day and work on my program during the evening. Then during the evening and the night. And then during the evening, the night, and the morning. I would still try to get some sleep here and there, enough to remain productive, although sometimes I would get so deep into it that I would barely get a few hours of sleep before going back to my day job.
I hope to be able to ship the first version of this program before June, or maybe July. I've had to strip away a lot of features and requirements to be able to finish on time, and even then I am not sure that I will make it. Why June? Because that's around the time when the Chrome team will start to roll out Manifest V3, the latest version of the extensions platform. This is a breaking change: extensions still using Manifest V2 that are not updated to MV3 will stop working around that time.
Remember those abandoned extensions with tens if not hundreds of thousands of users? Their users will be looking for an alternative, and that's why I'm in a hurry to ship my product before then, so that I can pick them up when they start looking elsewhere.
I think I'll probably make a dedicated post about my product when I release the beta version. I would love to get you guys' opinion on it and get some feedback, and maybe fix bugs that I haven't noticed yet.
To talk about the product itself:
It aims to give users the tools to organize their tabs and bookmarks when they are conducting long research sessions on their computer.
It also aims to make the user's computer faster by reducing the browser's memory consumption. Having many open tabs can consume a lot of RAM, and when the computer has to fall back on swapping to disk, performance takes a hit, since the disk is much slower than RAM.
My tab manager aims to offer many features that you won't find all in one place anywhere else.
A major advantage of my program is that it helps users avoid ending up with many near-duplicate bookmarks folders. Typically, you save your tabs into a bookmarks folder, resume your research the next day by reopening some of those bookmarks, and then save your tabs again into a second folder. You end up with two folders: some bookmarks only in the first, some only in the second, and some in both. After a few days you have many folders with bookmarks duplicated across them, and everything becomes very disorganized.
I'd say this program would be useful to master's and PhD students, academics in general, software engineers, journalists, etc., basically anyone who does a lot of research on the internet and ends up with dozens, if not hundreds, of tabs open at any one time.
Here is a list of the main features (some are still under development and others probably won't be implemented for some time):
I am not done yet, but I hope to show you guys a beta version soon, and hopefully, version 1.0 before June.
Thanks for reading, and take care folks!
It sounds like quite a project! Maybe you can get some of those users, but keep in mind that they’re likely using that extension because it’s free.
What can you put off to start beta testing with some friendly test users well before June?
Thanks! Yes, I agree with you, which is why I intend to offer the base functionality for free under a freemium model, just like the abandoned extension with 100'000 users. I also intend to offer the paid version at a very affordable price (so little that you wouldn't notice the difference at the end of the month), since it's "just an extension" and not a revolutionary product like ChatGPT. I mean, even ChatGPT Plus is cheaper than Netflix's most expensive plan, so I'd better not ask too high a price.
Striking just the right balance between offering features for free and enticing users to upgrade with some features put behind a paywall is going to be tricky...
I was thinking about some unusual variations on the typical business models; what do you guys think about them?
I was thinking of offering some mix between:
In any case, I think I should probably try different models with different cohorts of users, see which one fares better, and then switch everyone to that model.
Edit - As for the beta test (it just occurred to me that I forgot to answer that, my bad...), I need to make a few more important changes, fix some bugs that break important functionality, and implement or integrate some sort of free-trial mechanism, but then I should be ready to release a beta version :)
I’m not sure why I feel like giving advice here since I don’t have a lot of business experience, but here goes:
If you want to make a go of this as a business, I think you need to somehow get in touch with some real users who would actually pay for this kind of software because they really need it, and then write new features primarily for them. Then try to find more users like that.
The business challenge is how to find them among the much larger crowd who would use software like this for free, but don’t really need it. Having a popular free extension is useful for getting their attention, but free users are also a big distraction.
As someone who uses one of those free tab management extensions and a few other similar tools that have a freemium business model, I can give you my two cents on this.
The cycling between free and paid features would be a deal-breaker for me in both of your scenarios, because the number of days I get the paid features matters less to me than consistency. I wouldn't want to keep track of when features stop working for me and when they start again; that would break my usual research workflow. Your assumption that users would upgrade to get rid of the inconvenience might be faulty, since they might just uninstall and look at other alternatives. I need to have adopted a tool as part of my regular routine before I'll consider paying for it; if it's too frustrating to deal with during the trial period, I'll simply stop using it.
A much better option for me would be a clear free trial that tells you which features are paid and won't be available after the trial period. I've also seen some tools implement a refer-a-friend scheme to unlock paid features; you could perhaps explore that as a way to give free users access to paid features.
I'm guessing you're referring to Aether? I know of it but never used it myself, my username just has to do with the word itself rather than any particular reference.
Hey, thanks for the feedback! Sorry for not being clear: what I meant is that I would offer a full, clean free trial first, and only after that would I cycle between the base and full versions. So you get the full version for, say, a month. Then, instead of staying on the base version until you pay to upgrade, you are downgraded to the base version but periodically upgraded to the full version again for free, for a limited time, before being downgraded once more, and the cycle repeats.
So, instead of just getting the base version after the free trial, you regularly get access to the full version. You get a repeating free trial in a sense. Would that clarified scenario be a deal-breaker for you?
PS - Don't you use some sort of P2P alternative to Tildes and Reddit, by any chance?
Thanks for clarifying! A repeating free trial wouldn't work for me, because then I wouldn't integrate the paid features into my regular workflow. Since research is something I have to do consistently, I like having a particular process that always just works.
I would second the suggestion @skybrian gave to talk to actual users because a very small percentage of your free subscribers are likely to convert to paid subscribers. Free is good for reaching a wider audience but ultimately, the ones who pay would be the users you would want to focus on and spend the most time understanding how to meet their needs.
Very interesting point about the MV2 extensions that are going to be turned off. I half wonder if Google will postpone it again and give you an extension.
Also, yeah, it would be a great business opportunity to get a bunch of extensions out to substitute for abandoned MV2 extensions in the next few months. Thanks for the idea!
The Chrome team stated on the official website that they would deprecate MV2 on pre-stable versions of Chrome in June, and then on the stable version starting in July, where they say the gradual rollout would take at least a month (they wrote 1-X months somewhere).
You're welcome! I hope you'll be successful, but bear in mind that many of these extensions will probably be updated to MV3 in time. If you intend to launch paid extensions, would you mind talking about their monetization? Are you going to go the freemium route?
I had a meeting with my professor today and it looks like we will be moving forward with our agrivoltaic (agriculture under solar panels) project. He asked me to design an interface to streamline the process between AutoCAD, SketchUp, and PVsyst.
I am a bit worried because my greatest achievement in programming so far is a GPA calculator in Python. I have no idea how to link these programs together automatically, let alone code it.
It makes sense that you're worried. At the same time, it means you are going to learn a lot!
Without knowing the details: don't try to reinvent the wheel too much. Depending on how exotic what you want to do is, there are often people who have done similar things before, and sometimes it's so common that there are ready-made libraries out there for a lot of it.
Heck, the way things are in the Python world, you could probably already
pip install agrivoltaic
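More seriously, there genuinely are ready-made pieces for parts of this. pvlib, for example, covers a lot of the solar-modelling side; here's a minimal sketch of pulling clear-sky irradiance for a site (the coordinates, timezone and dates below are made up for illustration):

```python
# Minimal pvlib sketch: clear-sky irradiance for a made-up site and date range.
import pandas as pd
from pvlib.location import Location

site = Location(latitude=46.2, longitude=6.1, tz="Europe/Zurich", altitude=400)
times = pd.date_range("2024-06-01", "2024-06-02", freq="1h", tz=site.tz)

# Returns a DataFrame with ghi/dni/dhi columns (W/m^2), Ineichen model by default.
clearsky = site.get_clearsky(times)
print(clearsky.head())
```

The AutoCAD/SketchUp/PVsyst glue will be more bespoke, but the point stands: before writing the heavy parts yourself, check what already exists.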
Nothing super complex, but this is a kinda neat project I wrote over the weekend:
I have a homelab setup with an OPNSense firewall. I wrote a simple webapp that lets anyone on my LAN use it to toggle between 3 VPNs from a list.
How it actually works is that it grabs the connecting client's IP and adds it to one of 3 aliases in OPNSense. The IPs in these aliases are then routed through a respective VPN (Canada, US, NL) at the firewall level. This way I can get behind a VPN on any device with a web browser without needing to install any client software.
Do you think something like this would work with a clash proxy?
I've recently been using my Steam Deck and it's such a pain, since it doesn't like to use the proxy rules from the network manager. However, I don't want to force everyone through the proxy either.
VPNs are out of the question as those are throttled quite quickly here.
I'm not sure. I've never used clash proxy, but I see there's an OPNSense plugin for it. You can try it out, see if it supports selective routing.
Whoa, this is pretty cool. I'd be interested to see more details if you end up writing a blog post about it or something.
I've considered making a script to spin up a public web server container and open a port to it when I want to share files with a friend, and a secondary script to close the port.
The way I wrote my code is way too bespoke to warrant a whole blogpost, but I can give a basic rundown here:
On OPNSense, I set up 3 WireGuard connections to the respective endpoints from my VPN provider. I also made 3 IP-list aliases, one for each VPN, and then set up floating firewall rules so that any IP within one of those aliases gets routed through its respective VPN gateway.
For the webapp itself, I just wrote up something quick using Python + Flask and the OPNSense API. It presents 4 radio buttons (Canada, US, NL, None). When you connect, the webapp first queries the API to see if your IP is in any of the aliases, then shows which VPN (if any) you're currently using. When you select a VPN, it goes through each alias, checking whether your IP is there and deleting it if so. Then it appends your IP to the chosen alias and applies the configuration.
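If it helps, stripped way down, the whole thing boils down to something like this. The host, API key/secret and alias names are placeholders, and the alias_util endpoint paths are written from memory, so double-check them against the OPNSense API docs:

```python
# Minimal sketch of the VPN-toggle idea: Flask + the OPNSense HTTP API.
# Host, credentials and alias names are placeholders; the alias_util endpoint
# paths are from memory, so verify them against your OPNSense version.
import requests
from flask import Flask, request

OPNSENSE = "https://192.168.1.1"                 # firewall address (placeholder)
AUTH = ("API_KEY", "API_SECRET")                 # API credentials (placeholder)
ALIASES = {"ca": "vpn_ca", "us": "vpn_us", "nl": "vpn_nl"}  # one alias per VPN

app = Flask(__name__)

def api(path, payload=None):
    # POST when a payload is given, otherwise GET; verify=False for a self-signed cert.
    if payload is None:
        return requests.get(f"{OPNSENSE}/api/{path}", auth=AUTH, verify=False).json()
    return requests.post(f"{OPNSENSE}/api/{path}", json=payload, auth=AUTH, verify=False).json()

@app.route("/vpn/<choice>", methods=["POST"])
def set_vpn(choice):
    client_ip = request.remote_addr
    # Remove the client from every alias first...
    for alias in ALIASES.values():
        api(f"firewall/alias_util/delete/{alias}", {"address": client_ip})
    # ...then add it to the chosen one ("none" leaves it out of all aliases).
    if choice in ALIASES:
        api(f"firewall/alias_util/add/{ALIASES[choice]}", {"address": client_ip})
    return {"ip": client_ip, "vpn": choice}
```

The real version also does the "which VPN am I currently on?" lookup when the page loads and applies the configuration after changing an alias, but that's the gist of it.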
Please forgive my ignorance, but why would you need 3 VPNs?
Just in different locations. The Canada one is in the same city as me, low latency, and it's what I use to torrent Linux ISOs
US/NL are for geolocked content. Some stuff is only accessible in the US or in Europe, so I use those as needed.
I work with gaming MediaWiki wikis, and I help a lot of admins set up data tables so they can query their game's information. The thing that gives people the most difficulty is crafting systems: they require two tables in a one-to-many relationship with each other, and most wiki admins have zero SQL experience, so explaining why you need this and how it works can be difficult. A lot of wikis just use columns like ingredient1, ingredient2, ingredient3, ..., quantity1, quantity2, quantity3, ... instead of separate Items and Recipes tables with ingredient, quantity, and product columns in the recipes table.
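To make that concrete, the normalized layout I try to get admins to use looks roughly like this. It's sketched here with Python's sqlite3 and invented item names just to show the shape; on an actual wiki the same structure lives in whatever table extension the wiki uses rather than raw SQL:

```python
# Rough shape of a normalized crafting schema: one Items table and one Recipes
# table in a one-to-many relationship. Item names are invented examples.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE items (
    name TEXT PRIMARY KEY
);
CREATE TABLE recipes (
    product    TEXT NOT NULL REFERENCES items(name),
    ingredient TEXT NOT NULL REFERENCES items(name),
    quantity   INTEGER NOT NULL
);
""")

db.executemany("INSERT INTO items VALUES (?)",
               [("Iron Ore",), ("Wood",), ("Iron Sword",)])
# One row per (product, ingredient) pair instead of ingredient1/ingredient2/... columns.
db.executemany("INSERT INTO recipes VALUES (?, ?, ?)",
               [("Iron Sword", "Iron Ore", 3), ("Iron Sword", "Wood", 1)])

# "What do I need to craft an Iron Sword?" becomes a single query.
for ingredient, quantity in db.execute(
        "SELECT ingredient, quantity FROM recipes WHERE product = ?", ("Iron Sword",)):
    print(ingredient, quantity)
```

The payoff is that "what do I need to craft X?" and "what can I craft with Y?" are both single queries instead of checking ingredient1 through ingredientN by hand.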
Anyway, I decided to make an example game and then create a wiki for it to be used as a reference and give people some template code. So I built a fake RPG crafting system in React: https://sorcerer.river.me/
It's vaguely fun to "play," I'm definitely guilty of spending some time clicking through it for no reason haha. All the art is drawn by me.
Cool, this is an incremental game/demo. You might consider adding a button to turn it into an auto-idle game.
haha, could do! What would you recommend for the mechanics there? Every n seconds, automatically mine 0-20 of each base resource and randomly craft as many products as possible, allowing the user to prioritize each product?
I think that's reasonable. Depending on how deep you want to go with this, you can get some UX ideas from https://kittensgame.com/web/
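Mechanically, the tick you describe could stay very simple. Here's the logic sketched in Python, since I don't know your React setup; the resources, recipes and numbers are placeholders:

```python
# Logic-only sketch of the auto-idle tick described above (the real game is
# React/JS; resource names, recipes and amounts here are placeholders).
import random

resources = {"wood": 0, "stone": 0, "ore": 0}
# product -> {ingredient: quantity}; the priorities list decides crafting order.
recipes = {"plank": {"wood": 2}, "brick": {"stone": 3}}
priorities = ["plank", "brick"]          # user-configurable ordering

def tick():
    # Every n seconds: mine 0-20 of each base resource...
    for name in resources:
        resources[name] += random.randint(0, 20)
    # ...then craft as many products as possible, highest priority first.
    crafted = {}
    for product in priorities:
        cost = recipes[product]
        while all(resources[i] >= q for i, q in cost.items()):
            for ingredient, qty in cost.items():
                resources[ingredient] -= qty
            crafted[product] = crafted.get(product, 0) + 1
    return crafted
```

Exposing the priorities list in the UI would then be the main user-facing knob.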
Interesting, thank you! I've heard of Cookie Clicker, but I also heard how addicting it is, so I've never played it. But I used to play Progress Quest quite a bit! This is some neat game design.
I finally started cloud backups for files on my NAS. I set up Kopia to use Cloudflare R2 (which is S3 compatible) to snapshot a directory on my NAS and only upload changes/new files.
I use SyncThing to push client PC files to the NAS (and between clients).
The only annoying thing about Kopia (server) is that it appears to support only one cloud repository at a time, so I'll have to spin up a new instance for each repository.
Coincidentally, I've also been revising my backup system — though my particular setup is a couple of Linux machines using Borg to write to rsync.net. Previously, running ad-hoc Borg commands involved copy/pasting environment variables, but now a brand-new shell script manages everything for me.
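The script doesn't do anything clever. Sketched here in Python with subprocess rather than the actual shell, and with a placeholder repository URL and passphrase command, the idea is roughly:

```python
# Sketch of the wrapper idea in Python instead of shell: set Borg's environment
# once, then run whatever borg subcommand was asked for. The repo URL and
# passphrase command are placeholders.
import os
import subprocess
import sys

env = dict(os.environ,
           BORG_REPO="ssh://user@rsync.net/./backups/laptop",   # placeholder
           BORG_PASSCOMMAND="pass show borg/laptop")            # placeholder

def borg(*args):
    subprocess.run(["borg", *args], env=env, check=True)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        borg(*sys.argv[1:])                       # e.g. ./backup.py list
    else:
        # Default: create a timestamped archive of $HOME and prune old ones.
        borg("create", "--stats", "::{hostname}-{now}", os.path.expanduser("~"))
        borg("prune", "--keep-daily=7", "--keep-weekly=4", "--keep-monthly=6")
```

Extra arguments get passed straight through to Borg, so ad-hoc commands like list or check no longer need the environment-variable copy/paste dance.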
How has your experience with Kopia been? Borg has been pretty good for my use cases. The one big limitation I'm aware of is that you need Borg running on the remote end as well, so you can't easily put backups in key-value based cloud storage like S3/R2. (rsync.net has also been very good to me, and they have Borg installed on their servers, but it does limit the cloud providers I can consider.)
Currently happy with Kopia. I want to double-check that I could retrieve my data if everything burned down, so I might give that a try soon.
I've also heard that Borg may not use the best encryption? I used it for a little while until I read into that.
Regarding cryptography, my impression is that they've done their homework but are constrained by poor historical decisions. Off the top of my head, the big one is encrypting data using AES-CTR: this works, but it comes with extra assumptions about the rest of the system behaving well, assumptions that aren't very safe to make. And unfortunately, changing Borg's encryption algorithm is difficult to do in a backward-compatible way.
My anxiety about it is currently at an acceptably low simmer. Though if an attacker was actively modifying my cloud data in an attempt to break Borg, I'd be a lot more worried! Now I'm curious to look into how Kopia does things under the hood.
I've recently gotten into generative AI models and have been exploring their potential for some scientific problems in my field. Mostly been doing a lot of reading, but I've also started to collect a data set to use as training data for either a variational autoencoder or a DDPM. Mostly just playing around at this stage.
I'm building a little toy amplifier using a PAM8403 amplifier module. The electronics is very straightforward because it's all on the board. I just need to add an input socket, some power, and some speakers. This is 9 solder connections to the board, and a few solder connections on the other bits, although some of that soldering has already been done for me.
The thing that is going to be tricky is putting it into a hardware box so it looks nice. In the past I've just crammed everything into an enclosure that fits, usually plastic, and drilled holes that were "close enough". This looks fine and it works but it's a bit hokey and I want this to look nicer. I've got a nice aluminium enclosure, a very nice big aluminium knob, and some nice sockets. So I'm using callipers to make sure everything is square and symmetrical. It's a bit of learning for me, but it's fun.
Still hacking on the FraXiNUs image sorter which I wrote about here
https://tildes.net/~comp/1ewv/what_programming_technical_projects_have_you_been_working_on#comment-c9f7
I mostly got short sessions in this week. The main goal has been converting from async to synchronous code, and that's done. I went from "that's easy" to learning a lot about Flask, to thinking I was crazy to be rewriting it, to "it's done." Except it wasn't quite done, because I needed to set up a reverse proxy to make the app accessible via Tailscale, and I also wanted the front web server to serve the images to take the load off the back server. I got all that working, and now the little bit left is changing the image URLs it generates so they look like
http://example.com/image/059e1a13-1a61-5580-a0ed-a98a22acbb26.webp
instead of
http://example.com/image/059e1a13-1a61-5580-a0ed-a98a22acbb26
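That should amount to little more than putting the extension in the route and in the generated links. Roughly, and simplified (this isn't the actual FraXiNUs code, and the image directory is a placeholder):

```python
# Simplified sketch of the URL change (not the real FraXiNUs code): accept
# /image/<uuid>.webp and generate links with the extension included.
from flask import Flask, send_from_directory, url_for

app = Flask(__name__)
IMAGE_DIR = "/srv/fraxinus/images"        # placeholder path

@app.route("/image/<uuid:image_id>.webp")
def image(image_id):
    return send_from_directory(IMAGE_DIR, f"{image_id}.webp")

def image_url(image_id):
    # Called from within a request/template context;
    # yields e.g. /image/059e1a13-1a61-5580-a0ed-a98a22acbb26.webp
    return url_for("image", image_id=image_id)
```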
I've had FraXiNUs 2 running alongside the original for a few days, tomorrow I expect to be using it for all my image sorting.
I still want to put my YOShInOn RSS reader through the same process, but I need to get all of its dependencies working in WSL2 first, which I don't think will be too bad. I think I'm going to start that process by writing some new classification code for FraXiNUs in a Jupyter notebook, something easy like telling whether pictures were taken inside or outside.
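For the inside/outside idea, the lazy first attempt I have in mind is zero-shot CLIP via Hugging Face transformers, roughly like this (the model name and prompt wording are just a first guess):

```python
# Rough zero-shot "indoors vs outdoors" check with CLIP via Hugging Face
# transformers; the model name and prompts are just a first guess.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = ["a photo taken indoors", "a photo taken outdoors"]

def indoors_or_outdoors(path):
    image = Image.open(path)
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    return labels[int(probs.argmax())], float(probs.max())
```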
Before I do that, though, I am going to get Celery up and running in WSL2 and write something that makes smaller images for galleries and thumbnails. Starting out, I was really happy with performance on a LAN without doing any of that, but sending images in the wrong (upload) direction of my DSL connection is painful, and it looks like I can get almost a factor of 10 in performance without a lot of hassle. Then there is getting Celery to do the other "batch job" stuff like crawling web pages, extracting links, downloading images, etc. And of course the browsing interface for BIGtags is absolutely minimal, so making that better will be a priority.
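The resizing task itself should be tiny, something on the order of this (the broker URL, output naming and size are placeholders):

```python
# Sketch of the Celery thumbnail job; broker URL, naming and size are placeholders.
from pathlib import Path

from celery import Celery
from PIL import Image

app = Celery("fraxinus", broker="redis://localhost:6379/0")   # placeholder broker

@app.task
def make_thumbnail(src, max_size=(512, 512)):
    # Write foo.webp -> foo.thumb.webp next to the original, keeping aspect ratio.
    src = Path(src)
    dst = src.with_name(src.stem + ".thumb.webp")
    with Image.open(src) as im:
        im.thumbnail(max_size)            # resizes in place, preserves aspect ratio
        im.save(dst, "WEBP", quality=80)
    return str(dst)
```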
I had an idea for trying to port VVVVVV to the Playdate, since the source and the assets are open, the game doesn't feel too intensive for the hardware, and flipping gravity with the crank seems like it could be fun. AFAIK the game is built on C++ ported from Flash, which the device should be able to compile, and there's a library to poke around in. I guess I should start with the Playdate Dev Forum and see how feasible this is.
I'm still very much a beginner so nothing complex, but I'm working on setting up some automation that connects with an existing command line application for creating videos using the planetarium software Stellarium.
Right now I'm working on something that will automatically set up a sunset-to-sunrise timelapse for any given day and location. While there are existing modules that will get those values for me, I'm thinking it'll be fun to write that part myself.
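For reference, the existing modules really do make it a couple of calls; with astral, for example, getting that evening's sunset and the next morning's sunrise looks roughly like this (the coordinates are made up, and the returned times are UTC):

```python
# Sunset of one evening and sunrise of the next morning via the astral package
# (made-up coordinates; times come back as UTC datetimes).
import datetime

from astral import LocationInfo
from astral.sun import sun

site = LocationInfo("Somewhere", "Region", "Europe/London", 51.5, -0.12)

night_start = sun(site.observer, date=datetime.date(2024, 6, 21))["sunset"]
night_end = sun(site.observer, date=datetime.date(2024, 6, 22))["sunrise"]
print(night_start, "->", night_end)
```

Writing the solar math myself is the fun part, but this at least gives me something to check my numbers against.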
I use Obsidian for keeping notes for my novels. Part of Obsidian's appeal is its extremely moddable design: there are tons of themes, plugins, and so on.
I spent pretty much all day yesterday fine-tuning a "snippet," which is a CSS-based override file to specify what exactly I need the theme to look like. However, since this snippet is a file I use with all my writing projects, and those projects have a diverse set of needs, what I really did yesterday was re-write the snippet from scratch to make it more easily moddable by me in the future. I'm pleased with it!
I’ve been helping a family member learn to code. We’ve been working on remaking a website I use often that is from the early 2000s. The website gets new content regularly but hasn’t been updated since the 2000s.
We've remade it to be more modern and added features (like, you know... search!). When we feel good about it, we plan on reaching out to the owner of the site to see if they'd like our new version. I feel weird about that, and I'm honestly not sure about the ethics of it. But it's been fun and worthwhile just for the project's sake. So I don't know, we'll see.
After about a year of web scraping and data cleaning, I finally created 510k.fyk. The website enhances the FDA's public medical device database by providing predicate device information for 510k devices.
Work :/
Not that work isn't interesting... but it's still work. Hits differently than a hobby project does.
I have been working on implementing a UDP protocol in Zig, and last week I decided to step back and look at things from a higher level. I did end up taking a step back and mulling things over, but no real progress.
In my spare time I have still been playing around with Zig and lurking on ziggit.dev and the Zig Discord server, and I learned a couple of non-obvious but useful tricks!
Zig has destructuring with tuples, arrays, and vectors. This and anonymous structs are nice to have for multiple return values while hacking, but you'd probably want to break anything anonymous out into a properly named struct long-term.
You can capture the particular error that occurred in errdefer.