What programming/technical projects have you been working on?
This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?
My personal workout app is now in beta, see https://www.interval-workout.com/
It's not supposed to be yet another SaaS product. Instead, it's supposed to help me with my workouts. I've been doing bodyweight interval training for years, and I somehow dislike all the apps that I've tried. My requirements are so simple, and yet no app (that I know of) fulfills them. I just want something that's quick to configure, works on all my devices, and doesn't track me. Plus, it was an excuse to do some frontend stuff.
So, there you go. Feel free to use it as well. Feedback is welcome, though I must say I probably won't accept feature requests... unless I'd want to use the feature myself.
This is a really nice, clean app.
Thank you :)
I recently lost my saves for a few older games after an OS reinstall because I forgot to back them up. That motivated me to make a simple backup manager. So far it has categories to sort games, a list of directories to back up for each game, and a way to restore the backups and optionally point the backup files to new paths if needed.
I know there are plenty of tools that make game save backups but they have too many bells and whistles for my taste. I wanted something simple that just makes/sorts save backups. I'm also debating a way to have scheduled backups but haven't decided on the best way to do that.
You'd be hard-pressed to find something easier/nicer than a GitHub repo and a cron job that runs git add -A; git commit -m 'new'; git push origin master every X hours or whatever you want. Then it's all online and your history is preserved forever.
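A minimal sketch of what I mean (the path and schedule are made up, adjust to taste):

```bash
#!/bin/bash
# backup-saves.sh -- commit and push the save folder
# crontab entry, e.g. every 6 hours: 0 */6 * * * /home/you/bin/backup-saves.sh
cd "$HOME/game-saves" || exit 1
git add -A
git commit -m "autosave $(date -Iseconds)" || exit 0   # nothing new to commit is fine
git push origin master
```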
That would be really good for preserving the main folder. I'm taking save directories and compressing them to store locally. I'd like to be able to schedule backups of directories but skip any that haven't changed since the last backup, to prevent redundant files. Unless git has a way to do that I'm not aware of. I only have a surface-level understanding of git.
This is almost the entire purpose of git. Tracking changes to the filesystem (and merging other people's changes).
Yeah -- you almost certainly want git; it only commits files with changes -- that is its whole purpose.
You might look at Syncthing or rsync too, based on your comment.
I think git would do something close to what I'm wanting, but the program as it is now compresses all the files and lets me give them unique names, then restores backups from the compressed files and prompts for potentially different paths to account for changing systems or install locations. Git is the best option for detecting changes in the source files, but I want to do other things with those files rather than just uploading them to a repo.
I ended up just checking the last-modified date of all the files to be backed up and comparing it to the saved date from the last backup. I might still keep the archived files in a GitHub repo, but I wanted more control and automation than git gives me by itself.
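Roughly like this, as a sketch (all the paths are made up):

```bash
#!/bin/bash
# Skip the backup if nothing changed since the stamp file was last touched.
SAVE_DIR="$HOME/.local/share/mygame/saves"
STAMP="$HOME/.backup-stamps/mygame"
if [ -f "$STAMP" ] && [ -z "$(find "$SAVE_DIR" -newer "$STAMP" -print -quit)" ]; then
    echo "no changes since last backup, skipping"
else
    mkdir -p backups "$(dirname "$STAMP")"
    tar -czf "backups/mygame-$(date +%F).tar.gz" -C "$SAVE_DIR" .
    touch "$STAMP"   # record when this backup happened
fi
```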
I've been rebuilding the company's video hosting server. More specifically the background script which does all of the hard work.
After a chat with everyone on here about whether AV1 could be used as the primary format, I've settled on AV1 first, with webm as the fallback. Lots of testing later, and we're struggling to find a single device under a decade old that this combo doesn't work on. To my amusement, everything we've tried has streamed the AV1 version, from an iPhone 12 through to Linux on Firefox.
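(A sketch of the general idea, not the actual production script -- the flags here are illustrative:)

```bash
# AV1 primary (mp4 container streams fine in current browsers)
ffmpeg -i master.mxf -c:v libsvtav1 -preset 6 -crf 30 -c:a aac -b:a 128k stream_av1.mp4
# VP9/webm fallback for anything that can't decode AV1
ffmpeg -i master.mxf -c:v libvpx-vp9 -crf 32 -b:v 0 -c:a libopus stream_vp9.webm
```

Listing the AV1 source first in the video element means capable browsers pick it, and everything else falls through to the webm.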
I've just finished updating the comment-at-timecode portion; it's now practically real-time for someone to leave a comment, and it logs the time they left it at. This will be useful for reviews. It's just the BITC and watermark overlay left to do, really.
This project is because we have over 4,800 company videos in-house. These include shows that we sold and aired back in 2004. We keep masters, but we want them to be streamable to prospective buyers, plus for Content ID. No one wants to push around a master video that's anywhere between 20GB and 140GB per file.
On top of this, my work colleague who also haunts Tildes, wrote the title card extractor with Python and Tesseract which will become part of the upload process. Auto transcription with Whisper is implemented, which also generates a VTT caption file. A crude VTT editor has also been made so humans can fix the transcription, although most of the work is already done for them.
Now, if anyone knows how to grab the Time Code data track from a video file and convert that into a Burnt In Time Code (BITC) overlay in ffmpeg, I'm all ears. Broadcast Master time codes run from 9:40:00 and the show starts at 10:00:00 and I still don't know why (I've never looked it up), but the guys using the system would like this! There is a title card with a countdown clock, then black and the start of the show. It's weird we're still doing this in 2024 when tape is long dead.
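(Roughly what I imagine the drawtext route would look like, if the timecode tag on the masters is readable -- the font path and frame rate here are guesses, and I haven't proven this against our files:)

```bash
# Pull the embedded start timecode from the first video stream
# (MOV/MXF masters usually carry it as a stream tag).
TC=$(ffprobe -v error -select_streams v:0 \
     -show_entries stream_tags=timecode -of default=nw=1:nk=1 master.mxf)
TC_ESC=${TC//:/\\:}   # drawtext needs the colons escaped
ffmpeg -i master.mxf -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:\
timecode='$TC_ESC':rate=25:fontsize=48:fontcolor=white:box=1:boxcolor=black@0.5:\
x=(w-text_w)/2:y=h-text_h-20" -c:a copy bitc.mp4
```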
Oh yeah, this is all written in Bash! Incron watches an upload folder and a web page (nginx) drops in the video upload, with a form submission for the options to use to build out the completed master web page. It's a few moving parts.
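The watch hook is just an incrontab one-liner plus a handler script, roughly (paths simplified, not the real script):

```bash
#!/bin/bash
# process-upload.sh -- invoked by incron once a finished upload lands.
# incrontab entry ($@ = watched dir, $# = file name):
#   /srv/uploads IN_CLOSE_WRITE /usr/local/bin/process-upload.sh $@/$#
UPLOAD="$1"
logger -t video-pipeline "new upload: $UPLOAD"
# hand off to the encode/caption steps, e.g. the AV1 pass:
ffmpeg -y -i "$UPLOAD" -c:v libsvtav1 -preset 6 -crf 30 -c:a aac "${UPLOAD%.*}_av1.mp4"
```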
There's no real stack used, just things that work: PHP/php-fpm 8.2, nginx, incron, Bash and FFmpeg. I think the only web dependency is hosted jQuery, but I could even grab that locally.
Earlier this week, a relative who went to see the eclipse mentioned a discussion with friends of theirs who noticed a pink glow on the horizon. One of them thought it might be Rayleigh scattering and the group found a paper whose title seemed to support this (not open-access and no author pre-print available, unfortunately, so I haven't read it myself).
That inspired me to try to write a little skydome renderer for eclipses to see if I could corroborate this theory via simulation. It's been fun refreshing my knowledge of atmospheric scattering and absorption, the solar spectrum and blackbody radiation, multiscatter volume rendering, and spectral rendering.
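(For context, the physics being tested: Rayleigh scattering strength goes as $\beta(\lambda) \propto \lambda^{-4}$, so over the very long horizontal light paths near the horizon the blue end gets scattered away and the transmitted light skews red/pink. Whether that alone accounts for the eclipse glow is exactly what I'm hoping the simulation can check.)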
Sounds cool and I have been wondering about the shine in the sky too -- Have any links to the paper or your work?
Thanks! The paper I was referred to is "Sky Color Near the Horizon During a Total Solar Eclipse".
Regarding my work, I'm still experimenting. I only started writing my little test renderer on Tuesday evening and haven't managed to reproduce the effect with it yet.
I have access! Here’s a copy in the spirit of sharing. :) Good luck with your research!
Oh, awesome! Thank you so much!
I am currently using Godot to make a minigolf mobile game. The courses are procedurally-generated with an ever-growing pool of parts that I'm modelling in Blender.
Ironically, despite the relatively no-frills art style I've gone with, I've learnt a lot about shaders trying to give the ball a shadow. Unfortunately, I couldn't get things exactly right, so I ended up just using a moving decal.
You should check out the AI Horde plugin for Godot if you haven't -- kind of slow unless you have 100,000 kudos for real-time gen. https://godotengine.org/asset-library/asset/1463
This is cool too, but it looks like it might have a pricing model now (if you need to make skyboxes): https://skybox.blockadelabs.com/
I appreciate the recommendation! I'll check them out, but it has been a long-time goal of mine to grow some artistic ability, so these will probably only make for placeholders :P
Continuing work on my text adventure game written in Go. At this point it's more of a tech demo as I figure out how to organize my code (not to mention write it since Go is new to me) and do things like create dialogue boxes, draw ASCII art, etc. But I'm happy with the progress because slow progress is better than no progress!
Repo: https://github.com/jd13313/GoHaveAnAdventure
Changelog (with screenshots): https://github.com/jd13313/GoHaveAnAdventure/blob/master/CHANGELOG.md
I think I'm very close to having the bulk of the boilerplate stuff I want done. Next step will be looking into options for taking user input.
I've been wanting a dedicated, low-power media server in my home for a while now, and I've finally given myself the push to start. For the last year or so, I've been running Jellyfin on my main PC to act as a media server. All my TV shows and movies sit on one 10-year-old hard drive with no backups or redundancy of that data, and I've been anxious about that drive dying, but I was too busy with my job to do the research on building something out.

I finally made the time, and I'm just about to buy the hardware: an HP EliteDesk 800 SFF PC and 2x4TB WD Red Plus drives for storage. The HP already has a 256GB SSD internally, so I'll use that as the boot drive and hope to run the WD drives in a RAID config. Now I just need to figure out whether I want something like YAMS to easily configure everything, or to configure it manually and learn as I go.
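From what I've read, mirroring the two drives would look roughly like this (device names are guesses -- check lsblk first):

```bash
# Create a two-disk RAID1 array with mdadm and put a filesystem on it.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # persist across reboots
```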
I've been rebuilding my media server recently with manual configuration. Kicking up a local server to just do simple media hosting was easy. Configuring my whole media management stack to be just the way I want has been work though. May be worth your time to spin YAMS up first and see if it does what you want.
Specifically, I've been trying to work out Docker networking so that I can keep some containers in their own network namespace behind a VPN. Docker as a whole is new to me, so that's been its own learning process, and I'm still not totally sure how best to configure the network side.
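The pattern I'm aiming for is roughly this (image names are just examples, and the VPN credentials are omitted):

```bash
# A VPN container owns the network namespace (gluetun is one common choice).
docker run -d --name vpn --cap-add=NET_ADMIN \
    -e VPN_SERVICE_PROVIDER=mullvad qmcgaw/gluetun
# This container joins the vpn container's namespace, so all its traffic
# exits via the VPN; any ports must be published on the vpn container instead.
docker run -d --name qbittorrent --network=container:vpn lscr.io/linuxserver/qbittorrent
```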
I recently got a second 6TB drive to act as a mirror in a btrfs pool; that's been a huge peace of mind. The old drive was NTFS, so I used rsync to transfer the data to the new btrfs drive before configuring the pool.
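The rough shape of the migration (device names and mount points are placeholders):

```bash
# Copy everything off the NTFS drive onto the new btrfs filesystem first.
rsync -avh /mnt/old-ntfs/ /mnt/media/
# Add the second 6TB to the filesystem, then convert data and metadata to a mirror.
sudo btrfs device add /dev/sdb /mnt/media
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/media
```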
Yeah from my reading, YAMS basically walks you through setting everything up and connecting it together, though as far as I understand, it's not containerized. It'd be nice to have everything containerized through Docker just to make it easily reproducible if I want to change my hardware.
I read a bit about having everything networked through Docker but my previous failed attempts at getting Jellyfin running through Docker have discouraged me a bit from using it haha. I'd definitely like to explore it more in the near future.
I'm no expert on this and have only done the absolute basics on my own YAMS installation, but it's definitely containerised! It even includes Portainer.
Oh that's great! I don't know how I managed to miss that fact, think my eyes may have just skipped past it.
I had been working on getting my 3D printers more automated, and just finished that. The last thing to do is build an OctoPrint plugin that adds a YouTube stream to the dashboard.
Now that Stable Diffusion 3 and the new Tencent 3D model weights have been released, I'm going to try to revisit my Wikipix AI plugin, which garnered an INCREDIBLE amount of hate/interest on the Wikipedia subreddit. Plans are to move to ComfyUI and SD3, add 3D modelling, and build better SD flows for different types of images: https://www.youtube.com/watch?v=fD38XIkJ81A&feature=youtu.be
A scalable hosting service for secure LLM inference. Lots of fun trying to serve the greatest number of concurrent sessions for the greatest number of LLMs with no lost conversations, and the fewest trips to cold storage possible, with session multiplexing among lots and lots of users.
I've got density way, way higher than what I'd ever have guessed possible. Enough that my whole org can run on two consumer GPUs in separate enclosures, each at about 50% average load during peak hours, and acting as hot failovers for one another. There's still room for optimization, but we're right at the limit of whether the proverbial juice is worth the squeeze.
Some of the most fun optimization work of my career.
What's your stack for this?
llama.cpp for inference, plain old Python for orchestration, nginx for TLS termination and traffic balancing, and Redis for distributed session caching.
With some careful decisions about what to cache and when, you can minimize the contention quite a bit.
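The layout is roughly this, as a sketch (model path, port, and slot count are placeholders):

```bash
# One llama-server per box (the two GPUs live in separate machines), each
# serving several concurrent slots; nginx upstreams to both so each box
# acts as the other's hot failover.
./llama-server -m models/chat.gguf --host 0.0.0.0 --port 8081 \
    --parallel 8 --ctx-size 32768
```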
I finally pulled the trigger on getting a home server going. I suspect @artvandelay and I both found similar resources, since I also ended up getting an HP EliteDesk 800 PC (I got a mini instead of the SFF one).
I got proxmox and home assistant running surprisingly quickly. Took maybe 20 minutes.
But the whole process has me feeling... uncomfortable?
I have learned absolutely nothing so far, and have gained no tools for navigating home server administration with any confidence. I have no clue.
Most of the home server/homelab documentation has felt like it's either geared towards people who don't care about the details, or towards people who care about every single detail.
I am in the middle and still feel very lost.
I tried to fix some of my old phones
I had some old phones with broken displays that I wanted to get running again to use as secondary devices. The phones worked fine other than the broken screens; I'd used both in the past year by mirroring them to my computer with scrcpy (one on the day I did the repairs).
But I somehow managed to brick both devices?
I replaced the battery and display on the first phone, but was never able to get it to show any sign of life.
I replaced the display on the second, and everything seemed to be working initially. But after a full boot, the device suddenly turned off and has not even vibrated since.
About the home server - if you have the time, maybe keep it running as-is for now but do a lot of tinkering and trial & error. I wouldn't go for the full final setup yet.
It's good to try different approaches right now to see how things work for you and what suits you best.
Also try to think of all the use cases you may want from this server and set them up.
I run Linux on bare metal, for example, and no sandboxing for anything. If I wanted to use some specific things I would need to set up Docker, though. Maybe I should have done that in the first place?
Try to set up everything you think of and see how it works for you. By doing that you will learn a lot and will end up picking the final solution that works best for you.
I 100% feel your pain w.r.t. documentation, in that it's either geared towards people who don't care about the details or people who care about every single detail. I guess I shouldn't be too surprised, since much of the homelab hobby is tinkering and learning things yourself, but I'd still appreciate some hand-holding as I explore things haha.