12 votes

What programming/technical projects have you been working on?

This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?

30 comments

  1. [7]
    lynxy
    Link

    In an attempt to look more hirable in the Austrian tech sector (and therefore get a work visa so I can actually live with my partner), I've been playing around with spinning up Kubernetes instances (K3s on a set of spare RPi 5 devices). It's a tool I've never really had cause to use in any real capacity, as my DevOps experience comes largely from personal projects: plenty of self-hosting, Bash and Python scripting, Linux system administration, and network configuration.

    Unfortunately I'm finding the excess of weirdly named tools that the industry fixates on a little overwhelming, as well as the collection of paradigms that everybody appears to be obsessed with at the moment: Kubernetes, Docker, Jenkins, Terraform, Tailwind, Angular, Infrastructure as Code, etc. It feels like everybody is just ticking boxes and making stack acronyms (stackronyms?) instead of writing any actual solutions themselves. This is completely at odds with my approach to computers (as an example, I'm perfectly happy writing a basic WebGL renderer from scratch using vanilla JS in ~26kB of code).

    I'm hesitant to jump back into typical software engineering, which is closer to my industry experience, but I'm also struggling to convince anybody that the experience I do have is worth enough (the games industry doesn't really seem to be taken seriously by the rest of the tech sector). There also aren't a large number of games companies in Vienna, and the games industry as a whole is not particularly healthy right now.

    I guess this is less of a "what am I doing with technology right now" and more of a slightly disillusioned rant about my current situation. Maybe I'll go back to university and get a Master's?

    4 votes
    1. leftside
      Link Parent

      Much of infra engineering these days is picking the right stack of tools and making them connect together smoothly and reliably. If you find yourself writing a solution from scratch, there's a good chance an existing app or framework already does it better - use that instead. It's the only way to move fast and efficiently. Once you're working for a company you'll also have to balance the choice of build vs buy for many of these components, and that choice changes based on current company scale and many other factors.

      3 votes
    2. bj-rn
      Link Parent
      Maybe "creating digital experiences"* for events, museums, trade shows and retail is a viable alternative that's more ajdacent to gaming but also not really typical software engineering. It's...

      Maybe "creating digital experiences"* for events, museums, trade shows and retail is a viable alternative that's more ajdacent to gaming but also not really typical software engineering. It's pretty niche and I don't know how the salary compares though.
      In Vienna there are for examle Bildwerk, Mediaapparat and THIS.PLAY, in Linz there is Responsive Spaces.

      * something along those lines:
      https://vimeo.com/930568091
      https://vimeo.com/371511910

      3 votes
    3. vord
      Link Parent

      If there's one thing I've learned about reading Kubernetes documentation, it's that you'll eventually hit a certain point where reading more Kubernetes documentation just confuses you more.

      I'm convinced the primary objective of the design and implementation of Kubernetes was not to provide good infrastructure; it was to create vast swaths of new software markets that the original engineers of Kubernetes would be best positioned to leverage to build their personal fortunes.

      2 votes
    4. overbyte
      Link Parent

      Back in the day, setting up a new app meant racking server hardware, installing the OS, and installing the app. With the transition to virtual machines, the industry reduced the physicality of getting infrastructure up and running: provision VM, install OS, install app. Benefits are better hardware utilization and reduced costs by running multiple VMs on the same hardware.

      When you reach enough VMs, you'll need a coordinated way to manage them, and this is where config management tools like Ansible/Puppet/Chef come in. Instead of hand-tuning servers, you write what you want your infrastructure to be in code and get the tool to execute it, so now you get software development benefits like tests, reviews and reusability if you, say, want to provision 10 identical web servers.
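
      To give a flavor, a single ad-hoc Ansible command can converge a whole inventory group (a sketch; the "webservers" group name is made up):

      # install nginx on every host in the group, idempotently
      ansible webservers -b -m apt -a "name=nginx state=present"
      # re-running it is a no-op once the hosts already match the declared state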

      With the arrival and popularity of cloud services, where running static things 24x7 leads to big bills, there's another shift from virtualizing the hardware (the VM) up one level to the OS itself (containers). So you have a VM that runs an engine like Docker to run multiple containers on it. Benefits are similar in that you can cram more things into the same space as before, as well as isolation: if you have apps that need specific versions of Python/Java/PHP/whatever, instead of installing them on separate VMs (or on one VM and dealing with dependency/package-manager hell), you run them in separate containers on the same box, and the only prerequisite is Docker itself.
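
      A minimal illustration of that isolation (the image tags are just examples):

      # two apps, two Python versions, one box; the host only needs Docker
      docker run --rm python:3.9  python --version
      docker run --rm python:3.12 python --version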

      Cloud services are designed to be elastic: they scale up and down with use, you pay as you go, and you use only what you need. If you, say, don't need to run 5 VMs in the Asia-Pacific region after 5pm, you can set up automation to delete them and spin up new ones at 8am tomorrow. If your apps are in containers hosted on one VM and it gets wiped (or you need to take it down for patching), you have a problem. So now you need a way to ensure you have enough containers, and enough underlying VMs, running to keep your services up and available.
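
      That 5pm/8am automation can be as blunt as a pair of cron entries (a sketch assuming AWS; the instance IDs are placeholders, and stop/start stands in for delete/recreate to keep it short):

      # crontab: stop the fleet at 5pm, bring it back at 8am
      0 17 * * * aws ec2 stop-instances --region ap-southeast-1 --instance-ids i-0abc123 i-0def456
      0 8 * * * aws ec2 start-instances --region ap-southeast-1 --instance-ids i-0abc123 i-0def456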

      When you're at the scale where the containers that run your business apps are spread across an ever-changing, dynamic pool of VMs, you'll need a coordinated way to manage them. You can probably roll your own solution to all of that, but this is where an orchestrator like Kubernetes comes in.
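
      The mechanics are declarative: you state the desired number of replicas and the cluster converges on it and keeps it there. A quick sketch with kubectl (the names are arbitrary):

      # declare the desired state: three replicas of one image
      kubectl create deployment web --image=nginx --replicas=3
      # kill a pod and watch the cluster replace it to restore the declared state
      kubectl delete pod -l app=web --wait=false
      kubectl get pods -l app=web --watch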

      The jargon is part of the ecosystem, but it's also designed to solve very real problems with application compatibility and scaling on top of an ever-changing, dynamic pool of infrastructure. Cattle, not pets. The unfortunate case is that some companies have jumped on the tech trend without assessing their needs, because that stack comes with a base level of complexity meant to solve several particular hard problems that only show up when running at scale.

      2 votes
    5. [2]
      first-must-burn
      Link Parent

      I have some experience with the crazy stack, and it has a sort of "cool factor" that I like. But it is crazy.

      The biggest win IMO is containerizing different parts of the application. It gets you out of dependency hell, you can cut the environment down to the essentials, and you can have a repeatable environment for development, test and deployment. Deploying and composing your app with open source tools (like adding postgres) using helm charts is nice and repeatable.
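
      For example, pulling postgres into a cluster via a public chart is a couple of lines (a sketch; the value keys can differ between chart versions):

      helm repo add bitnami https://charts.bitnami.com/bitnami
      # same chart + same values = the same deployment, every time
      helm install my-db bitnami/postgresql --set auth.postgresPassword=changeme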

      The other stuff, like autoscaling, storage back ends, persistent volumes, permissions, metrics, and logs, is where the craziness is. However, I think a lot of applications are putting all this stuff in because they are thinking about scale way before they really need to. They get bogged down in big infrastructure (and big overheads) before even knowing if the app is any good.

      For small projects, I think k3d is a nice middle ground. You can run it on a single machine and easily get a cluster up so you can deploy your app.
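
      Getting a throwaway cluster with k3d is about this much work (the cluster name is arbitrary):

      k3d cluster create dev --agents 2   # multi-node cluster inside Docker
      kubectl get nodes                   # k3d wires up your kubeconfig for you
      k3d cluster delete dev              # teardown is just as cheap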

      My quotation database is a pretty good example. It's a Vue SPA for the frontend, a golang-based API server, a postgres db, and a backup worker. Everything is containerized and deployed with helm and terraform. This was my "do all the parts myself" project. My biggest hill climb was the frontend, because all my experience was on the server side.

      I run k3d for local dev and I have k3d configured on a single Hetzner instance for "production". Total cost is about €7/mo. A fully reliable k8s control plane requires multiple instances, and I could not find any hosted option cheaper than around $30/mo just for the control plane. That's not so much, but way too much for a hobby project.

      1 vote
      1. overbyte
        Link Parent

        Having worked with plenty of large-scale production clusters on GKE where we actually used the scale, once you get past all the ecosystem-specific jargon I'd also boil it down to essentially shiny new ways of doing old things.

        Coming from an Ansible/static VM-based traditional sysadmin background herding packaged enterprise apps with terrible vendor support websites, I (and my bank account) overwhelmingly prefer the crazy stack way in a lot of cases especially if the company develops in-house apps (SaaS company or the like).

        Like if I have a herd of apps that write to standard streams, I can route cluster-wide logging to whatever centralized log shipping system the company has that day instead of having to deal with log files strewn across hundreds of VMs or making sure I have filebeat up and running on the VM first. Or how Kubernetes DaemonSets take out a lot of the complexity of the traditional way of running Ansible plays (or a similar config management tool) to provision something in a new VM. I essentially just say "run X in every node labeled Y" and the cluster does it.
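
        For the curious, "run X in every node labeled Y" is roughly this much YAML (a sketch; the image and label are made up):

        kubectl apply -f - <<'EOF'
        apiVersion: apps/v1
        kind: DaemonSet
        metadata:
          name: log-shipper
        spec:
          selector:
            matchLabels:
              app: log-shipper
          template:
            metadata:
              labels:
                app: log-shipper
            spec:
              nodeSelector:
                role: web            # runs only on nodes labeled role=web
              containers:
              - name: shipper
                image: fluent/fluent-bit:2.2
        EOF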

        2 votes
  2. bitshift
    Link

    Casually working my way through Crafting Interpreters, though I'm substituting a language of my own design in place of Lox. It's a macro language for code generation. Think assembly language macros: including files in the middle of other files, deduplicating repetitive code, evaluating template strings with numerical expressions, etc.

    As part of designing that macro language, I've been studying languages that solve similar problems, such as actual assembly languages. My limited experience with assembler macros has been that simple things are easy, while everything else is difficult/impossible. Contrast with Tcl, which offers extreme flexibility — but don't goof up your syntax, because you'll have a heck of a time finding the issue. For my own language, I'm trying to strike a pleasant balance between these extremes.

    I'm pretty far from my ultimate goal (the side project for which I'm inventing this macro language), but I'm enjoying the journey, and that's 90% of the satisfaction I'm hoping for. But I do hope to reach the destination someday! I'll have to fight perfectionist urges to get there, though. The pragmatic Rust mantra of "Keep calm and call clone" is extremely relevant here, as is the general attitude of not overcomplicating things. For example, do I really need parser errors to have line numbers? Probably not. But stuff like that is tempting.

    4 votes
  3. TheWhetherMan
    (edited )
    Link

    Per @creesch's comment last week, I began developing an app for myself and some friends to use for organizing our Bonnaroo experience (an upcoming music festival). So far I've been able to identify and/or clear some of the main hurdles:

    1. Configuring the lineup screen for listing artists - finished
    2. Configuring the daily schedule - in progress. Right now it's just a matter of building a custom layout that looks and feels like the daily view of the Google Calendar app, but divided between the 5 stages for better organization
    3. Including an artist bio page without adding it to the main navigation bar at the bottom. I understand this to be possible using navigation stacking, but the implementation has me scratching my head currently.

    Overall I've made a lot more progress in a week than I thought possible, and I hope the remaining hurdles can be cleared soon, as the festival is in a month and some change.

    4 votes
  4. [5]
    streblo
    Link

    Can I ask for some help from any shell wizards?

    Right now, I backup my btrfs drives with btrbk to a RAID-1 volume that lives in my house. It's amazing, I love the flexibility the snapshots provide. To protect against theft and fire, I'd like to now schedule a periodic backup to the cloud as well, using the backblaze b2 cli. I'd like to keep everything as snapshots, so my plan is to use btrfs send to send the compressed snapshot to gpg, encrypt it, and pipe it to b2. Since the b2 cli now supports reading data from stdin, I can do something like this:

    btrfs send /btrfs_roots/data/snapshots/some-snapshot --compressed-data | gpg --batch --passphrase 'test password, please ignore' -c --output - | tee >(sha1sum) >(backblaze-b2 upload_file my-bucket - test-snapshot) > /dev/null
    

    That way I don't need any additional drive space to warehouse the data and I can get the sha1sum on the way out. The problem, however, is that backblaze doesn't compute sha1sums for files that are sufficiently large. These snapshots are ~500GB and will get larger, so I need to provide my own sha1sum, which I need to do when I first invoke the cli. Obviously, I can't do that unless I dedicate a large drive to being a staging area where I can dump the encrypted snapshot. Alternatively I could skip encrypting them, but I don't want to do that either.

    What I'd like to do is basically loop through the contents of the file and stream different chunks of the snapshot each time. Then I can compute the sha1sums on my end and make sure the sha1sums in b2 match. But I'm not sure exactly how to do that in bash. I think I can just invoke the above set of processes multiple times if I can filter the bytes in some way, but I'm not sure what tool will let me do that.

    3 votes
    1. [4]
      streblo
      (edited )
      Link Parent

      I think I can just invoke the above set of processes multiple times if I can filter the bytes in some way, but I'm not sure what tool will let me do that.

      OK, turns out you can use head and tail to specify bytes instead of lines. TIL.

      Anyways, that leaves me with this, which is kinda gross:

      #!/usr/bin/env bash
      
      size=$(btrfs filesystem du -s --raw /btrfs_roots/data/snapshots/some-snapshot | awk 'NR==2 { print $1 }')  # NR==2 skips the header row
      chunk_size=1073741824
      
      # NB: assumes the send stream is no larger than the size reported on disk, which isn't guaranteed
      for i in $(seq 1 $(( (size + chunk_size - 1) / chunk_size )) ); do
         # tail -c +K starts at byte K, so the final partial chunk can't overlap the previous one
         btrfs send /btrfs_roots/data/snapshots/some-snapshot --compressed-data | tail -c +$(( (i-1)*chunk_size + 1 )) | head -c $chunk_size | gpg --batch --passphrase 'test password, please ignore' -c --output - | tee >(sha1sum) >(backblaze-b2 upload_file my-bucket - test-snapshot.$i) > /dev/null
      done
      

      I haven't really thought too hard about this yet, but I think something along these lines can work. Perhaps I can split off most of this into a function that I can put in a subshell instead of spitting out the snapshot x times. Anyways, it doesn't need to be fast or pretty as long as it works. I'm not confident it works yet, but hopefully I can get it there.

      2 votes
      1. [3]
        vord
        (edited )
        Link Parent

        I wonder if you could use split to make chunks of the post-gpg encrypted stream instead. Not sure if it actually helps...

        Another thought after glancing at the backblaze docs is that after you've uploaded all your chunks, you can glom them all together into one gigantic file for downloading.

        Edit, a half-baked idea: tee into split, splitting into 100MB chunks, and use --filter 'sha1sum $FILE' to generate your local sha1sums for the chunks. Specify the chunk size for upload using --minPartSize 100MB and backblaze will (if I'm reading correctly) calculate and store the per-chunk sha1sum, so you can compare it against your locally generated one.
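
        Something like this, maybe (untested; split runs the filter under sh, where process substitution isn't a thing, so this stages one 100MB chunk in /tmp at a time instead of using tee; $FILE is the chunk name split would have written):

        btrfs send /btrfs_roots/data/snapshots/some-snapshot --compressed-data \
          | gpg --batch --passphrase 'test password, please ignore' -c --output - \
          | split -b 100M -d - test-snapshot. --filter='
              cat > /tmp/chunk &&
              printf "%s  %s\n" "$(sha1sum < /tmp/chunk | cut -d" " -f1)" "$FILE" >> local-sha1s.txt &&
              backblaze-b2 upload_file my-bucket /tmp/chunk "$FILE"'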

        There does seem to be a potential edge case where the last file uploaded in a large file must be at least 1 byte in size. Eh what are the odds of everything being 500MB+6 bits :)

        2 votes
        1. streblo
          Link Parent

          I thought split only wrote to a set of files, but checking the man page it looks like there is a --filter option that lets you pass each chunk through a filter command. That could probably work and would be a lot cleaner...

          I'll definitely look into that when I have some time, thanks!

          2 votes
        2. streblo
          Link Parent

          So I took a look at the docs you linked, and I think I am duplicating some existing work here. They specify that when using their api, a sha1 must be provided with each part. Despite not mentioning it in the b2 --upload-unbound-stream --help page, based on a quick glance at the source, the --upload-unbound-stream option (or upload_file when a stream is provided) seems to utilize this api under the hood, calculating the sha1 for each chunk as it gets uploaded.

          So I think I can actually just point the stream at b2 and be reasonably confident that what I'm storing is actually an encrypted btrfs snapshot.
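
          If that holds up, the whole thing might collapse back into a single pipe (untested sketch; exact subcommand spelling per your CLI version's upload-unbound-stream docs):

          btrfs send /btrfs_roots/data/snapshots/some-snapshot --compressed-data \
            | gpg --batch --passphrase 'test password, please ignore' -c --output - \
            | backblaze-b2 upload-unbound-stream my-bucket - test-snapshot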

          2 votes
  5. GOTO10
    Link

    Finishing moving all Dockerfiles to Nix* stuff. It's painful but the result is great.

    *) I'll see where the project mess ends up, I'm sure there will be something in the end.

    3 votes
  6. [3]
    vord
    Link

    So I've more or less completed the RFID Jukebox I started last week. It turns out I didn't need a separate metadata system or a fancy player...good old mpd to the rescue!

    I had already bought this USB-powered Aux-in speaker for something else. When the project bubbled to my mind, I cracked this puppy open to find that a Raspberry Pi 3B+ can easily fit inside...which I also happened to have lying around. The lights on one side of the speaker also readily detach, and I was able to bust open the RFID reader I bought and fit the board in its place.

    Needed to get a right-angle headphone extension and a right-angle microusb extension in order to fit everything in the case cleanly. I probably could have hacked up my existing cables, but for $5 I was able to use these and be able to repurpose stuff later if needed. Added bonus, the microusb extension cable was able to slip out where the old cables used to come out, allowing me to use any microusb cable/charger that had sufficient amperage to power it all (5v3a works well).

    Plug the speaker + RFID reader into the Pi and cram all the cables in; I only needed to make one small internal cut to let some more wires into the center cavity. Everything fits into the package, and bam, all the hardware is there...power-on/power-off is just plugging/unplugging it.

    For software, I installed regular raspbian and set it up as usual. Then I changed it to auto-login to the CLI, configured pulseaudio to run as a system daemon instead of per-user, installed mpd, cursed while troubleshooting (I can provide more detailed fixes if anybody desires), and then ginned up a quick python script using python3-mpd2 that runs on startup.

    It uses MPD's sticker functionality to assign the RFID card to the song, and then accepts input from the RFID reader (it just reads the code then hits enter) to play the scanned song. Child is young so more sophisticated functionality has been disabled/gated, but is now trivially simple to upgrade.

    The script in question
    #!/usr/bin/env python3
    # coding: utf-8
    from mpd import MPDClient
    from sys import argv
    
    def setSong(cli,rfid):
      # look up the song whose 'rfid' sticker matches the scanned card
      song = cli.sticker_find('song','','rfid','=',rfid)[0]['file']
      in_playlist = cli.playlistfind('file',song)
      if len(in_playlist) > 0:
        print('Playing existing song')
        cli.play(in_playlist[0]['pos'])
      else:
        # not queued yet: add it and play it
        song_id = cli.addid(song)
        cli.playid(song_id)
    
    def assign_tags():
      # batch mode: walk the library in order, scanning one card per song
      cli = MPDClient()
      cli.connect('localhost', 6600)
      library = cli.listall('/')
      for song in library:
        print(song)
        rfid = input("Assign RFID: ")
        cli.sticker_set('song',song['file'],'rfid',rfid)
      cli.disconnect()
    
    def main():
      cli = MPDClient()
      cli.connect('localhost', 6600)
      cli.repeat(0)
      cli.disconnect()
      while True:
        # the reader acts as a keyboard: it types the card ID and hits enter
        rfid_scan = input("Scan: ")
        cli.connect('localhost', 6600)
        # Just wipe the playlist
        cli.clear()
        setSong(cli,rfid_scan)
        cli.disconnect()
    
    # Script starts here
    if __name__ == '__main__':
      if len(argv) > 1:
        assign_tags()
      else:
        main()
    

    You'll notice if you launch the script with any argument, it'll put it in 'batch assign' mode. The cards came in a neat 10-long sleeve, so I just scan the cards as fast as possible and it'll assign them in alphabetical order.

    Because I wanted a tightly curated list of songs, and we use YouTube Music, this handy one-liner let me download a private playlist and keep track of what was downloaded so it doesn't re-download:

    yt-dlp -x $PLAYLISTURL --embed-thumbnail --convert-thumbnails jpg -o "%(artist)s - %(track)s [%(id)s].%(ext)s" --download-archive downloaded.txt

    This left me with a bunch of .opus files with (terrible) embedded album art metadata, perfect for this use case. The whole system is now functional, and as an added bonus, since it's both a pulseaudio and an mpd server, I now have dozens of potential future projects (casting to other linux devices, or vice versa, or running an internet radio station)!

    Very pleased with how it turned out physically, and since I didn't need to make any big changes to anything, I can easily repurpose it later if/when desired. I'm sure they'll want better speakers at some point...

    2 votes
  7. Akir
    Link

    After a bunch of canoodling I have finally got a virtual Windows 11 system running on my server to play old games that aren't well supported by Wine/Proton. I just tried remote SPICE for the first time and I was rather disappointed in the performance; it's actually more responsive if I RDP to the GUI I have set up on the server.

    My server is actually my old gaming PC, and it has the best graphics card I own in it right now. From what I understand I can have the card diverted so that the VM has full access to it. I'd be curious to see if anyone else has done this and if they would recommend it.
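
    From what I've gathered so far, the recipe looks roughly like this (untested on my end; assumes an Intel CPU and libvirt, and the PCI IDs are placeholders for whatever lspci reports for your card):

    # 1. enable the IOMMU: add intel_iommu=on iommu=pt to the kernel cmdline and reboot
    # 2. find the GPU's vendor:device IDs
    lspci -nn | grep -i vga
    # 3. reserve the card (GPU + its audio function) for vfio-pci so the host driver never claims it
    echo 'options vfio-pci ids=10de:1b81,10de:10f0' | sudo tee /etc/modprobe.d/vfio.conf
    sudo update-initramfs -u && sudo reboot
    # 4. hand the device to the VM (virt-manager: Add Hardware -> PCI Host Device)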

    2 votes
  8. Eji1700
    Link

    This week I spent several hours attempting to fix a Powerapps bug affecting our production environment that makes 0 fucking sense.

    Suddenly, out of nowhere, I can no longer "query" our database if I try to do any sort of date filtering. Functions that have been working for 4 years have started to fail, and no changes have been made to anything related.

    EXTRA stupid: if I copy any of the failing functions into a brand new app, it works!

    Even more extra stupid, the workaround has been to remove the date part of the filter while keeping the rest, wrap the entire function in a sort by descending (on date), then wrap that in a FirstN so I only get a few records, and THEN I can do the date filtering.

    I have got to find time to really learn frontend in F# land because I am utterly done with powerapps. This is just totally unacceptable behavior in a production environment. Especially when I opened a ticket with MS on the 29th and they've basically just sat on their hands wasting my time, only responding when I prod them.

    2 votes
  9. [9]
    Sage
    Link

    Does anyone have any suggestions for hosting a server? More specifically a Node.js rest API. I was using some of the free ones, but eventually their VC funding runs out and they turn into paid services. It doesn't have to be free, but if you have any to suggest, great! I guess I'm looking for a cheap option for an API that will probably be rarely hit.

    2 votes
    1. [2]
      supported
      Link Parent

      does it have a database or no?

      1 vote
      1. Sage
        Link Parent

        I have not built it just yet so not currently, no. It will though. Probably postgres.

    2. streblo
      Link Parent

      Personally I use digital ocean, but I think they're all similar. I don't have it running most of the time though.

      I just have a small wireguard server snapshot that's small enough I can store it there for free. When I go on the road I simply spin it up for ~$7 a month and I have access to my home network.

      1 vote
    3. [4]
      unkz
      Link Parent

      I would just use AWS. Depending on what you're doing, it can be pretty close to free, even after you leave the 12 month free tier -- $1-3/month for some of the extremely small services. But then you can easily grow to any scale you like with zero friction.

      1. [3]
        Sage
        Link Parent

        I have been kicking this around. I'm a self-taught dev and have yet to try AWS. I have heard AWS can be a bit annoying, but I'm always willing to learn. Do you have any suggested learning material?

        1. [2]
          unkz
          Link Parent

          I'm also self-taught! AWS is quite a rabbit hole, but spinning up a single EC2 instance is dead easy. I'd just follow their guide:

          https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
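
          Once you've clicked through it once, the CLI equivalent is only a couple of commands (a sketch; the AMI ID and key name are placeholders):

          # launch one small instance
          aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t3.micro --key-name my-key
          # grab its public IP once it's running
          aws ec2 describe-instances --query "Reservations[].Instances[].PublicIpAddress" --output text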

          1 vote
          1. Sage
            Link Parent

            Thank you! Guess I know what I'm doing this weekend then! Looking forward to mucking around with it. Much appreciated 😁

            1 vote
    4. Exellin
      Link Parent

      I've been using railway.app for over a year. They used to be free, but now the cheapest option is the hobby plan at $5 USD/month. They are pay as you go: I have a node server, a postgres database, and a redis database, my usage normally comes to $2/month, and I won't be charged more than $5/month until my usage goes past that.

      It can build a service based on a dockerfile in your repository and it was quite easy to set up. Let me know if you have any questions!

  10. hedy
    Link

    I've been hacking together my own customized static site generator for my blog! See context here

    The end goal is to replace Hugo, so that whenever I want to extend or add functionality, I can just adjust my script rather than spending hours poring over Hugo docs and forums hoping it supports my special use-cases.

    It's written in Go, using text/template for gemini & email outputs, and html/template for html, rss, and atom outputs. Attempting to make yet another general-purpose SSG is a non-goal. It's why I'm planning to move off Hugo in the first place.

    1 vote