16 votes

What programming/technical projects have you been working on?

This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?

27 comments

  1. [3]
    DistractionRectangle

    I've been nerd sniped. I wanted to play around with some local AI frontends this weekend, and took a look at Jan. Download the appimage and I'm hit with EGL_BAD_PARAMETER. I've encountered this before with appimages, so I decided to try and debug it. I always figured it was an error with my dual GPU config, that I had set up the default GPU/prime offloading incorrectly. This was reinforced by the fact that prime-run <appimage> works.

    After some toiling around I come to the conclusion that it's not me that's wrong, it's the appimages. Some light sleuthing later, and I find that every single appimage where I've encountered this error was built with Tauri. Okay, so that's actionable. Looking into that, it appears a version of the same error was corrected, but people still encounter it with Mesa + Wayland, and that it's a product of binary stripping. The rabbit hole goes deeper. Alright. So what handles binary stripping for Tauri? Linuxdeploy. Looking into Linuxdeploy, I didn't find anything directly related to fixing said error, but I did find that Tauri uses a version that's 3+ years old. More than likely whatever the root cause is, it's been fixed by now.

    Tauri CLI is well behaved in that it doesn't clobber cached dependencies: if Linuxdeploy already exists in the cache, it won't overwrite it. That means I can pin the Linuxdeploy version independently of Tauri CLI. Great!

    Now to test if it fixes the problem. Well, fuck. Jan doesn't have a reproducible build environment. At least they have a makefile to build the project, so it's just a matter of trying to build it and resolving whatever errors/dependencies are missing, right?

    No. You see, the facts were these. Apparently, Linuxdeploy enumerates the dependencies of bundled files and errors out when dependencies are missing. Reasonable. However, you occasionally don't want to bundle a runtime dependency because you'd rather it be supplied by the user's system. Linuxdeploy has a CLI option for this, but Tauri CLI doesn't expose it in their build process. To work around this, the devs of Jan have a magical incantation in their git release workflow, where they don't bundle their files with Tauri (no files == no dependencies), call Tauri CLI to create a skeleton and build a pseudo appimage, throw out that appimage, add files to the skeleton manually, and finally rebuild the appimage manually. This, of course, is not even remotely reflected in the makefile.

    So after cobbling together the build environment and writing my own custom build script, I could finally test whether the fix worked. Thankfully, it did. So I've put in a PR for that, and now I am banging out a PR to remove the magic from the GitHub build process and provide reproducible Linux builds with a pre-configured dev container. What that'll look like is writing a shim for Linuxdeploy to allow us to exclude dependencies via an environment variable, so we can both upgrade Linuxdeploy and add exclusions while still calling Tauri CLI like normal. At some point I'll work on getting this fixed in Tauri itself.
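
    The shim itself can be tiny. Here's a minimal sketch in TypeScript/Node (the LINUXDEPLOY_EXCLUDE_LIBS and LINUXDEPLOY_REAL variable names and the renamed-binary convention are my own inventions, and I'm assuming Linuxdeploy's exclusion flag is spelled --exclude-library):

    #!/usr/bin/env node
    // Hypothetical shim dropped into Tauri's cache in place of linuxdeploy.
    // Libraries listed in LINUXDEPLOY_EXCLUDE_LIBS (colon-separated) are
    // forwarded as --exclude-library flags; everything else passes through.
    import { spawnSync } from "node:child_process";

    const realBinary = process.env.LINUXDEPLOY_REAL ?? "linuxdeploy-real";
    const extraArgs = (process.env.LINUXDEPLOY_EXCLUDE_LIBS ?? "")
      .split(":")
      .filter(Boolean)
      .map((lib) => `--exclude-library=${lib}`);

    // Re-exec the real linuxdeploy with the caller's args plus our exclusions.
    const result = spawnSync(realBinary, [...process.argv.slice(2), ...extraArgs], {
      stdio: "inherit",
    });
    process.exit(result.status ?? 1);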

    13 votes
    1. [2]
      hobblyhoy

      I'm fascinated by you folks with the gumption to dive into problems like this. Years of working for startups have brought me a devil on the shoulder whispering 'ship it, fast'. I get reverse nerd sniped: the second problems get serious, my brain halts and starts evaluating sunk cost, alternatives, opportunity costs, etc. I'm kind of jealous; I want to get sucked into these problems, but it feels like fighting my nature now. Anyways... good job!

      5 votes
      1. DistractionRectangle

        You make it sound like a superhuman feat 😅. Okay, I made it sound that way when I wrote it up, because who shouldn't be the hero of their own story? The reality, though, is closer to a monkey with a keyboard. Really, I ignored this for a while until I had finally encountered this particular error one too many times and it crossed that threshold demarcating annoyances I tolerate as "facts of life." And if it had turned out to be more difficult to trace/fix, I would have just redrawn my "facts of life" boundary to include it again.

        Also, shoulders of giants and all that. Other people did the real work of debugging. I'm just over here with Google, free time, and a free GitHub account so I can code search the related code bases without having to download them. Like, I didn't even root cause the issue. I trusted that someone had accurately traced the issue to binary stripping based on their comment on a different project, saw that the dependency responsible was 3 years old, and figured that bumping it would A) probably solve the issue and B) probably not cause more problems. Nail, meet sledgehammer. The stars aligned and all my assumptions held.

        Start to finish, from error to dreaming up the proposed solution took maybe 10 minutes. Figuring out how to implement it took 5. Actually testing it was annoying, because they didn't provide a dev environment, and like a caveman I ran the make build, try to fix the error, make clean && make build loop until my head hurt. Felt a little silly once I realized that the GitHub release workflow templated the build environment and build commands, and all I had to do was copy what they did there. Caveman discovers fire.

        Admittedly, I'm actually having to do a little work to understand the build process in order to reconcile the local build process with the release workflow. Folding the extra post-build steps back into the makefile and doing so cleanly (my fix for the workflow was manipulating items in .cache/tauri, which is fine in a devcontainer/GitHub runner, but tainting the cache of a local dev machine is a big no) requires a minor refactor of the build pipeline, which in turn requires some modifications to the release workflow, so it's probably going to be broken into multiple issues/PRs.

        But really, this is just an excuse to learn about Tauri, since it promises to be a write-once, deploy-everywhere framework that's lighter than Electron.

        2 votes
  2. [4]
    Timwi

    I've gotten really into jigsaw puzzles lately, and although I've solved a number of physical ones, I play most of them on Tabletop Simulator. Unfortunately, Tabletop Simulator’s built-in jigsaw system has many flaws, shortcomings and limitations. So these past two weeks I've been writing my own:

    • JigGen — Tabletop Simulator jigsaw puzzle generator

    Initially it was just about supporting greater numbers of pieces, unique innie/outie shapes and arbitrary image aspect ratios, but it has now grown into a pretty complex system:

    • You can design your own piece cut in Inkscape.
    • You can draw straight lines and have JigGen turn them into innies/outies automatically (a toy sketch of the idea follows this list).
    • It can auto-generate a “normal” piece cut made of squares or, my newest addition, a piece cut modelled after Cairo pentagonal tiling. I've already solved a jigsaw with that and it was great fun. I'm using my implementation of Cairo tiling in RT.Coordinates (documentation) for this and intend to add maybe a couple more (imagine a piece cut generated from Penrose tiling... one can dream).
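
    To give a flavor of the straight-line-to-innie/outie step, here's a toy sketch in TypeScript (not JigGen's actual algorithm; the names and the plain semicircular tab are simplifications for illustration). It keeps the ends of the drawn edge and swaps the middle for an arc bulging to one side:

    // Toy edge-to-tab conversion: returns an SVG path for one piece edge.
    type Pt = { x: number; y: number };

    function tabbedEdge(a: Pt, b: Pt, outie: boolean): string {
      const lerp = (t: number): Pt => ({
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
      });
      const p1 = lerp(0.4); // where the tab starts
      const p2 = lerp(0.6); // where the tab ends
      const r = Math.hypot(p2.x - p1.x, p2.y - p1.y) / 2; // semicircular bump
      const sweep = outie ? 1 : 0; // which side of the edge it bulges toward
      // Straight run, arc, straight run.
      return `M ${a.x} ${a.y} L ${p1.x} ${p1.y} ` +
        `A ${r} ${r} 0 0 ${sweep} ${p2.x} ${p2.y} L ${b.x} ${b.y}`;
    }

    A real piece cut needs a neck so the tab actually interlocks, which is where hand-designed curves (e.g. from Inkscape) come in.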

    I'm currently playing a 700-piece jigsaw with a friend; its piece cut is derived from the CircularCell grid structure (which I generated using the library and then manually turned into a rectangle using Inkscape).

    It's been a wild ride. I haven't been so enthusiastic about a random just-for-fun programming project for some time. It's actually giving me nostalgia. And I get really cool, fun and challenging jigsaw puzzles out of it.

    7 votes
    1. [3]
      first-must-burn

      Have you considered the Einstein tile?

      One catch to it is that some of them have to be flipped over, so a truly evil puzzle would have the same image (or a similar image) on both sides.

      1. [2]
        Timwi

        I have considered the Spectre tile. I hadn't thought of the idea of having an image on both sides of the tile — mostly because I'm using a type of object in Tabletop Simulator that automatically turns itself right side up when picked up — but thanks for the nightmares; now I have something to truly fear I might do one day lol

        1 vote
        1. first-must-burn

          Thanks for the tip on the Spectre tile! I had not seen that.

          I have a multi-material 3D printer waiting to be assembled and used, so printing puzzle pieces in a couple of colors and two-sided is definitely in reach. If I manage to manifest something like that, I will let you know.

          2 votes
  3. [7]
    IsildursBane

    So I am waiting on a friend to finish 3D printing my case design for my audio player, so that project is currently on hold (I may do some simple coding on it, but nothing major is being worked on). Meanwhile, I have been working on my React learning project of creating a recipe website.

    On the backend of this website, things have been pretty smooth. I am using Express, Sequelize, and SQLite3. I chose these because I am familiar with all of them, and I did not want to get bogged down on the backend. I am looking into adding some automated testing; I will not go full TDD, but it is getting to the point that having some automated tests would be beneficial. I am considering Bruno for the automated tests, as that is what I have been using for manual testing. I am also considering doing a rewrite from JavaScript into TypeScript already, but I might push that to after the project is working, and instead prioritize having a functioning program.
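
    Bruno would keep the automated tests next to the collection I already use manually; the other common shape is in-process testing with something like supertest (a swap-in, not what I mentioned above). A minimal sketch of the latter, assuming the Express app is exported separately from its listen() call and that a /recipes route exists:

    import request from "supertest";
    import { app } from "../src/app"; // placeholder path to the exported Express app

    // Jest/Vitest-style globals. Smoke test: the recipes index should
    // answer 200 with a JSON array.
    test("GET /recipes returns a list", async () => {
      const res = await request(app).get("/recipes").expect(200);
      expect(Array.isArray(res.body)).toBe(true);
    });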

    On the frontend, I am starting to get the hang of React. I have CRUD for two models, and since the models had the same number of attributes, I was able to reuse the components between them, which is nice. I am now working on one model that has a relationship, so I have had to create new CRUD components, but they have been easy to build off of the previous ones, so it is going quickly. I may consider doing some automated UI testing as well, but I am unsure of that yet.

    One other area I am considering is restructuring the project on GitHub. Right now, the frontend and backend share one repository, but I am considering splitting them into two separate repositories. The reason for this change is to make the project more modular, so that I can do rewrites more easily (like the considered move to TypeScript once I have it functioning).

    Edit:
    So I have been working on finishing up the update for this model with an association, and I am having way more difficulty than I expected. For reference, I have been using Semantic UI React to help on the frontend, as it was used in a tutorial early on when I was figuring React out. I have come to the conclusion today that Semantic UI React has horrendous documentation. All I want it to do is make an API call to get the list of options for a dropdown, and then have one option be selected based on whichever entry I am working on. However, despite this seeming like a relatively normal use case, it is not covered in the official docs.
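
    For anyone hitting the same wall, the shape I'm aiming for is roughly the following (endpoint, model, and component names are placeholders; this is pieced together from the Dropdown props that do appear in the docs: selection, options, value, onChange):

    import { useEffect, useState } from "react";
    import { Dropdown, DropdownProps } from "semantic-ui-react";

    // Placeholder model; the real one is whatever my API returns.
    type Category = { id: number; name: string };
    type Option = { key: number; text: string; value: number };

    function CategoryDropdown(props: {
      selectedId: number | null;
      onSelect: (id: number) => void;
    }) {
      const [options, setOptions] = useState<Option[]>([]);

      // Fetch the option list once and map it into Semantic UI's
      // { key, text, value } shape.
      useEffect(() => {
        fetch("/api/categories") // placeholder endpoint
          .then((res) => res.json())
          .then((cats: Category[]) =>
            setOptions(cats.map((c) => ({ key: c.id, text: c.name, value: c.id })))
          );
      }, []);

      // `value` preselects the option for the entry being edited.
      return (
        <Dropdown
          selection
          options={options}
          value={props.selectedId ?? undefined}
          onChange={(_event, data: DropdownProps) => props.onSelect(data.value as number)}
        />
      );
    }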

    2 votes
    1. [6]
      tauon

      While I know next to nothing about the JS “ecosystem”, shouldn’t it be possible to rewrite file-by-file/first backend, then frontend to TypeScript, or just starting with JSDoc annotations?

      1. [4]
        zestier

        In my opinion, time spent writing JSDoc comments as pseudo-TS is kind of wasted time in most projects. There are some exceptions, like if you never plan to switch to TS but want consumer compatibility (ex. Svelte), but most of the time it means you'll just be doing that work twice. Unfortunately, "strong" TS configurations don't really like having mixed JS either. It's doable, but you have to loosen the rules, and no one ever remembers to strengthen them again later. Although these days you'd maybe get decent results just telling an AI agent to convert to TS.
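
        To make the duplication concrete, here's a trivial function in both flavors (made-up example, but representative):

        // Pseudo-TS via JSDoc, in a plain .js file:
        //   /**
        //    * @param {string} name
        //    * @param {number} qty
        //    * @returns {string}
        //    */
        //   function label(name, qty) { return `${qty}x ${name}`; }
        //
        // The same thing in actual TS. The annotations move into the signature,
        // which is exactly the part you'd end up rewriting in a conversion:
        function label(name: string, qty: number): string {
          return `${qty}x ${name}`;
        }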

        There's usually a far better reason to divide into a monorepo or multiple repos though: dependency management. If I'm doing a full stack project I always start with at least 3 packages (usually in a monorepo). The minimal split is frontend, backend, and deployment infrastructure. Such an organization forces each to define its own dependencies, which can turn out to be very helpful. Sometimes it just reduces the blast radius of bumping package versions, but there are also a lot of packages that are only well-suited to one environment (ex. works in browser but not in Node or vice versa). Keeping that stuff isolated makes it a whole lot less likely that you'll accidentally end up with React embedded in your backend.

        In enterprise I also sometimes end up with packages that I can't ship to users in any form for licensing reasons. So having the clear dividing line that makes it impossible for my build system to bundle that stuff into the front end, because the front end doesn't take a dependency on those packages at all, can be valuable.


        I also think it's unfortunate how much market share React has. React kind of sucks and is full of footguns, but it's what everyone learns. I'd be thoroughly surprised if you couldn't find at least one hook-related bug in the majority of React projects using hooks.

        2 votes
        1. [3]
          IsildursBane

          I also think it's unfortunate how much market share React has

          Yeah, in school I was taught Vue because the instructor did not like React. I understand that viewpoint, but I think he did us a disservice by not teaching us React, as that is more commonly used, especially when almost every other class in that program focused on what is used in industry rather than what is common in academic settings.

          1 vote
          1. [2]
            zestier

            If I was doing something for myself I wouldn't use it, but yeah, it's very useful professionally. If I was teaching someone with the intent to get them into the workforce, I'd feel compelled to teach it to them because I've read job listings this decade. It's so useful professionally that I've even gotten pings at work from random people on teams I didn't know, starting with something like, "so and so told me you're a React expert, can you help me with ___?"

            Web dev will be in a healthier spot when it dies, though it'll probably be another decade before that happens. I was going to joke that hating it is the nature of knowing too much about certain tools, but I actually kind of love TS most of the time, and I was deep in the trenches with TS too during most of my time dealing with React.

            1 vote
            1. IsildursBane

              Yeah, if I was creating this project for just personal use, I probably would not have gone with it. However, while this project is useful personally, it is also intended to help build out my portfolio when applying for jobs, so learning React through doing this seemed like the sensible choice to improve my portfolio.

              1 vote
      2. IsildursBane

        shouldn’t it be possible to rewrite file-by-file

        That is kind of my plan once I get that far, but at the moment I am staying with JS to continue learning React, then returning to do a rewrite in TS (maybe). Probably not as efficient as starting the switch to TS now while the file count is low, but I kind of don't want to be bogged down in TS at the moment.

        1 vote
  4. Weldawadyathink

    I guess I am going to have to double post on this thread. I got nerd sniped by an idea that has been in the back of my mind for a long time. It's actually a combination of two ideas.

    • Unless you are on Linux and can run a ZFS RAID-Z setup, there are almost no tools to prevent bit rot. If you aren't aware, the data that is stored on your computer isn't guaranteed to stay the same. It can change over time for a variety of reasons. Most computer storage mediums don't have a way to detect or correct these changes. Some things, like CDs and DVDs, have error correction built in. That is why a scratched DVD can often still be played. chkbit is a pretty nice tool to detect bit rot, but it can only tell you that it detected a bit flip; it can't repair it.
    • For a long time, I have wanted a version of git that is suitable for large files. I know about git-annex and git-lfs, and I may explore those more in the future. But since those are based on git, they carry more complexity than I really want for this use case. Diffs are nice for text files, but most large files aren't text. I really just want a program that lists which files have changed and allows me to roll back to the last "saved" copy of changed files. Bonus points if it can store previous versions of the files. Extra bonus points if it can handle branching files.

    I don't know who to attribute it to, but there is a fantastic quote about programming: "We do things not because they are easy, but because we thought they would be easy". Many years ago, I watched the fantastic 3Blue1Brown video about Hamming codes. For those unfamiliar, they are a pretty simple way to detect if data has changed. In its most basic form, a Hamming code can detect that a single bit has changed, and which bit changed, so you can fix it. Often it adds one additional bit so that it can also detect if two bits changed. With the extra bit, it can't fix two changed bits, but it can report an error.

    For the bit rot part of this idea, I thought it would be easy. Just calculate a Hamming parity, store that in a hidden directory, and check it when convenient. Simple! Why has nobody done this before? And the git-style version control should be pretty simple too. I don't need to handle diffs, merge requests, syncing, anything. I just need to put some file versions in a hidden directory and find a way to surface them to the user. It should be so easy to code this, I thought.

    Somehow this idea nerd sniped me today and I spent most of the day working on it. I started with a misguided attempt to get overlayfs-fuse and osx-fuse working on macOS. I may revisit this idea in the future, but it is far beyond my skill right now. I wanted an overlay filesystem so that large files only needed to exist once on disk. Overlays would have allowed me to map the working directory and all the hidden-folder commits onto the entire tree without having to copy anything the way git does. (Now that I am writing this, I realize it would be really difficult to handle deleted files with that setup. Maybe it's a good thing I went a different direction.) The breakthrough today came when I realized that APFS has copy-on-write. I have sworn Windows out of my life, and if I am on Linux, I can use ZFS or another COW filesystem. So I can just copy like git does. As long as I do it in a way that triggers COW, there is no extra storage space used at all!
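
    To illustrate the trick (my actual code is Swift, but Node happens to expose the clone-or-fail behavior very directly, so here it is in TypeScript; the destination path is a placeholder):

    import { copyFile, constants } from "node:fs/promises";

    // COPYFILE_FICLONE_FORCE demands a copy-on-write clone (APFS, btrfs, etc.)
    // and throws instead of silently falling back to a full byte-for-byte copy,
    // so you know no extra disk space was actually used.
    await copyFile(
      "video.mov",
      ".inversion/objects/video.mov", // placeholder hidden-store path
      constants.COPYFILE_FICLONE_FORCE
    );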

    For a bit, I went down the rabbit hole of more advanced error correction codes. However, I think for this project a Hamming code is actually quite sufficient. It doesn't protect against one bit flip per file; it protects against one bit flip per block, and you can set the block size to (almost) whatever you want. Smaller block sizes mean more "wasted" storage for the parity information; larger block sizes mean less overall resistance to flipped bits. The worst case is Hamming (4,8), where 4 bits store data and 4 bits store parity. You have to double your storage space for that implementation. I went with a more modest Hamming (120,128). For every 120 bits of data, I have to store 8 bits of parity data. For a 100GB dataset, that would be 6.6GB of parity data. Perfectly reasonable. And that can protect against 100 million (evenly spaced) bit flips in the data.
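
    For the curious, the whole SECDED ("single error correction, double error detection") check fits in a few lines. Here's a sketch in TypeScript rather than the Swift I'm actually using, with one bit per array element for readability (a real implementation packs bits). With a block length of 128, parity lives at position 0 and the seven powers of two, leaving 120 data bits, which is exactly the (120,128) layout above.

    // Syndrome: XOR of the positions of all set bits (the 3b1b trick).
    function syndrome(bits: Uint8Array): number {
      let s = 0;
      for (let i = 0; i < bits.length; i++) if (bits[i]) s ^= i;
      return s;
    }

    // Overall parity distinguishes one flip from two.
    function overallParity(bits: Uint8Array): number {
      return bits.reduce((p, b) => p ^ b, 0);
    }

    function setParityBits(bits: Uint8Array): void {
      bits[0] = 0;
      for (let p = 1; p < bits.length; p <<= 1) bits[p] = 0;
      // Setting the parity bit at power-of-two position p XORs p into the
      // syndrome, so writing out s's own bits drives the syndrome to zero.
      const s = syndrome(bits);
      for (let p = 1; p < bits.length; p <<= 1) bits[p] = (s & p) !== 0 ? 1 : 0;
      bits[0] = overallParity(bits); // extended bit for double-error detection
    }

    // Returns VALID, E:<position> (single flip, correctable), or ERROR
    // (double flip, detected but not correctable), like the output below.
    function check(bits: Uint8Array): string {
      const s = syndrome(bits);
      const even = overallParity(bits) === 0;
      if (s === 0 && even) return "VALID";
      if (!even) return `E:${s}`;
      return "ERROR";
    }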

    I absolutely love TypeScript, specifically for its powerful typing system, but it seemed like a poor choice for this project. Even if it worked perfectly, the end product may have to calculate a bitwise math expression over millions of bits, even for a relatively modest dataset. I don't have particularly strong feelings about type safety, never having used Rust or similar languages, but it did seem like a reasonable idea. I don't mind C, so that was an option. I absolutely despise C++ because of previous bad experiences with really bad computer science teachers, so that was not an option. Anyway, I ended up going with Swift. I haven't heard anything but good things about the language, and I want to get into iOS development, so it seemed like a good idea to learn. So far, I think I like it, but not as much as TypeScript. I have already had to spend a good amount of time debugging things that the TypeScript language server would have caught before my code even ran. And while the crash logs are better than in some languages I have worked with, they are worse than TypeScript's. I will admit that most of these issues are probably just language unfamiliarity, but I still wish I could have TypeScript's type system in every programming language.

    Now back to the real issue: I had to actually implement Hamming codes. This was way more difficult than I had realized. There wasn't any single difficult part of the algorithm, but making everything work together was. Having to learn a new language's syntax on top of it didn't help. Also, the algorithm is designed for the parity information to be interleaved with the actual data. I want the files to exist on disk unmodified, so the parity needs to be separate. Currently the code interleaves the parity into the data for computing it, and separates it out for storage. Behold:

    Initial:
    Block 0: 01110010 | VALID | AAAAAAAAAAAAAAA
    Block 1: 00101100 | VALID | BCDEFGHIJKLMNOP
    Block 2: 01110001 | VALID | QRSTIVWXYZ
    Initial (reloaded):
    Block 0: 01110010 | VALID | AAAAAAAAAAAAAAA
    Block 1: 00101100 | VALID | BCDEFGHIJKLMNOP
    Block 2: 01110001 | VALID | QRSTIVWXYZ
    Bitrot:
    Block 0: 01110010 | E:037 | AAACAAAAAAAAAAA
    Block 1: 00101100 | VALID | BCDEFGHIJKLMNOP
    Block 2: 01110001 | ERROR | QRSTIVWXYY
    

    It works! For this proof of concept, it has to calculate the Hamming parity for a text file, load that into a sqlite database, read it back, check it against the original file, and check it against a modified file that simulates bitrot. As you can see, there is a single bit flip in block 0, and a double bit flip in block 2. It has detected both, and printed the position of the incorrectly flipped bit in block 0.

    Long term, I think I want to make this a full-blown version control system. I do want to try different error correcting codes, but I don't yet understand them mathematically. I came up with the name "inversion". It's a play on subversion: whereas subversion and other VCSs are designed primarily for small files and only check for data corruption, mine is designed for large files and will fix data corruption. Therefore, an inversion. I think I will have it store everything in a .inversion hidden folder, with an inversion.db sqlite database. I want everything to exist as plain files on disk, so that if the software is ever unmaintained in the future, the user doesn't have to deal with extracting proprietary data formats. I think I will just stash it all in the .inversion directory, with the filenames as SHA hashes or something.

    I always like starting a new project that I am properly excited about, and so today was a fun day!

    2 votes
  5. Parou

    fml, I was working on this big project not knowing we've had JavaScript modules in browsers for a while now. I was out of the loop for too long.

    Was busy today, separating so much stuff into different modules.

    It didn't help that these past few weeks, every time I searched for something like that in browser-based JavaScript, all I got were old Stack Overflow questions and some pages purely talking about Node.js.

    1 vote
  6. lynxy

    Still on-and-off attempting to get the Dell XPS 13 9345, an X1E device, booting under Arch Linux:

    I'm starting with the ArchLinuxARM generic linux-aarch64 image, which has an ancient kernel because it was last updated about two years ago, then building the kernel from the repository myself with a few patches (mostly audio fixes; a lot of the work has been mainlined at this point, it seems). I've isolated the modules required to first get the built-in input working, and then fix the device black-screening on boot. I'm just utterly struggling to get the device to fully boot: it can never find root, whether it's identified using root=LABEL, root=UUID, root=/dev/sdx, or root=/dev/disk/by-uuid/uuid. When I get into an emergency shell and attempt to list block devices, only the internal NVMe shows.

    Now, I should point out that I am booting this instance off of a USB stick plugged into a USB-C (USB4) port. There are some mutterings about a certain module, QCOM_Q6V5_PAS, causing issues with USB-C on boot, but I've disabled that with no effect. I've also disabled USB Thunderbolt pass-through in the UEFI, as recommended. Secure Boot was never enabled. I can't plug the USB stick into a non-USB-C port because this device only has USB-C. I'm truly at a loss, and I might actually be reaching the limits of my stubbornness. I'm not a kernel dev. I don't feel smart enough for this.

    1 vote
  7. [4]
    xk3

    My brother-in-law is an indie filmmaker and he suggested I make a version of Disk Prices on eBay for digital cinema camera storage (i.e. CFexpress, XQD, etc.). So I'm starting work on that...

    It's pretty straightforward, but I keep getting distracted by other things, like deciding how to use the Midea Air recall refund: whether to upgrade my furnace with central air conditioning, install a mini-split, or just get 1 or 2 replacement window air conditioners... The main problem I want to solve is that the main room of my house does not have any windows suitable for a window air conditioner, and the two rooms with the air conditioners do not mix their cool air into the rest of the house fast enough to provide meaningful coverage, even with a box fan pushing the cool air out into the main room.

    Aside: Thermogeographically speaking, it probably makes more sense to push the hot air into the "return vent" rather than trying to move cold air but the "return vent" of a window air conditioner is a small target. A bit difficult to move air with finesse in this case. I'm not an air bender.

    I currently have a Midea U 10K and an 8K. It seems like a lot of the outlets in my house share the same 20 A breaker. During the heat wave there were times when it tripped with both units on; usually that doesn't happen and they can both run. A single-zone mini-split might reduce the burden on the 2 window units enough to prevent this, as I have a "free" 220 V breaker in the box connected to baseboard heating which I don't use.

    I got a quote for ~$5,500 USD to buy and install a mini-split. It would be a lot cheaper to do it myself, but I'm not 100% confident that all the wires exist where I would need them to be. They also quoted me ~$9,500 for a 1.5-ton Carrier furnace/air conditioner system plus installation (I don't have space for a separate central AC system, but my furnace is pretty old anyway), so I might just do that instead... or maybe install it myself.

    I'm also thinking of removing the through-the-wall air conditioner that doesn't work and has been sitting in the wall of the main room, and turning it back into a normal wall. I'm also thinking about moving to St. Louis instead... but to sell the house I should probably do one or more of the above first.

    1 vote
    1. [3]
      first-must-burn

      I think the costs to run the minisplit will be significantly lower than the window units.

      1 vote
      1. [2]
        xk3

        Yeah, I'm leaning more into the mini-split idea. I contacted a few electricians today to see how much it would be for them to add a disconnect box. I could probably do it myself but I'd rather have someone insured be the one drilling holes in the wall.

        From what I'm reading online, it seems like doing an energy audit is also a worthwhile endeavor. If my house were better insulated I think I'd be fine with 12k BTU, but right now I'm leaning slightly more toward 18k BTU. 12k is probably the right size (a bit oversized for the 380 sqft room but slightly undersized for the whole 675 sqft house), but I'm a bit worried about extreme weather events and it not being powerful enough to keep up with all the air bleeding. Though I suppose at that point I could patch up the house or get 1 or 2 window air conditioners again, so I should probably just get a 12k BTU mini-split.

        1. first-must-burn

          Climate change is real, so if it were me, I would oversize it. I need to get an electrician out to put a switchover for our panel so we can connect the whole house to a generator. Good luck and stay cool!

          1 vote
  8. elight

    I'm developing an iOS app to provide support to neurodiverse people. Just went to a friends & family alpha. A week or a few and I should have an invite-only alpha.

    Frankly, I'm writing it for me. Hoping it can help other people.

    1 vote
  9. [3]
    Whitewatermoose

    I am working on a podcast with an AI assistant. It is just in the beginning phase, but it is coming along so far.

    1. [2]
      hobblyhoy

      Are you saying you're going to have an AI as like your live co-host on the podcast? That would be neat.

      1. Whitewatermoose

        Yes, trying to. It is a high-level concept that makes it sort of snarky and funny, while talking about recurring topics such as AI, robot automation, streaming, the push for "de-tech" or digital minimalism, Nintendo, Sony, Xbox, etc.

        2 votes
  10. Weldawadyathink

    I just pushed my rewrite of audiobookcovers.com to production! The actual usage of the site is incredibly low, so if there is an issue I likely won't see it for a while, but I haven't seen any issues yet. I added some basic analytics (rolled my own, just async writes from the server to the database) for stuff like what people are searching for, which specific images are popular, etc. But all that data is stored as jsonb in postgres, and I haven't put together any SQL queries or dashboards, so I don't know how much usage I have yet. There are 2000 searches, 7000 image page views, and 700 front page hits, but I have no idea how much of that was my own testing. Analytics was one of the last things I implemented, so there shouldn't be too many of my own hits, but I am sure there are some. I am planning on making myself a full dashboard page in the admin panel, but I haven't had the time yet.
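
    The queries for that dashboard should at least be short. Something like this sketch via node-postgres (the table and jsonb key names here are placeholders, not my actual schema):

    import { Pool } from "pg"; // node-postgres; connection settings come from env vars

    const pool = new Pool();

    // Top search terms, pulled out of the jsonb payload column.
    const { rows } = await pool.query(`
      SELECT payload->>'query' AS term, count(*) AS hits
      FROM analytics
      WHERE payload->>'type' = 'search'
      GROUP BY term
      ORDER BY hits DESC
      LIMIT 20
    `);
    console.table(rows);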

  11. uh-huh

    I’m working on a TUI time tracking app. I want to make a separate backend with a REST API so I can also have a little website. It’ll basically be my own stripped-down version of Timewarrior.