13 votes

What programming/technical projects have you been working on?

This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?

15 comments

  1. [3]
    zoroa

    Foot Pedal

    (This is more a basic electronics project than a software development one)

    Problem

    I'd been thinking about getting a foot pedal to use with my PC for a couple months, but I wasn't particularly enthused by the selection I saw for purchase:

    • It looked like most of my options used proprietary configuration software - I don't have much faith in the continued availability of the configuration software, especially from the lesser known companies that produce the cheaper pedals.
    • Wasn't a huge fan of the build quality I was seeing, which made me concerned about repairability.

    Requirements

    • Configurable with open-source software - I generally trust that open source software will be kept available and functional in the long term.
    • Repairable - I expect that the foot pedal itself would be the primary point of failure, so I want to be able to swap it out.

    Plan

    My idea was to approach this like building a mechanical keyboard:

    • There's a ton of resources available for building or buying keyboards in a ton of different form factors.
    • Via is a popular open source keyboard configuration software that I already use for my current keyboard.
    • Repairability is part of the culture of the hobby (replacing switches), so there might be a way I can tap into this to make it easy to swap out the pedal.

    This approach reduces the hard part of the project to "How do I connect a pedal to a mechanical keyboard?". My insight was that pedals for digital pianos and mechanical keyboard switches work exactly the same way: each is a basic electrical switch. So I just needed to get a keyboard switch, remove the stem and the spring, and then solder a connection from the metal leaf inside the switch to a female TS jack. This would give me a "Keyboard Switch to Female TS Jack" connector that lets me connect any digital piano pedal to any mechanical keyboard.

    I expected the solder joint to be a weak point, so I hoped that filling the keyboard switch with epoxy afterwards would reinforce the solder joint.

    Materials

    • 1/4 inch Female TS to Banana Plug Speaker Cable
      • The banana plugs are irrelevant, I cut them off to expose the metal wire. I got this to make my life easier by having 2 discrete wires I could work with.
    • Compact Sustain Pedal with Polarity Switch
    • Epoxy (Never ended up using)
    • One Mechanical Keyboard Switch
      • Any switch should work, so long as you have access to the metal leaf when you open the switch, and the switch is compatible with the keyboard you want to put the switch on.
    • 9 Key Macropad, Via Compatible, Hotswappable Switches
      • This would work with a single key macropad, but the 9 key I found was very close in price to the single-key keyboards I saw with similar features.

    Challenges

    • Wires were WAY too big - I didn't put much thought into the wires of the female TS jack I bought, and ended up with low-gauge (i.e. thick) wires.
      • I had to remove some of the plastic inside the keyboard switch to make space.
      • The TS jack's wires barely fit through the opening for the keyboard switch's stem.
        • I had planned to pour epoxy through that opening to reinforce the soldering I did, but the thickness of the wires left little space to make that happen. I abandoned the idea since it seemed like more trouble than it was worth at that point.
      • The wires were stranded, so I had to be careful that a rogue strand didn't cause a short circuit.
      • The wires were so thick that it was difficult to maneuver them in place for soldering.
    • I had poor soldering equipment - My soldering iron is super old: the tip needs to be replaced, there's no temperature control, etc...
      • I was constantly having issues just getting the solder to melt.

    Improvements for next time

    • Use thinner wires
    • Get better tools
      • A better soldering iron (or at least better maintained) would have made a world of difference.
      • I might've been able to completely avoid soldering if I used something like solder seal connectors or even just heat shrink.
    • Aesthetics of the "Keyboard Switch to Female TS Jack" connector
      • Getting rid of the stem of the keyboard switch means that you'll have a switch on a keyboard without a keycap
        • It might be worth using a keyboard switch with opaque housing. Both to color match with the rest of the keyboard, and hide the soldering.
        • Filling the switch with an opaque/colored epoxy would also help hide the soldering
      • It might be worth leveraging some techniques used to build DIY keyboard cables to make the wire connecting the switch to the Female TS Jack look more premium.

    Miscellaneous Notes

    • You probably want to get a piano pedal with a polarity switch. There isn't any standardization on whether digital pianos expect a pedal that is "normally open" (actuating the switch lets electricity flow) or "normally closed" (actuating the switch prevents electricity flow). Polarity switches let you swap between both modes, so you can make the switch behave as you'd expect (see the sketch after this list).
    • The "Keyboard Switch to Female TS Jack" connector feels like an idea that someone could refine and commercialize.
      • This may be a use for "defective" keyboard switch housings, since you don't need a spring or a stem.
      • It's a straightforward sell to mechanical keyboard users, since they can just add it onto their current keyboard in lieu of a key they seldom use.
      • Could be sold alongside a more complete kit that includes a keyboard PCB, the "Keyboard Switch to Female TS Jack" connector, a housing for the PCB, and a piano pedal.
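
    A tiny illustration of what that polarity switch is doing, logically (TypeScript purely for illustration - nothing here actually runs on the pedal):

    function isPressed(circuitClosed: boolean, normallyClosed: boolean): boolean {
      // A normally-open pedal closes the circuit when pressed; a
      // normally-closed pedal opens it. The polarity switch just flips
      // which reading counts as "pressed".
      return normallyClosed ? !circuitClosed : circuitClosed;
    }
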
    11 votes
    1. em-dash

      This sounds cool. As an enthusiast of cursed, hacked-together input devices, I approve.

      Heat shrink, by the way, is not really intended to hold things together. It's to cover joints that are already soldered or crimped.

      If you're not already, I'd also recommend trying a leaded solder (63/37 is nice to work with). Lead-free solder is harder to melt.

      3 votes
    2. Weldawadyathink

      Congrats! For the soldering iron, I can highly recommend the Pinecil. It gives you so many features for the price. The only drawback is it took me a long time to get mine, but they may have fixed their supply chain since then.

      3 votes
  2. [5]
    first-must-burn

    I've been working on a set of related websites, each built with Astro for static site generation and deployed with Cloudflare Pages. There was a rush to set each of them up, so right now I have three separate git repositories with slightly different versions of the Astro template customization / basic components, because I've been refining them as I go.

    I want to set up a monorepo with a separate package for the components and a package for each website deployment. That way I can have a common template and keep the look and feel in sync between them.
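
    For reference, a minimal Yarn-workspaces layout along those lines might look like this (a sketch, not the exact repo - the package name matches the example repo in the replies below, but the exact fields are an assumption):

    // package.json (repo root)
    {
      "private": true,
      "workspaces": ["sites/*"]
    }

    // sites/multi-astro-common/package.json
    {
      "name": "multi-astro-common",
      "exports": { ".": "./index.ts" }
    }

    Each site package then lists multi-astro-common as a dependency (workspace:* under Yarn berry) and imports components from it by name.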

    I was able to make this example work, but no matter what I do, I can't get the intellisense in vscode to resolve the imports from the other package. Not the end of the world, but pretty annoying.

    When I started porting my own code into the monorepo structure, I broke something else, so I get some kind of error about not having a renderer for the type.

    I'm pretty sure this is the "right" way to do it, but it has been super frustrating to try to set up. I'm on the verge of just putting the common components into one of the sites and symlinking the directories into the other folders. I'd have to test that that works in the Cloudflare deploy, and it feels barbaric, but I'm almost at my wit's end.

    If anyone has any insights or suggestions to share, I'm open to suggestions. I'm fairly new to Typescript and JavaScript, so maybe I am missing something.

    4 votes
    1. [3]
      smores

      I’ve done this so many times, and it’s always a little painful. Is this a public repo? Or, would you feel comfortable adding me to the repo if it’s private? Happy to take a look, I’m sure we can figure it out!

      4 votes
      1. [2]
        first-must-burn

        Thanks, I will put something up tonight or tomorrow and DM you.

        1 vote
    2. first-must-burn

      Thanks to @smores' generous offer, I was transferring my (anonymized) code to a public repo and found the issue along the way. Code is here for anyone who is interested.

      Because I hate it when people don't post their solutions, here's what happened:

      Problem 1: VS Code Intellisense couldn't see the common package because Yarn uses symlinks inside node_modules. The build environment is dockerized, but I was running VS Code outside of WSL, in my Windows environment. That's the lazy default, and usually VS Code is pretty good at handling it. When I connected the VS Code environment to WSL, it resolved the common package and I got a much more useful error message.

      Problem 2: Even if the Intellisense was not working, Astro should have been able to run/build the site, but I was getting this error from Astro when doing yarn run dev:

      Unable to render Card because it is undefined!
      Did you forget to import the component or is it possible there is a typo?
      

      With Intellisense working, I got a much more useful error inside the index.astro page: Module '"/path/to/multi-astro/sites/multi-astro-common/index"' has no default export. Did you mean to use 'import { Card } from "/path/to/multi-astro/sites/multi-astro-common/index"' instead?

      So of course, changing the line from

      import Card from 'multi-astro-common';
      

      to

      import { Card } from 'multi-astro-common';
      

      fixed the problem. It seems like a stupid thing to spend so much time on, but one of my takeaways is that being in a fully Linux dev environment is generally a better way to start. The DevContainer VS Code tools make this pretty easy even on a Windows machine.
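
      For anyone hitting the same thing: the root cause is just default vs. named exports. Judging by the error message above, the common package's index does something like this (approximate):

      // sites/multi-astro-common/index.ts
      // a named re-export - so consumers need `import { Card } ...`,
      // not `import Card ...`
      export { default as Card } from './Card.astro';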

      @smores, thank you again. Here is your honorary Duck Badge:

        __
      <(o )___
       ( ._> /
        `---' 
      
      3 votes
  3. xk3

    I've been reading a lot about LTO archival. So far I've learned that:

    • tape is kind of a pain. Even if you think that you are storing them correctly, if you don't test your backups regularly, you might find on the day that you need it that the data is unrestorable.
    • random seeks and specific-file retrieval likely don't scale well, because they add wear to the expensive tape drives. Although there is likely an "island of stability" here: if you have a few dozen tape libraries and mirror >1 GB chunks in some S3-like system, it might work pretty well... assuming you have someone on staff who can repair drives as they fail from extended use.
    • one other thing to think about: although some LTO tapes have lasted for more than 30 years, this is likely abnormal for most home environments if you live somewhere where humidity levels are not constant. Another thing to consider is that, similar to VHS tapes, there are a lot fewer VHS players nowadays than 30 years ago... it's still pretty easy to find working LTO-2 drives, but LTO-1 drives are already pretty rare. Yes, LTO-2 and LTO-3 drives can read LTO-1 tapes, but it is still something to think about...
    • one reason why I didn't really seriously consider LTO-4 ... LTO-7 is that, at a scale where this makes financial sense, the quantity of tapes needed takes up a lot of space. 384 TB of LTO-7 is already 64 tapes, and that takes up quite a bit of shelf space (quick math below). LTO-7 is also very similar in price to LTO-8.
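
    (Quick math on that: LTO-7 holds 6 TB native per tape, and 384 TB ÷ 6 TB/tape = 64 tapes.)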

    I only have a couple hundred TBs and I'm pretty content now--I'm not looking to extend into the >600 TB range right now so my conclusion is that a tape drive probably doesn't make sense for me right now. But if someone wants to pay me to sign up to USENET and just download stuff then I would invest in a tape library first.

    Other than that I've been researching parallel/cluster compute platforms like HTCondor, Nomad, Triton DataCenter, and other tools listed here: https://github.com/dstdev/awesome-hpc. This week I've started writing something lightweight for spinning up systemd services across PCs (a bit like telefork, Outrun, or Exodus) taking into account (simple polling) resource allocations like %iowait, cpu_idle, available memory and excess network capacity.
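
    To illustrate the polling/placement idea (a hypothetical TypeScript sketch - the names and the scoring are mine, not the actual tool's):

    interface HostStats {
      host: string;
      cpuIdle: number;    // fraction 0..1, from polling
      iowait: number;     // fraction 0..1
      freeMemMB: number;
    }

    // Pick the host with the most CPU headroom that can fit the service.
    function pickHost(stats: HostStats[], needMemMB: number): string | undefined {
      return stats
        .filter((s) => s.freeMemMB >= needMemMB)
        .sort((a, b) => (b.cpuIdle - b.iowait) - (a.cpuIdle - a.iowait))
        .at(0)?.host;
    }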

    4 votes
  4. [2]
    Raspcoffee

    After a few weeks at my new workplace I'm beginning to understand the code I work with more and more. The longer it goes on, the more I appreciate the importance of good code design. Coming from physics... well, most code written there is, uhrm, suboptimal. My own code back then included.

    I've also been appreciating C# a lot. It's pretty powerful and easy to read.

    All in all, I've written very little but have been understanding the code more. It's a strange feeling: sometimes the less you have to show, the more you've actually achieved.

    4 votes
    1. countchocula

      Yeah, I used to do electrical engineering, and it was a shock getting into that from a comp sci degree, and then another shock 5 years later when I got back into a more traditional comp sci role. C# is fun and a widely transferable skill. Congrats on the new job.

      2 votes
  5. skybrian

    This week on repeatTest, I finished up the work I was doing on tables. A table is an array type that’s restricted to storing objects (also called rows), and can restrict some of the row properties to be unique keys. Up until now, rows could only have a single shape, but I added support for having multiple shapes by making the row a tagged union. The property for a unique key needs to be defined the same way in each shape.

    You can see them in use in this schema, where each node has a name property, and the names need to be unique.
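
    As a rough illustration of that shape in plain TypeScript types (not repeatTest's actual API):

    // Two row shapes as a tagged union; the unique-key property ("name")
    // is declared the same way in each shape.
    type NodeRow =
      | { kind: 'leaf'; name: string; value: number }
      | { kind: 'branch'; name: string; children: string[] };

    // A table is then NodeRow[] plus the invariant that every `name` is unique.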

    Still need to do a release.

    3 votes
  6. lintful

    I found myself needing customizable output from tests in JS (TS), and the test framework I normally use doesn't support it. I looked at the Node builtin test runner and some other popular ones, and wasn't happy with the APIs and capabilities, so I started writing a new test framework for myself, because it sounded like a fun diversion with some concrete benefits to my workflow, and implementing one isn't that difficult.

    It's not open source yet, but it will be soon; I have a few major blockers remaining. I'm naming it Zest, I think.

    I had the first iteration working and fairly well-tested using itself, but now I'm changing the API to support nested groups - this meant rewriting a lot of the internals. I'm at 1600 LOC of tests, trying to be as thorough as I can because bugs in a test framework seem extra bad.

    The first iteration of the API was mostly inspired by uvu:

    import {test, suite} from 'scope/zest';
    
    // Tests are defined like this:
    test('test name', () => {});
    
    // Tests can be grouped at the top-level:
    const test_in_suite = suite('suite name');
    
    test_in_suite('test name', () => {});
    

    Creates:

    file some.test.ts
      ✓ test name
      suite name
        ✓ test name
    

    And I'm changing it to also support nested groups:

    import {test} from 'scope/zest';
    
    test('test name', () => {});
    
    test.group('group name', () => {
    	test('test name', () => {});
    
    	test.group('nested group name', () => {
    		test('test name', () => {});
    	});
    
    	test('test name 2', () => {});
    
    	// can create any tests/groups (including using `await`)
    	// except inside `test('name', cb)` callbacks,
    	// and duplicate names for tests/groups
    	// are disallowed in the same group scope
    });
    
    const group = test.group('also works');
    

    Creates:

    group some.test.ts
      ✓ test name
      group name
        ✓ test name
        nested group name
          ✓ test name
        ✓ test name 2
      also works
    

    Basically like Deno's BDD, maybe, with some specific choices and inspiration from uvu. The visual weight of test( and test.group( feels good to me.

    Each test module is implicitly in a group with the name of the relative path to its file. This gives us a really simple hierarchical data structure of tests and nestable groups, where tests must be leaf nodes.
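
    In other words, something shaped roughly like this (illustrative types, not Zest's actual internals):

    type PlanNode = TestNode | GroupNode;

    interface TestNode {
      kind: 'test';
      name: string;
    }

    interface GroupNode {
      kind: 'group';
      name: string;           // for the implicit root group, the file's relative path
      children: PlanNode[];   // tests are always leaves; groups nest
    }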

    I think the main idea that I haven't seen in other frameworks (I didn't look very hard, I could be way wrong and this is common, thoughts/references welcome) is that I'm separating the execution of test files into two distinct phases:

    1. First is the planning phase where discovery/initialization/import/registration happens - the TS test modules are imported, and the tests and groups all get registered. By the time a test module finishes importing, its tests and groups are immutably defined in a plan object in the registry. This means test.group() calls are executed synchronously during module import and once the module exits, you cannot add new groups or tests through the normal APIs.
    2. Second is running the tests from a plan. Any subset of the tests or groups may be run, maybe multiple times like in a watcher. Results are statefully assembled in the runner as it runs tests, and you can have multiple runners for the same registry. (See the sketch after this list.)
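
    A compressed sketch of those two phases (again illustrative, not the real internals):

    type TestFn = () => void | Promise<void>;

    class Registry {
      private frozen = false;
      readonly plan: Array<{ name: string; fn: TestFn }> = [];

      register(name: string, fn: TestFn) {
        if (this.frozen) throw new Error('tests must be registered during import');
        this.plan.push({ name, fn });
      }

      freeze() {
        this.frozen = true;
      }
    }

    // Phase 1: import the test modules (each register()s its tests), then freeze().
    // Phase 2: a runner executes any subset of registry.plan, possibly many
    // times (e.g. in watch mode), assembling results statefully.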

    So a plan is created when modules import, and with top-level await and async group callbacks, you can construct plans with arbitrary code at startup. A plan can be run multiple times in a long-running process, and its data is exposed in full fidelity at runtime or output to JSON without running any tests. I think I could streamline refreshing plans with a reused worker thread and really nudge users to doing setup in hooks (which are skipped when just reading the registry, so e.g. you wouldn't need to wait on a db connection just to get the test structure), but tests themselves will probably run with full process isolation at the granularity of groups/tests that you can configure.

    Code can restructure or modify plans as desired. I think this will work well when adding opt-in parallelization.

    There are a lot of implications to this design that I'm still thinking through. One tradeoff is that you cannot create tests or groups after the module has been imported. Trying will throw an error, but maybe that could be relaxed, with some caveats around the capabilities for dynamic tests.

    I think this makes things like file changes and plan refreshing more complicated to deal with, but possibly more powerful, in that it abstracts away the filesystem and makes the framework's internals a nice exposed API (rule of least power). I'm ignorant of how most test frameworks work, but e.g. Jest doesn't have good reporting of registered tests, and I browsed some popular frameworks without seeing what I was looking for.

    With the restrictions and complexity come some nice benefits: because you know tests and groups won't be created on the fly, you can treat them like data, even reactively. It's easier to do things with the test metadata, like making a UI with fine-grained handles on it - I have an early prototype, and this probably excites me the most.

    There's a programmatic API available so you can make your own "test root" objects like test above. It's perhaps strange to have test be so magical but I liked the convenience of a single import. I'm trying to make it as modular as I can, and most behavior should be pluggable.

    It has a plugin architecture with hooks for all of the events, so it's easy to add multiple custom reporters or other integrations. Right now I just use them for outputting data - in one case logging event/summary info to stdout, and in the other outputting JSON to stdout for the plans and results. All outputs are implemented as plugins, so the core stays minimal. I'm thinking about maybe adding control-flow mechanisms like error-handling capabilities to the plugin hooks (e.g. result mapping like enhancing errors, stopping/changing events - I think I want to give the hooks as much control as possible and let userland deal with the mess). The hook surface might end up looking something like the sketch below.
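
    (Hook names here are hypothetical - the real API is still in flux:)

    interface ZestPlugin {
      onRunStart?(plan: unknown): void;
      onTestEnd?(result: { name: string; ok: boolean; error?: unknown }): void;
      onRunEnd?(summary: { passed: number; failed: number }): void;
    }

    // e.g. a minimal stdout reporter
    const logReporter: ZestPlugin = {
      onTestEnd: (r) => console.log(`${r.ok ? '✓' : '✗'} ${r.name}`),
    };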

    It's been a lot of fun to make, and I'm still open to rethinking the API to get it as nice/flexible as possible, input welcome!

    3 votes
  7. lynxy

    I've been experimenting with smart-home setups these past couple of months - we have a range of Ikea Zigbee-compatible bulbs and remotes, a Raspberry Pi 5 with a Sonoff Zigbee controller, and a number of other Sonoff smart sensors and such that we're trying out. The majority of the Zigbee communication is being bridged to the controller through Zigbee2MQTT, which has been almost flawless so far. Apart from a few issues when it comes to binding remotes that like to enter deep-sleep modes, the bridge has worked very well; it's HomeAssistant where the problems lie. It never keeps sync with the network state, it's overly convoluted, and for all of the extra nonsense it has, the configurability is still poor. It doesn't help that Zigbee devices often have limitations when it comes to modifying parameters (brightness, colour temperature) while they're toggled off (but still powered!).

    As a result, I'm writing my own Python-based automation system, with minimal dependencies, and I'm really enjoying it a lot more than I ever did HomeAssistant. I'm enjoying the simplicity of it. It's called "hab", and it mostly handles customisable bindings for switches, sensors, remotes, and lights- with more conditionals. The flexibility of the resulting system should be much better than that of purely using native Zigbee bindings, without being a behemoth of unnecessary code. The big problem that I am trying to solve is in sunrise / sunset re-configuring of lights and scenes. The solution appears to be to write automations which only change the parameters of devices which are currently toggled on, then catch any future remote / switch actions and adjust the target device / scene as necessary. With the use of smart switches like the Sonoff ZBMINIR2, the system will still be able to fail-dumb if the controller is ever knocked out, which is quite important to me.
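
    The "only change devices that are currently toggled on" part could look something like this (a sketch using the mqtt npm package and Zigbee2MQTT's topic conventions; hab itself is Python, so this TypeScript is purely illustrative and the device handling is simplified):

    import mqtt from 'mqtt';

    const client = mqtt.connect('mqtt://localhost:1883');
    const lightIsOn = new Map<string, boolean>();

    // Zigbee2MQTT publishes device state on zigbee2mqtt/<friendly_name>
    client.subscribe('zigbee2mqtt/+');
    client.on('message', (topic, payload) => {
      const device = topic.split('/')[1];
      if (device === 'bridge') return; // skip bridge status topics
      try {
        const msg = JSON.parse(payload.toString());
        if (typeof msg.state === 'string') lightIsOn.set(device, msg.state === 'ON');
      } catch {
        // non-JSON payload, ignore
      }
    });

    // At sunset, re-tune only the lights that are on; lights that are off
    // pick up the new value the next time a remote or switch turns them on.
    function applySunset(colorTemp: number) {
      for (const [device, on] of lightIsOn) {
        if (on) {
          client.publish(`zigbee2mqtt/${device}/set`, JSON.stringify({ color_temp: colorTemp }));
        }
      }
    }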

    3 votes
  8. tomf

    this doesn't count, but I need to barf it up somewhere.

    Last year a Windows 11 update broke my bluetooth. Existing devices were fine, but I couldn't add new ones. I don't use bluetooth for a lot of things, but it's nice for headphones.

    Anyway, the logs said 'The mutual authentication between the local Bluetooth adapter and a device with Bluetooth adapter address failed.'

    For each of the Bluetooth services, you need to change the Log On account to Local Service, with no password. Fixed.

    2 votes