13 votes

What programming/technical projects have you been working on?

This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?

4 comments

  1. zestier

    I decided I wanted to make some silly simulation project inspired by some projects I've seen that simulate ant colonies and such. I don't have a super solid direction for it, but I do know I want to simulate a very large scale of hundreds of thousands to millions of entities.

    I figured that since at that scale I'd need to exit a game engine's game object model anyway, I may as well ditch using an engine in the first place. I have prior experience with SDL and graphics programming, but from the pre-Vulkan and DX12 days, so I decided this project would be a good time to try out SDL3 and its new GPU API. That API, while reasonably similar to Vulkan and DX12, is quite different from what I'm used to and is taking a while to learn.

    So far it's going fine, but it definitely does feel like there are some gaps that are partially SDL oddities but mostly a result of a very fragmented ecosystem.

    A few things off the top of my head that struck me as odd:

    1. I don't see an escape hatch in SDL to get it to create and attach valid GPU devices without using their whole API. It would be nice to be able to use Vulkan directly for all the rendering, but without needing to fight with the platform-specific bits to get it bound to a window. I don't think I would do that at the moment as I'm partially trying out their abstraction, but I could definitely see this project going in the direction of wanting to make Vulkan calls directly in the future.
    2. Shaders feel like chaos. Khronos's decision not to create a modern, standard, human-writable shader language has essentially handed control of the shader-language space to Microsoft without resistance. GLSL is too old and too tied to an outdated graphics pipeline to be taken seriously, so it seems like everyone is all-in on HLSL. That effectively makes every target except DXIL some form of second-class citizen.
    3. Shader compilation is a mess. To get shaders into a reasonably portable set of formats, it seems you write HLSL, compile it to SPIR-V using dxc, convert that to Metal and back to HLSL with spirv-cross, then compile the regenerated HLSL through dxc again to get DXIL. From that process you keep the SPIR-V, Metal, and DXIL outputs (a sketch of the tool invocations follows after this list). Presumably on Apple devices you'd further run the Metal source through a binary compiler, but I've chosen to ignore that because my track record with macOS tells me it isn't worth caring about.
    4. There is a tool called SDL_shadercross that provides a CLI for the steps from item 3. Ironically, it has a somewhat needless dependency on SDL that makes it a pain to build, and it doesn't release any binaries. If I continue down this rabbit hole I'd probably just rewrite that logic in Python using only standard-library imports (to avoid consumers needing to deal with venv junk). If you're curious why the dependency makes it a pain: I develop against SDL added as a submodule so that I can control the version (as the docs recommend), but this project seems to want SDL installed globally so CMake can find it. I don't even know which SDL versions the tool is compatible with.
    5. For some reason, SDL makes you set the swapchain format in the same call that sets the present mode. This is especially weird because I don't see a way to query the current state of either setting, so, as far as I can tell, there's no way to change just one of them.
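    To make item 3 concrete, here is roughly the chain of tool invocations as I currently understand it. This is only a sketch (driven from Rust's standard library here, though the Python rewrite mentioned in item 4 would have the same shape): the file names are placeholders and the exact dxc/spirv-cross flags may need adjusting for your setup.

    ```rust
    use std::process::Command;

    // Run one external tool and bail out loudly if it fails.
    fn run(program: &str, args: &[&str]) {
        let status = Command::new(program)
            .args(args)
            .status()
            .unwrap_or_else(|e| panic!("failed to launch {program}: {e}"));
        assert!(status.success(), "{program} exited with {status}");
    }

    fn main() {
        // 1. Hand-written HLSL -> SPIR-V (kept as the Vulkan output).
        run("dxc", &["-T", "vs_6_0", "-E", "main", "-spirv", "shader.hlsl", "-Fo", "shader.spv"]);
        // 2. SPIR-V -> MSL (kept as the Metal output).
        run("spirv-cross", &["shader.spv", "--msl", "--output", "shader.metal"]);
        // 3. SPIR-V -> regenerated HLSL, purely so dxc can consume it again.
        run("spirv-cross", &["shader.spv", "--hlsl", "--shader-model", "60", "--output", "shader.gen.hlsl"]);
        // 4. Regenerated HLSL -> DXIL (kept as the D3D12 output).
        run("dxc", &["-T", "vs_6_0", "-E", "main", "shader.gen.hlsl", "-Fo", "shader.dxil"]);
    }
    ```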

    The project itself isn't very far along. I mostly just got a bunch of instances rendering with 2D positions and rotations as a starting point. My intended first draft is to do the simulation on the CPU and feed the instance data to the GPU every frame, but I know that won't scale to millions. Once I have logic I like I'll need to convert much of it to compute shaders because of transfer bandwidth (even with only 3 floats per instance right now, that's 12 million bytes per frame to push across the bus at a million entities). This also means I need to be careful about what logic I write so that a transition to compute is actually possible.
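    For a sense of scale, the arithmetic above comes straight from the per-instance payload; the struct below is only illustrative of the "3 floats per instance" I'm pushing right now.

    ```rust
    // Illustrative per-instance data: 2D position plus rotation, 3 f32s = 12 bytes.
    #[repr(C)]
    #[derive(Clone, Copy)]
    struct Instance {
        position: [f32; 2],
        rotation: f32,
    }

    fn upload_bytes_per_frame(entity_count: usize) -> usize {
        entity_count * std::mem::size_of::<Instance>()
    }

    fn main() {
        // 1,000,000 entities * 12 bytes = 12,000,000 bytes (~12 MB) crossing the bus every frame.
        println!("{} bytes/frame", upload_bytes_per_frame(1_000_000));
    }
    ```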

    4 votes
  2. TonesTones

    I have been learning Rust; I've done many of the introductory exercises I can find online and am nearly done with the Rust book.

    Now, I'm trying to build mimics of some command line tools so I can get a better grasp on the language. I'm starting with ls and a subset of glob. I might continue with hecto once I've finished those.
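    As a flavour of what I mean by an ls mimic, a sketch of the sort of skeleton I'm starting from (no flags or metadata yet, just names in a directory); the real thing grows options from there.

    ```rust
    use std::fs;
    use std::io;

    // Bare-bones `ls`: print the entries of a directory, sorted by name.
    fn list_dir(path: &str) -> io::Result<()> {
        let mut names: Vec<String> = fs::read_dir(path)?
            .filter_map(|entry| entry.ok())
            .map(|entry| entry.file_name().to_string_lossy().into_owned())
            .collect();
        names.sort();
        for name in names {
            println!("{name}");
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        // Default to the current directory, like plain `ls`.
        let path = std::env::args().nth(1).unwrap_or_else(|| ".".to_string());
        list_dir(&path)
    }
    ```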

    • I'm quickly beginning to understand why so many programmers are Rust enthusiasts. The program structure and type system remind me of functional programming in OCaml. The borrow checker's approach to memory management sometimes forces me into unfamiliar patterns, but they're usually equivalent to or better than my first instinct.
    • I have a sticky note with "Is it a move, copy, or borrow?" on my desk. It's really weird to think about memory this way instead of "stack or heap?", but I think I like it. (A toy example follows after this list.)
    • I love the way modules are structured, especially the way you can write inline modules. I don't use inline modules much myself, but I can imagine that structure being very helpful when writing good, well-delineated libraries.
    • The biggest shortcoming is the complexity. Good Python code, for example, can read as a plain description of what it does, whereas in Rust I sometimes write idiomatic code where it isn't immediately obvious to me, as a reader, why it works. Comparing it to Python is unfair, though; to me, C feels more readable right now, but that might change once I get more comfortable with Rust.
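    Here is the sticky-note question as a toy example, roughly how I think about it:

    ```rust
    fn main() {
        // Copy: i32 implements Copy, so `a` is still usable after the assignment.
        let a = 5;
        let b = a;
        println!("{a} {b}");

        // Move: String does not implement Copy, so ownership transfers to `t`
        // and `s` cannot be used afterwards.
        let s = String::from("hello");
        let t = s;
        // println!("{s}"); // would not compile: value used here after move
        println!("{t}");

        // Borrow: lend a reference instead of giving up ownership.
        let u = String::from("world");
        let len = byte_len(&u);
        println!("{u} is {len} bytes long");
    }

    fn byte_len(s: &str) -> usize {
        s.len()
    }
    ```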

    Overall, this has been a good way to occupy my free time. I'll definitely be using Rust over comparable low-level languages in the future if I can.

    3 votes
    1. xk3

      > The biggest shortcoming is the complexity

      Completely agree. This is the only reason why I don't use Rust everywhere. If there is a benefit to this, at least people more rarely overestimate their understanding of the code.

      It's interesting to look at mature Rust projects like https://github.com/sharkdp/fd to see all the edge cases that need to be handled in a simple-but-widely-used tool (including performance optimizations).

      3 votes
  3. lynxy

    I have been attempting to understand and re-implement large portions of the AACS Bluray encryption scheme in order to rip a number of discs that libaacs appears to struggle with, although I'm butting up against some gaps in my knowledge due to the vague monopoly the developer of MakeMKV has quite deliberately built for themselves.

    --

    From what I can tell, the steps are as follows:

    The Host Certificate is extracted from licensed Bluray-playing software and can be used to generate a Read Data Key, an important key for navigating bus-encrypted discs.

    The Device Key is extracted from the Bluray drive (how?), and is used first to generate a Processing Key (still unsure of the process), then to extract the Media Key from the Media Key Block (or is it the other way round?). The Processing Key is unique to each version of the MKB file and each disc.

    Using the Host Certificate (or Device Key again?), we can commune with the Bluray drive, containing a UHD Bluray disc, in order to fetch the Volume ID. I believe this is done using SCSI MMC (MultiMedia Commands).

    The Volume Unique Key is found by AES decrypting the Volume ID, using the Media Key as the decryption key in ECB (Electronic Codebook) mode. The Volume Unique Key is the primary component of what is distributed in the KEYDB file.
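    As a sketch of that step exactly as described above (using the RustCrypto aes crate purely for illustration; libaacs itself is C, and I still need to verify against it whether the real derivation is this plain single-block decrypt or a variant that also mixes the Volume ID back in afterwards):

    ```rust
    use aes::Aes128;
    use aes::cipher::{generic_array::GenericArray, BlockDecrypt, KeyInit};

    // Volume Unique Key as described above: AES-128-decrypt the 16-byte Volume ID
    // with the Media Key as the key. With a single block, "ECB mode" amounts to
    // one raw block operation.
    fn volume_unique_key(media_key: &[u8; 16], volume_id: &[u8; 16]) -> [u8; 16] {
        let cipher = Aes128::new(GenericArray::from_slice(media_key));
        let mut block = GenericArray::clone_from_slice(volume_id);
        cipher.decrypt_block(&mut block);
        let mut out = [0u8; 16];
        out.copy_from_slice(&block);
        out
    }
    ```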

    The Volume Unique Keys can then be used to calculate the Title Keys (through some form of hash with title metadata?), which are the actual keys used to decrypt individual titles inside the BDMV/STREAM directory.

    Finally, for some discs with BD+ protection enabled (very few at the moment), we must utilise BD+ tables, which appear to be unique for each version of the MKB file. BD+ tables contain the necessary patches to fix video errors and circumvent the need to emulate the BD+ VM.

    --

    I'm still trying to work through a couple of the steps as implemented by libaacs and libbluray, though my C is a little rusty and the steps can get a little (very) convoluted. I have a number of hurdles to navigate. Primarily, libaacs appears to make a number of accommodations for AACS2, but this is "not fully implemented". The specification for AACS1 is fully public, but the specification for AACS2 is distributed only to licensed companies under NDA. The biggest blocker in this process is usually the bus encryption that is mandatory for all UHD discs, but I'm using a couple of Asus BW-16D1HT drives that have been flashed with LibreDrive-compatible firmware, which should allow me to avoid bus encryption entirely.

    From what I've read, this works because of the difference in operation between "UHD Official" and "UHD Friendly" drives. The former are drives that are UHD compatible and have been licensed to play UHD content. The latter are drives that are UHD capable but have not been licensed, and so refuse to play UHD content; flashing them with LibreDrive forces the drive to act like a dumb reader, skipping all the anti-piracy computation, so you can dump the contents of a UHD Bluray just fine. It was my understanding that once a drive has been flashed to be LibreDrive compatible, you can simply mount the disc as if it were external storage and dump the contents without issue, although I've seen a few posts on the MakeMKV forums by the developer which appear to imply that LibreDrive is a state that must be triggered, can only be triggered by MakeMKV itself, and is disabled again on power-cycle? Either way, I can mount the discs and have dumped what appears to be the entire disc contents without issue.

    To be honest, I've understood and re-implemented multiple steps in this process, but I'm still struggling a little with dumping the contents of the MKB file. Past the initial file metadata it doesn't follow the expected file structure at all. I wish the MKB and certificate file structures were neatly documented somewhere, but alas.

    Finally, forum posts appear to indicate that some drives will not play certain media until a disc with the newest MKB version has been inserted (something about un-revoked keys?). I wasn't aware keys could be un-revoked as well as revoked. I'm still not sure why this works, or even if it works. I'm also unsure whether Device Keys work on all MKB versions below a specific version, or only between two specific versions. Will an un-revoked Device Key that works with MKBv82 work with all prior versions? Who knows.

    --

    As an addendum, for anybody asking why I don't just use MakeMKV: I'm not a fan of the developer's attitude and approach. MakeMKV started life relying on the collaboratively sourced, public KEYDB file, fetching VUKs from it in order to decrypt content. At some point it transitioned to a "hashed keys" file, a proprietary database of keys that is distributed by MakeMKV internally and stored encrypted on disk. When asked whether MakeMKV could expose to users the VUKs of the media they were decrypting with the tool, the developer replied that it would be "too complicated to extract a VUK" from their software. It's only too difficult because it was designed that way; it would be trivial for the developer to implement, but they're building a walled garden. It's no wonder the majority of people use the tool when it works more consistently than the very few alternatives that exist, but this approach is very anti-collaborative, and completely at odds with the philosophy of the software / piracy / media scene.

    I don't know- I'm probably wasting time in a fit of stubbornness.

    1 vote