TangibleLight's recent activity

  1. Comment on AI and the American smile in ~humanities

    TangibleLight
    (edited )
    Link Parent

    While I was writing my comment I thought about adding a section at the end addressing situations where LLMs are useful. You brought up both of the examples I considered, grammar checking and new languages, so here's my take on those from this lens of basic statistics.

    Both are contexts where uniformity, middling quality, and lack of soul are acceptable or beneficial. For perfect grammar, you want to eliminate variance. To learn a new language, it's an improvement to regress to the mean.

    In general the people I see who are proud of using generative AI appear to believe it has a place everywhere. Your comment I replied to seemed to suggest that AI can compose meaningful text, if only you manage to prompt it just the right way. Until someone gives a nuanced description of their opinions as you've just done, they are indistinguishable from grifters trying to push AI into contexts where it doesn't make sense. So I apologize for incorrectly placing you in that group.

    The real argument in my first comment is that I categorically reject the idea that generative AI should have any place in the humanities, aside from some limited applications in language and visual arts. I think it's more important to be critical and intolerant of AI grift than to be polite to benign enthusiasts, so the "derisive sneer" is justified.

    I didn't address it in that comment, but I also categorically reject that they have any place in the sciences. The argument from basic statistics is that higher-accuracy events have lower probability, so generative AI can't be applied to contexts where accuracy or correctness is important.

    The new o1 demos certainly look accurate, so they challenge this accuracy argument, but I'm still skeptical. I haven't had a chance yet to interact with o1 myself, but what I've heard from others is that the improvement from 4o isn't as substantial as the demos make it seem. My experience with other "multi-step workflow" AI products is not good, and OpenAI hasn't given me much confidence that o1 has any real secret sauce over the others apart from sheer volume of compute resources. The rake kickflip meme comes to mind.

    Basically, knowing that LLMs work in a certain way does not translate to knowing how well they perform in specific tasks and situations. In fact, it can stop you from exploring these possibilities, which in turn limits your practical understanding of them.

    Fair enough. The models are black boxes, so it is impossible to make accurate predictions about how well any one performs in a given context.

    But I counter that basic statistics is a good lens for predicting which contexts generative AI, as a technology, could possibly do well in. And as long as they remain black boxes, it will be impossible to reliably engineer their output to do well in non-obvious contexts.

    1 vote
  2. Comment on AI and the American smile in ~humanities

    TangibleLight
    (edited )
    Link Parent

    In practice this is such a simplification that it is nothing more than a derisive sneer.

    I don't think so. Anyone using that phrasing is clearly biased against the thing, but it's not wrong.

    The thing is a statistical model, so it makes sense to consider statistical questions about it. Fundamentally, a "generative" model just samples whatever abstract space the model uses to parameterize the training data. Even if it seems naive in context, basic statistical ideas like regression to the mean and central limit theorem still apply.

    Qualitatively, I'd say regression to the mean manifests as "tangentially related lowest common denominators". In layman's terms, I'd describe the intuition behind the central limit theorem as "all it knows how to do".
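    As a toy illustration of that intuition (nothing LLM-specific here, just sampling a made-up population with Python's standard library): the more you aggregate samples, the more the results bunch up around the mean and the rarer the extremes become.
    import random
    import statistics

    population = [random.uniform(0, 100) for _ in range(10_000)]

    for n in (1, 10, 100):
        # Each "output" is the average of n independent draws from the population.
        outputs = [statistics.mean(random.choices(population, k=n)) for _ in range(1_000)]
        print(f"n={n:>3}  mean={statistics.mean(outputs):6.2f}  "
              f"spread={statistics.pstdev(outputs):5.2f}")
    # The spread shrinks roughly like 1/sqrt(n): more aggregation, less variance.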

    And yes, by definition the output is related to the input, so you can bias the expected result one way or the other. Be more specific with the prompt and you can get pretty far! You're still subject to the same basic statistical ideas, though. The "less average" you want the output to be, the more work you have to put into the prompt - and context size puts a hard limit on that.

    I can't speak for chocobean, but I personally tend to use that derisive subtext because the general appearance of the people who proudly use LLMs is that they believe regressing to the mean and eliminating variance is somehow desirable and efficient. In practice, it sucks the voice and soul and culture out of every piece of content that LLMs touch. It's backwards and wasteful.

    5 votes
  3. Comment on best way to go about with a script that seems to need both bash and python functionality in ~comp

    TangibleLight
    Link Parent

    Python gives you total control of all the input and output streams of all the subprocesses. It's a bit more verbose than Bash but I promise it's not missing any features.

    A demo of some PIPE gymnastics in case it's helpful: Two subprocesses interacting with each other and Python code to glue them together.

    The first subprocess is a bash script that echoes lines from stdin with -I prepended. Python forwards stdin directly to this subprocess, but it intercepts stdout.

    Each of those outputs is passed as an argument to a new date process. The -I<format> option prints the ISO timestamp to the given precision: date, hours, etc. Python again intercepts the output.

    If the date process succeeded, it URL-escapes the formatted date and prints it. If it failed (due to invalid argument), it prints the error message unchanged.

    Keyboard interrupt Ctrl+C works at any point. You can also send end-of-file Ctrl+D to end everything gracefully.

    Probably not the best way to solve this problem, but a demonstration of data weaving between processes without delays.
    import sys
    from subprocess import Popen, PIPE, run
    from urllib.parse import quote
    
    COMMAND = '''
    while read -r line; do
        echo "-I$line"
    done
    '''
    
    proc = Popen(
        ['bash', '-c', COMMAND],
        stdin=sys.stdin,
        stdout=PIPE,
        encoding='utf8',
    )
    
    for fmt in proc.stdout:
        result = run(
            ['date', fmt.strip()],
            capture_output=True,
            encoding='utf8',
        )
        if not result.returncode:
            print(quote(result.stdout.strip()))
        else:
            print(result.stderr.strip())
    

    Some sample output:

    $ python script.py
    date
    2024-09-11
    huor
    date: invalid argument ‘huor’ for ‘--iso-8601’
    Valid arguments are:
      - ‘hours’
      - ‘minutes’
      - ‘date’
      - ‘seconds’
      - ‘ns’
    Try 'date --help' for more information.
    hour
    2024-09-11T22-04%3A00
    ^CTraceback (most recent call last):
      File "/home/allem/PycharmProjects/scratch/runner.py", line 20, in <module>
        for fmt in proc.stdout:
    KeyboardInterrupt
    

    Now obviously in practice this particular problem could be solved in a million easier ways - but I hope this convinces you that Python is not missing any features regarding piping data and handling interrupts. I haven't even touched the io module or os.mkfifo, which together generalize shell scripting concepts like process substitution, tee, and similar.
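    For instance, here's a rough sketch of Python playing the roles of tee and process substitution with a named pipe from os.mkfifo (sort and wc -l are just stand-in consumers, and this is a toy, not a recommendation):
    import os
    import tempfile
    from subprocess import Popen, PIPE

    with tempfile.TemporaryDirectory() as d:
        fifo = os.path.join(d, 'fifo')
        os.mkfifo(fifo)

        # Like `sort <(...)` in bash: sort reads from the named pipe.
        sorter = Popen(['sort', fifo], stdout=PIPE, encoding='utf8')

        # Python plays tee: each line goes both to the FIFO and to `wc -l`.
        counter = Popen(['wc', '-l'], stdin=PIPE, stdout=PIPE, encoding='utf8')
        with open(fifo, 'w') as pipe_in:
            for line in ['banana\n', 'apple\n', 'cherry\n']:
                pipe_in.write(line)        # one copy to sort
                counter.stdin.write(line)  # one copy to wc -l
        counter.stdin.close()

        print(sorter.stdout.read(), end='')            # apple, banana, cherry
        print(counter.stdout.read().strip(), 'lines')  # 3 lines
        sorter.wait()
        counter.wait()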

    If Ctrl+C doesn't work for some reason, then some component somewhere is improperly intercepting those interrupts. It's probably not a Python (or bash) issue.

    8 votes
  4. Comment on How to monetize a blog in ~tech

    TangibleLight
    Link Parent

    I think you might have stopped reading a little too early.

    20 votes
  5. Comment on Types and other techniques as an accessibility tool for the ADHD brain - Michael Newton in ~comp

    TangibleLight
    Link Parent

    Here's my takeaway on fuzzing and property testing; anyone more familiar please correct me on any nuance here.

    In fuzzing, you litter code with assertions to check pre- and post-conditions; if the program is ever in an invalid state, it fails fast. Then the fuzzer runs with as much input as possible; if it never fails, then you have confidence that every property implied by every assertion holds. Run for longer on more inputs, and that confidence (in principle) goes higher. So if that's right, that's why I see some parallel to property testing. Every assert is a property that's being tested.
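    To make the fuzzing half concrete, a minimal sketch with Google's atheris fuzzer (assuming it's installed; clamp_percent is a made-up unit with a made-up invariant):
    import sys
    import atheris

    def clamp_percent(x: int) -> int:
        clamped = max(0, min(100, x))
        assert 0 <= clamped <= 100  # the post-condition the fuzzer hammers on
        return clamped

    def TestOneInput(data: bytes) -> None:
        # Turn the raw fuzz bytes into a structured input.
        fdp = atheris.FuzzedDataProvider(data)
        clamp_percent(fdp.ConsumeInt(4))

    # Coverage instrumentation (atheris.instrument_imports / instrument_func)
    # is omitted here for brevity.
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()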

    Property testing is more local; you're suggesting similar granularity to unit tests. So you can reasonably enforce constraints on the shape of the input data to avoid wasting time checking invalid inputs. The number of properties checked in any particular unit is less, so you don't need to run as long to gain confidence you've checked everything. I guess you also get some sensible "coverage" metric to be sure the properties you want to check actually do get checked.

    My sense is these strategies aren't exclusive. You could litter code with pre- and post-condition assertions, constrain property-test inputs as usual, and have the test exercise all the assertions you're interested in for that unit. All the same assertions are in play when you run the fuzzer. Property tests are then units that check the happy path, and fuzzing makes sure things don't fail even on arbitrary, potentially-invalid inputs.
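    And the property-testing half, sketched with the hypothesis library (the encode/decode pair is just a made-up unit under test; the assert inside encode is the kind of post-condition a fuzzer would also exercise):
    from hypothesis import given, strategies as st

    def encode(s: str) -> bytes:
        out = s.encode('utf8')
        assert len(out) >= len(s)  # post-condition: UTF-8 never uses fewer bytes than characters
        return out

    def decode(b: bytes) -> str:
        return b.decode('utf8')

    @given(st.text())  # hypothesis generates (and shrinks) the inputs
    def test_roundtrip(s):
        # The property: decoding an encoded string gives the original back.
        assert decode(encode(s)) == s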

  6. Comment on Types and other techniques as an accessibility tool for the ADHD brain - Michael Newton in ~comp

    TangibleLight
    (edited )
    Link

    This was illuminating and has challenged some of my beliefs and assumptions about programming.

    I related to his descriptions of adult ADHD - and the discussions after the talk - more than I anticipated. I almost find those discussions more valuable than the particular "coping mechanisms".

    Up till this point it never even occurred to me to think of these things as accessibility tools, but that lens gives some very insightful context on the kinds of tasks I struggle with in software development. I hope that awareness puts me in a better place to look for other tools to make things easier on myself.


    As far as the technical details go: I find value in static types for the same reasons discussed. I'm not familiar with property-based testing as a term; I guess it is a more nuanced sort of fuzz testing? I feel I would fall into the trap of over-complicating the constraints.


    Thanks to whoever updated the tags.

    5 votes
  7. Comment on The American/Western right-wing is a threat to queer people worldwide in ~lgbt

    TangibleLight
    Link Parent

    That's exactly my point. I suspect that the ugly truth is there's some fraction of the population everywhere that behaves this way. I suspect that normalizing this by population would be more or less a solid color on the map.

    If that's not the case and there are islands of high population with no hate, you can ask what those places are doing right. Or, vice versa, what the other places are doing wrong.

    6 votes
  8. Comment on The American/Western right-wing is a threat to queer people worldwide in ~lgbt

    TangibleLight
    (edited )
    Link Parent

    I live in the state of California, and it seems that everyone around the world thinks this is a liberal utopia. But the reality is that the SPLC has identified more hate groups in this state than any other in the union.

    I look at the SPLC map and it more or less looks like a population map. That California has the most hate groups is not particularly surprising given that California has the most people to form those groups. Texas, Florida, and New York are similar.

    I am very curious to see such a density map normalized by population, say by state or by county. So far I haven't found one but I'll keep looking or see about building my own.

    8 votes
  9. Comment on The monospace web in ~comp

    TangibleLight
    Link

    I have been toying with a similar idea for a personal blog... nothing quite polished, though.

    I enjoyed playing with typesetting in general with this; it's interesting to remember you're not actually limited to the terminal character grid here. I built a small static site generator that renders LaTeX out into MathML and embeds it; in JavaScript I inspect inline math tags and add some inline padding to make them an integer number of characters wide. I'd rather do this in the static site generator, but I can't figure out how to reliably compute the width for each math tag. It's more reliable to just do it in JavaScript when the page is rendered.

    You can also play with variable fonts and font sizes. For example, mixed font families: block quotes could be in a proportional font, headers could be double or one-and-a-half height, or different content could use different fonts. In my prototype I used the monaspace family. The different variants all align to the same grid and typeset together nicely. Main body was sans, headers and code were slab serif, and asides/pseudocode were handwritten. The ligatures and "texture healing" look fine, but they're not super important to me.

    I haven't worked on the project in a while, though; I was stuck on a couple of things:


    Fixed width: I enjoy writing justified to a fixed character width, but if the display is too small or the user's font too large, it breaks everything. Why bother to write justified if a mobile browser will just reflow everything anyway? I could pick a narrower width, but some readers will still have the problem (and too narrow looks silly on desktop). I can't be bothered to write justified to multiple widths; that seems like a nightmare, and I think it would produce strange Moiré-ish patterns of whitespace on the page.

    I also encountered a strange issue on mobile where <pre> tags have much much smaller font than main body text. It's related to some accessibility setting where font size is increased to match the OS font size, but it doesn't seem to apply to everything and I'm not sure how to make it do so.

    Violating fixed width: When there's space, I'd like to be able to place figures and asides outside the fixed-width limit in the margins. In theory it would be straightforward with float but I never got it to work right.

    Generating figures: ASCII art and box drawing characters are too tedious for me. I'd like the static site generator to compile tikz figures to svg and embed those. I did get it partially working, but the font size and DPI are all wrong. It's important to me that the tikz text and the $\LaTeX$ text be the same size and font. Fonts are easy, but DPI is not. Arrows and borders were also rendered too thin, like less than a pixel wide, so they're a pain to read. I suspect it's something to do with the template \documentclass or some esoteric command-line flag to set units, but I never quite figured it out.
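    For reference, the bare conversion step I mean looks roughly like this - pdflatex plus poppler's pdftocairo here purely as an example; it only does the tikz-to-svg part and doesn't address the size/DPI matching at all:
    import pathlib
    import subprocess
    import tempfile

    TEMPLATE = r"""\documentclass[tikz]{standalone}
    \begin{document}
    %s
    \end{document}
    """

    def tikz_to_svg(tikz_code: str, out_svg: str) -> None:
        with tempfile.TemporaryDirectory() as d:
            tex = pathlib.Path(d) / 'fig.tex'
            tex.write_text(TEMPLATE % tikz_code)
            # Compile the standalone figure to a tightly cropped PDF.
            subprocess.run(
                ['pdflatex', '-interaction=nonstopmode', f'-output-directory={d}', str(tex)],
                check=True, capture_output=True,
            )
            # Convert the single PDF page to SVG.
            subprocess.run(
                ['pdftocairo', '-svg', str(pathlib.Path(d) / 'fig.pdf'), out_svg],
                check=True,
            )

    tikz_to_svg(r'\begin{tikzpicture}\draw (0,0) rectangle (2,1);\end{tikzpicture}', 'figure.svg')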


    This has me inspired to work on the project some more! Thanks for sharing!

    3 votes
  10. Comment on A symbol for the fediverse ⁂ in ~design

    TangibleLight
    Link Parent

    I've never heard of such a symbol of distress. I'm sure some exists, and I'm curious to know what it is, but surely it's not universal.

    Offhand the most similar things I can think of are the biohazard and ionizing radiation warning symbols - they both have that three-fold symmetry - but IMO they're hardly similar to ⁂.

    4 votes
  11. Comment on YouTube without a working ad blocker in ~tech

    TangibleLight
    (edited )
    Link Parent

    I think youtube may be doing some A/B testing to see how people react if they cannot block ads at all. If that is the case, stopping youtube usage until you can block ads again is the best course of action.

    If your goal is to watch YouTube ad-free, this may or may not be true. If you aren't watching ads and you aren't paying for Premium, YouTube only loses money on you*. If they strike some optimal level of annoyance with their ads, the people who use adblock will leave and not strain their servers, but the people who tolerate ads or pay for premium will stay. Maybe they can improve their margins even if overall viewership goes down.

    * Except that people who use adblock do still count toward viewership, and they do still (usually) watch sponsorships which subsidize YouTube's creators. If viewership goes down, maybe sponsorships will also lose value and YouTube would have to spend to incentivize creators to stay on the platform.

    Honestly I don't know. Neither of those arguments seems quite sound to me (Does "optimal annoyance" even exist? Would sponsorships really lose value?). My real point is that the issue is complicated, and I don't know that you can definitively say any course of action is best.

    I do think you can definitively say, if your goal is to watch YouTube ad-free, the worst signal you could give is to switch to Premium in response to increased ads or adblock-block. There's a separate argument on whether that's ethical or not. Personally I think, even if you want to pay, this is a bad time to do it since it just incentivizes enshittification.

    Maybe the best you can do is say only to yourself, "I don't want to watch these ads," and don't watch them. Cut out YouTube altogether if that's the only way to do it.

    19 votes
  12. Comment on Anyone can access deleted and private repository data on GitHub in ~comp

    TangibleLight
    (edited )
    Link

    This is mildly concerning if you're not expecting it, but I don't think there's much to worry about here. If any part of the fork network is public then all commits in the entire network are visible - but if none of the fork network is public, then none of the fork network is visible.

    GitHub prevents making part of the fork network public while any part is [currently] private.

    Accessing commits from deleted forks is surprising, but it shouldn't really be considered a new vulnerability. If you committed a secret to a fork of a public repository, you committed a secret to a public repository. That secret is already public. You must delete the commit, not the branch, and not the repo. You should treat that secret as compromised in any case.

    This might have an impact if you have multiple orgs and multiple developer roles with different permissions - a developer might gain access to commits in an org for which they don't have permissions. However, doing this requires that they have permissions in at least one org which can see the repository.

    Trying it myself

    To double check all this:

    I create a private repo from a public template. I can view commits on the template via my private repo, but I cannot view private commits on the public template repo. There doesn't seem to be a "real" connection so things are fine; I think those commits from the template are just waiting to be garbage-collected.

    I create a private fork of my private repo (in another org that I own). I can view any commit from either repository, which is not intuitive, but all is private so things are fine.

    I try to make the private fork public. GitHub does not allow this: "For security reasons, you cannot change the visibility of a fork." Fair enough, things are fine.

    I make the upstream public. GitHub severs the connection to the fork and duplicates history - there is no longer a connection via the fork network. Any commit which was not in upstream before making it public is not visible via the public repository. I also can no longer view public upstream commits via the private fork because it is not in the fork network.

    I try to create a new private fork of the public upstream. GitHub does not allow this: there simply is no way to choose the visibility of the fork. I try to make the fork private after creating it. GitHub does not allow this: "For security reasons, you cannot change the visibility of a fork."

    I delete the upstream repository. GitHub severs the connection to the fork and duplicates history. Any commit which was not in the fork before deleting upstream is lost.

    This behavior that I observed is different from what the article describes. I don't know if that means GitHub has changed this since the article was published or if I did something else differently.

    Ah, the trick is that if you delete the private fork before making upstream public, GitHub never severs the connection (the fork is gone so there is no connection to sever) so the commits in the private repository are stuck in the fork network of upstream. When upstream is made public, it brings those along with it.

    GitHub could fix this by keeping a record that a deleted fork repository was private, so that when upstream is made public those commits are dropped just as they are when the fork still exists.

    I don't think there's anything to be done about the behavior when everything is public. You have to treat any secret ever committed to a public repository as compromised. That's always been recommended security practice.

    8 votes
  13. Comment on What have you been listening to this week? in ~music

  14. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    (edited )
    Link

    Last week I wrote - paraphrasing - "Vulkan tutorials are so confusing! Why do they recommend this convoluted approach? It's so much simpler to do this!" and described a simpler way that I was attempting in a toy rendering engine I'm writing to learn more about Vulkan.

    You ever hear of Chesterton's Fence?

    This week, I learned why things are usually set up in that convoluted way. The only reason I had not discovered it last week is that Vulkan has three... four... six ways of handling vsync and I misunderstood which one I was using. I thought I had vsync-equivalent off, but I actually had it on, and turning it off revealed all the errors in my approach.

    Present modes

    When using mailbox or fifo present modes (more-or-less vsync on), you can't seem to get an out-of-date image. I assume this is because window resize events (on X11) can only occur after a vblank, so if you lock your framerate to vblanks and rebuild the swapchain after every window size change, you'll never get an out-of-date.

    However, using immediate or fifo_relaxed present modes (more-or-less vsync off) makes vkAcquireNextImageKHR and vkQueuePresentKHR return out-of-date more often than not while the window is resizing, which means you either drop frames or deadlock.

    I do think the tutorials complicate things a bit more than they need to; there's probably a more straightforward way to explain things. But I'm now confident none of what they were doing is unnecessary. As one should expect. Chesterton's fence and all that.

    What I'm doing now is to put image acquire and queue present in a particular nested loop that should maintain the synchronization primitives and prevent deadlock. It definitely feels simpler than the ad-hoc approaches the tutorials seem to use. I'll keep playing with it and see if that's true, but, having learned my lesson on wanton oversimplification, I'm a bit suspicious.

    So I fixed that bug, and fiddled around a bunch with graphics pipelines and synchronization.

    Synchronization is still a little scary, but I think I'm building a decent mental model of things. I struggled through intentionally trying to do things the wrong way, safely: letting multiple frames in flight read from a vertex buffer which is updated each frame on the CPU. You aren't really supposed to do this; each frame in flight should get exclusive access to its own region of memory so there are no memory access violations. But doing it anyway and figuring out how to properly set up synchronization for this was illuminating.

    Behold, my new greatest achievement: a wiggly box.

    The graphics pipeline really seems to be where Vulkan shines compared to OpenGL. I'm still a bit out of my element, but I'm learning enough to tell that I like it.

    Next step is to set up some simple 3d geometry. I want to experiment with render-to-texture and set up some shadow maps.

    4 votes
  15. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    Link Parent

    Great success! I've gotten ImGui working here via cimgui.

    https://i.imgur.com/QV0gvUJ.png

    This has taught me a ton about the zig build system. I have an in-source wrapper package that pulls imgui and cimgui; the wrapper package's build.zig runs the cimgui generator (for the GLFW and Vulkan backends) and collects all the outputs into a cached directory. It builds a shared library for cimgui and exposes a Zig module for a binding wrapper.

    From the main project, all I need to do is declare that in-source wrapper as a dependency, and add an import. This in-source package approach is new to me, but I like it!

    .dependencies = .{
      .cimgui = .{ .path = "cimgui" }
    }
    
    const dep = b.dependency("cimgui", .{});
    exe.root_module.addImport("cimgui", dep.module("zig-cimgui"));
    
    Linker hell

    Up to this point I've been trying to statically link everything, but doing this I encountered linker issues that I don't yet know how to work around. cimgui needs to link against GLFW to build the backend, but my app also needs to link against GLFW. Building cimgui as a static library causes some issue here.

    I was able to get things to work by building cimgui as a shared library, but still statically linking to GLFW. I have a feeling this mixed linkage is going to bite me later on. But since it seems to be working okay, I'll leave it for now and revisit once I've had a bit of a break from fighting the build system.

    1 vote
  16. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    Link Parent

    imports a Java rigid body physics engine

    Is this an off-the-shelf engine or one you're writing just for this project? Curious how the details might or might not relate to the cbscript implementation.

    2 votes
  17. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    (edited )
    Link Parent

    Back at my PC with a bit more time to talk about this.

    I say my greatest achievement is a red box, but this is obviously reductive. I think it is a very well behaved red box.

    Some things I'm trying to do correctly:

    • Smooth rendering while resizing the window
    • Triple buffering
    • Proper command pool management and single-shot command buffers
    • Validation layers and debug messages
      • Integrate with Zig std.log and Safe/Fast optimization levels.
      • Everything seems compliant, even while rebuilding the swapchain. This was not the case when I followed https://vulkan-tutorial.com/ or https://vkguide.dev/. Resizing the window interrupted rendering and/or caused various validation errors and/or deadlocked the program.
    • Input event bus
      • Zig tagged unions here are nice, instead of glfw's global callbacks.
      • Currently just an ArrayList. Some SoA structure might be better in practice, but I'm not bothering for now.
    • No leaking memory or Vulkan objects.
    On the event bus

    In my past projects I'd try to be "reactive" and handle events as they come with the GLFW callbacks. This always ends up being a nightmare and severely limits my attempts at parallelization; all events originate from the main thread when glfwPollEvents or similar is called, so I'd end up building these little synchronized event buffers everywhere, which caused issues. I'm hoping a global buffer will be fast and make handling events on appropriate threads possible.

    I've also thought about making a general event bus for application events. A big mmap ring buffer could be fast, but I'm not sure if this is a good idea or not. If I do want to pursue multithreading I'll need to handle thread communication one way or another. An event queue seems reasonable enough but I'm not sure how that shakes out in practice. What I've gathered from research online is that things tend to use a big global event buffer and that's it... this is surprising to me but I'm not sure.
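    A sketch of the shape I mean - written as Python for brevity rather than the actual Zig, with made-up event types and pattern matching standing in for the switch on the tagged union:
    from dataclasses import dataclass

    @dataclass
    class KeyEvent:
        key: int
        pressed: bool

    @dataclass
    class ResizeEvent:
        width: int
        height: int

    Event = KeyEvent | ResizeEvent  # poor man's tagged union

    event_bus: list[Event] = []

    # Window callbacks (e.g. registered with GLFW on the main thread) only record.
    def on_key(key: int, pressed: bool) -> None:
        event_bus.append(KeyEvent(key, pressed))

    def on_resize(width: int, height: int) -> None:
        event_bus.append(ResizeEvent(width, height))

    def drain_events() -> None:
        # Once per frame: take everything buffered so far and dispatch it.
        events = event_bus[:]
        event_bus.clear()
        for ev in events:
            match ev:
                case KeyEvent(key=k, pressed=True):
                    print(f"key {k} down")
                case ResizeEvent(width=w, height=h):
                    print(f"resized to {w}x{h}")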

    On the swapchain and bad tutorials.

    In past attempts at Vulkan, inspired by those tutorials, the swapchain has always been a nightmare, especially to smoothly handle window resizing. The guides I'd seen have this flow like:

    1. Acquire the next image
      • If it's out of date
        • rebuild the swapchain
        • rebuild sync primitives
        • acquire a new image
          • If that image is out of date, drop the frame.
        • record triple-buffered command buffers
    2. Submit command buffer
    3. Present the image
      • If it's out of date (it probably is)
        • rebuild the swapchain
        • rebuild sync primitives
        • acquire a new image
          • if that image is out of date (it probably is), drop the frame
        • record triple-buffered command buffers
    4. Poll events

    Steps "rebuild the swapchain" and "rebuild sync primitives" are doing a lot of heavy lifting here. If you don't do the sync primitives right, your program deadlocks or fails validation. There's all this state you need to manage about the current handle and the previous handle. Sync primitives get invalidated and confused with triple buffering. and and and.... in the past this is where I'd give up.

    My insight this time is: Just keep the create info around, and poll events at the top of the loop. State management and synchronization problems are gone.

    1. Poll events
      • If the framebuffer changed size, set handle = .null_handle
    2. If handle is null
      • Update info.image_extent
      • Rebuild the swapchain and set handle
      • Set info.old_swapchain = handle
    3. Acquire the next image
      • If that image is out of date (I don't think it can be)
        • Set handle = .null_handle and drop the frame.
    4. Reset triple-buffered command pool
    5. Record single-shot command buffers
    6. Submit commands
    7. Present the image
      • If that image is out of date (I don't think it can be)
        • Set handle = .null_handle and drop the frame.

    The resulting code is flatter. There's only a fraction of the state management (handle, info, images, and views). You never have to rebuild sync primitives. Triple buffering the command pools is easier. You can also maintain a "global" command pool that contains reused command buffers, but it's not required as in the guides. You never record a command buffer you don't end up submitting.

    If any Vulkan gurus are out there, I'm curious if you see any issues with this. Are there any references that get this stuff right?

    7 votes
  18. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    Link

    Making my nth attempt at learning Vulkan on the side, out of an interest in rendering engines. I've known OpenGL fairly well for many years now, but I've always been frustrated by the state machine and have heard good things about Vulkan. All my prior attempts, being hobby projects, have failed somewhere in the swapchain and render pass setup. Why go through the trouble when OpenGL is right there?

    This attempt seems to be going better. These diagrams have been invaluable. Pieces are starting to fit into place and I feel I'm starting to grasp the big picture of things.

    Behold, my greatest achievement: a red box, produced with nothing but the Zig compiler and my bare hands. All written without reference, so I think I'm in good shape!

    Well, not nothing but the Zig compiler. There's also the binding generator and dynamic loader Snektron/vulkan-zig, and the GLFW C interface. Dynamically loading libraries and platform-specific windowing APIs aren't really things I'm interested in dealing with, especially not for a hobby project.

    8 votes
  19. Comment on At some point, JavaScript becomes indefensible in ~comp

    TangibleLight
    (edited )
    Link

    For how many centuries are you going to carry this monumental burden?

    This doesn't have anything to do with JavaScript in particular, really, but that last remark reminded me of this talk by Jon Blow:

    Jon Blow - Preventing the Collapse of Civilization - DevGAMM 2019

    And on the surface it seems entirely unrelated, but I'm also reminded of this keynote:

    Timothy Roscoe - It's time for operating systems to rediscover hardware - USENIX ATC 21 / OSDI 21

    The common theme to me seems to be an issue of communication. Passing knowledge from one generation to the next. Passing knowledge from low-level to high-level programmers. Communication from high-level to low-level seems alright, except that there's not much the low-level people can do against sheer momentum aside from micro-optimizations. Coordination and communication between disparate hardware manufacturers, operating systems developers, and platform developers.

    I don't think the median programmer fears, or at least respects, complexity in the way they probably should. Rather than reducing complexity overall, there's a tendency to reduce surface-level complexity by reaching for abstractions over existing complexity that simplify things in the short term. In reality, all that does is add compounding interest when others inevitably do the same process on top of your component.

    I don't think the median executive cares about complexity at all. "Executive" is not the right term here - generally, I'm talking about people who are able to direct the labor of others. Truly reducing complexity is hard, takes a lot of resources, and requires cooperation and solid communication among all involved. The only people who can orchestrate that are those who direct labor of others - many others - and why would they invest all that effort when the existing stack is, like, right there?

    I don't think I'm quite the luddite (pessimist? extremist? not sure what he'd call himself) that Jon Blow is, but I do try to do my own part to avoid stacking layers of abstraction where I can. It only works to a point, though; I don't have the resources to deploy anything more fundamental than an OS-native binary. I can avoid writing new JavaScript, though.

    3 votes