TangibleLight's recent activity

  1. Comment on Game simulation programming: Continuous time in ~comp

    TangibleLight

    essential complexity is essential. inessential complexity can be removed. this seems tantamount to saying 'nothing can be simplified', which is empirically wrong

    I think most one-line adages like that, if you take them to their extreme conclusion, are empirically wrong. Here I was trying to talk about the essential complexity in the problem.

    If two entities in the physical simulation cannot interact, there trivially is parallelism there. If two entities are currently interacting, there trivially isn't. If they might be interacting, the problem is harder. And that is the problem. You can shuffle how you frame it, look at different heuristics or binning strategies, look at graph analysis to try to prove which entities can't interact... but ultimately that's the thing it seems like we're trying to address.
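To sketch what that conflict detection might look like, here is a toy Python example (the names and the union-find approach are mine, not from the video): merge any two entities whose interaction radii overlap into one conflict set; disjoint sets can then run in parallel, while each set is updated by ordinary sequential code.

```python
def conflict_sets(positions, radius):
    """Group entity indices into sets that *might* interact.

    Two entities are merged if they are within 2*radius of each other;
    disjoint sets can be simulated in parallel, and each set is updated
    by plain sequential code.
    """
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if abs(positions[i] - positions[j]) <= 2 * radius:
                union(i, j)

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# 1-D example: entities 0 and 1 are close, 2 is isolated, 3 and 4 are close.
print(conflict_sets([0.0, 1.0, 5.0, 9.0, 9.5], radius=1.0))
# -> [[0, 1], [2], [3, 4]]
```

The pairwise loop is O(n²); a real engine would feed this from a broad-phase structure like a grid or BVH, but the grouping idea is the same.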

    in the case of collisions, the classic approach (accelerate the overlapping objects out of each other) works and is pretty easy; can't speak for quality, but i'm told super mario bros has good game feel.

    Fair enough. I was focused on collision since that's most of the examples that Sean used in the linked video, but I think there's obviously some more interesting problem in there about looking at the causality in the system and seeing which parts of it you can actually run in parallel.

    why is it better at determinism? i think they both suffice to achieve it, except that with a fixed timestep, you also need to interpolate to match the display frequency, and you have to trade off input latency vs. update frequency vs. extrapolation hacks

I think I had numerical error in mind. If you fix the timestep you can guarantee bitwise reproducibility. Even if the simulation misbehaves, it will misbehave in a consistent way that you can work around as a designer. You may or may not be able to do that if you allow a variable timestep... For a given piece of the piecewise function, sure, but once some interaction occurs you need to determine the next piece. I guess if the solver you use to find the next piece of the function is deterministic, then the whole system would be. You just need to be careful never to use the particular frame time anywhere in that solver, and I don't know how that would generalize.

    Whether that matters in practice... I have no real empirical evidence one way or the other in general. Seems likely that in most cases it doesn't really matter, but there are definitely cases where it's critical. Anything that's meant to support TAS (mostly platformers, probably?). Old Line Rider tracks come to mind. Engineering/factory games where production rates need to all match up correctly. But, again, for more ephemeral things it probably doesn't really matter.
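A minimal Python sketch of the fixed-timestep idea (all names invented; this is the standard accumulator pattern, not any particular engine): integration only ever uses the constant DT, so the result is reproducible regardless of frame pacing.

```python
DT = 0.01  # fixed simulation timestep: 100 updates per simulated second

def simulate(frame_times, impulses):
    """Advance a toy 1-D body using a fixed timestep.

    frame_times: wall-clock duration of each rendered frame.
    impulses: simulation-step index -> velocity change applied that step.
    Integration only ever uses DT, so the result is bit-identical for
    any frame pacing that covers the same total span of wall time.
    """
    pos, vel = 0.0, 0.0
    accumulator = 0.0
    step = 0
    for frame_dt in frame_times:
        accumulator += frame_dt
        while accumulator >= DT:
            vel += impulses.get(step, 0.0)  # apply queued input for this step
            pos += vel * DT                 # integrate with DT, never frame_dt
            accumulator -= DT
            step += 1
    return pos

# Two frame-time sequences covering the same 0.068 s of wall time:
a = simulate([0.017] * 4, {2: 1.0})
b = simulate([0.034, 0.034], {2: 1.0})
assert a == b  # bitwise identical despite different frame pacing
```

The leftover accumulator is what drives the interpolation mentioned above: rendering blends between the last two simulation states by `accumulator / DT`.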

    functional parallelism is fork-join—importantly, deterministic. locks are nondeterministic: multiple concurrent acquires can happen in any order. it's a question whether you need determinism, but think it's only worth it if it significantly changes the effort:performance curve

    in the physics case, there is emphatically not: suppose there is a line of blocks touching each other; if i push the first one, i expect the whole line to move. (well, in real life the impulse travels at most at the speed of light, but that's basically instant at frametime-scales.) to be honest, i don't know how you solve this in a sequential setting either

    but i don't think that that graph actually exists. i think in general every game entity will have some radius of other entities around it with which it could interact, and that entities will generally be dense enough that every entity will be connected to every other entity. even though the actual realised interaction graph will have quite a lot of parallelism. not sure where to go from there, but it seems to require optimism and therefore conflict resolution. i'm not sure conflict resolution necessarily has to be complicated, though, then; it could just be 'these entities form a conflict set; please write some sequential code updating them as a group'. then you write exactly the same code you'd write in a totally sequential setting, and the tricky thing is conflict detection

    also, btw, re my other comment, i was informed that it's been tried before; sent a couple of mediocre papers (my version is obviously better) and one interesting one (not read yet—too much substance to skim); may be of interest: https://arxiv.org/pdf/1403.5645

Gonna need to chew on those for a while. These four points are very interesting to me, but I don't have a good response right now. I'll revisit this later tonight or tomorrow and follow up after I can think about it a bit.

  2. Comment on Do signed or annotated git tags have any special advantage over lightweight tags? in ~comp

    TangibleLight

    It's also helpful for pinning versions in build systems.

    For C++ projects I'll use CMake FetchContent or ExternalProject with a SHA archive link. Much faster downloads on large dependencies than checking out the entire repository, or trying to download archives in some other way.

Zig's package manager is meant to be decentralized, so it requires every dependency to be an archive download link. It has pros and cons. Using tag or SHA archive links is great for this.
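For illustration, the pinning check itself is just a content-hash comparison; CMake's FetchContent exposes it as the URL_HASH argument. A minimal Python sketch of the same idea, with invented archive bytes:

```python
import hashlib

def verify_archive(data: bytes, expected_sha256: str) -> bool:
    """Return True iff the downloaded archive matches the pinned hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# A real lockfile records the URL and the hash together, e.g.
# https://github.com/psf/black/archive/refs/tags/24.4.2.tar.gz + its sha256.
archive = b"pretend tarball bytes"
pinned = hashlib.sha256(archive).hexdigest()
assert verify_archive(archive, pinned)
assert not verify_archive(archive + b"tampered", pinned)
```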

    1 vote
  3. Comment on Do signed or annotated git tags have any special advantage over lightweight tags? in ~comp

    TangibleLight

    Tags (and branches, and arbitrary commits) are downloadable as zip or tar.

    The main thing about the releases process is you can upload additional artifacts like binaries and assets, and you can include a description with release notes etc. I suppose you can add a message to a tag, but I don't believe there's any way to attach assets to a tag otherwise.

    There's also the helpful /latest release which GitHub manages for you, rather than managing a latest tag yourself.

    Example archive links

    Tags:

    https://github.com/psf/black/archive/refs/tags/24.4.2.zip

    https://github.com/psf/black/archive/refs/tags/24.4.2.tar.gz

    Branches:

    https://github.com/psf/black/archive/refs/heads/main.zip

    https://github.com/psf/black/archive/refs/heads/main.tar.gz

    Commits:

    https://github.com/psf/black/archive/3702ba224ecffbcec30af640c149f231d90aebdb.zip

    https://github.com/psf/black/archive/3702ba224ecffbcec30af640c149f231d90aebdb.tar.gz

    Releases:

    https://github.com/psf/black/releases/tag/24.4.2

    https://github.com/psf/black/releases/latest

Note that the two "source code" links refer to the same archive/refs/tags URLs, and the maintainers manually uploaded binaries as assets.

    Compare to python/cpython which does not use releases, only tags.

    https://github.com/python/cpython/releases/tag/v3.12.4

    2 votes
  4. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    Do you know if it is possible to achieve higher precision, or is there a limitation of how these models function which means such precision can never be achieved? Is there something in principle which forbids being able to use more precise language and get more consistent results, like an impossible computational limit, or a logical impossibility? I'm trying to expand my knowledge.

    So as a disclaimer, this is near the edge of my knowledge so take this as speculation with a grain of salt. I do know this is all active research; I'm not sure anyone knows for certain. If there was an easy way to significantly improve precision we'd already be doing it.

    I speculate that there is not. I'll try to explain why I think that in general terms:

My understanding is that generally the issues are not hard limits, but diminishing returns. Say we could get 2x performance by spending 10x the training resources. Is it worth it? What if we could get 3x performance for 100x the resources? I don't know what the actual numbers look like, but in any case at some point it's just not economically viable.

The models basically work in two phases. First, they train on a huge set of data and categorize the concepts in that data into an "embedding space". Imagine there's some coordinate (x, y, z, ...) that corresponds to "red", and another coordinate (a, b, c, ...) that corresponds to "yellow". The training phase works out a way to convert from the input to the embedding space, then back to an output.

    The second phase is inference, where a particular input is given to the model, then it converts that to the embedding coordinate, does linear algebra on it, and converts the result back to output format. Models that convert between text or sound or image have ways to convert multiple formats to the same embedding space. So you could put text in, convert to embedding space, then get an image out.

    The main thing is that the embedding space encompasses all the concepts the AI can deal with. If a concept is missing from embedding space, the AI just cannot understand it. It doesn't interpret it right as input, and it cannot produce it as output.

    The reason I speculate there's no way to improve precision is the only way for a concept to be added to embedding space is for it to appear frequently enough, consistently enough, in the training data. If some concept is new, or rare, or overly precise, or inconsistent, it won't be in the embedding space at all and the AI simply cannot understand it.
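A toy Python sketch of both points (the words and vectors here are invented for illustration; real models learn thousands of dimensions): concepts seen in training get coordinates, related concepts land near each other, and anything absent collapses to an "unknown" vector the model can't do anything useful with.

```python
import math

# Pretend "trained" embedding table. Coordinates are made up for
# illustration only; a real model learns these from data.
EMBEDDINGS = {
    "red":    (1.0, 0.1, 0.0),
    "yellow": (0.9, 0.3, 0.1),   # near "red": both appeared as colors
    "jazz":   (0.0, 0.2, 1.0),
    "<unk>":  (0.0, 0.0, 0.0),   # fallback for unseen concepts
}

def embed(word):
    return EMBEDDINGS.get(word, EMBEDDINGS["<unk>"])

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Related concepts land near each other in the space...
assert cosine(embed("red"), embed("yellow")) > cosine(embed("red"), embed("jazz"))
# ...but a concept missing from training collapses to the unknown
# vector, and the model has nothing meaningful to say about it.
assert embed("vaporwave") == EMBEDDINGS["<unk>"]
```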

    So the "solution" is to add more training data, and hopefully the rare concepts will occur enough to reserve a spot in embedding space so the AI can understand it. Or maybe there's some trick to the training process to do it. Or maybe we can make the conversion between input/embedding/output more complicated. Or maybe maybe maybe... this is where we run into the issues of diminishing returns.

Another problem: it's well known that using AI-generated content to train new AI makes things worse. Unless there's some way to filter it out, all the AI content flooding the internet over the last year might mean the internet in general is just no longer a good source of training data. We might already have the best training data we ever will. Then how will we 10x or 100x the training resources in the future?

    The alternative is to somehow let the model create or refine concepts in embedding space while it's in use. My understanding is this is fundamentally incompatible with the separate training/inference stages of the current paradigm. I expect this is another area of active research, but I suspect any improvement here would require a massive breakthrough and a comprehensive change to how these AI operate. I wouldn't hold my breath.

    There are obviously other approaches being researched that I'm not aware of, but I suspect any significant improvement would also require a huge breakthrough and be similarly comprehensive. I suspect the current paradigm is close to, or already past, the maximum efficiency in terms of performance-per-dollar. I suppose we just have to wait and see.

    For example [...]

    I'm not sure I follow. I know the holodeck, but I'm not sure which parts of that experience you're referring to in the thought experiment.

    I could certainly imagine a system with that voice interface - you ask it for some music, it starts playing, you ask it to tweak some things, it does, and you keep refining and refining. There's a question of how long you can keep the game up. The model has limits, and at some point you will run into them. Either the AI will start forgetting things from context that the listeners still remember, or your refinements will become so specific that they're incompatible with the embeddings and the AI just can't obey.

    The other big thing is just statistics. When you tell it "play jazz" - that could be anything. There are a huge number of sounds it could play that are jazz. So it'll pick one at random. And, on average, you'd expect it to pick one that's average. You wouldn't expect it to reliably generate something remarkable. This is what I was referring to with "regression to the mean".

    1 vote
  5. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    Liberation from being influenced by advertising sounds heavenly (unless you rely on a pension fund backed by customer goods corporations).

    I agree, but why would the business pay for AI marketing? Why are businesses paying for AI marketing? I don't think it's wise on their part.

    That's just prompting and using the raw AI output. For someone whose editing skills far surpass their fresh creation skills, AI gives them the pieces to massage together into a collage.

    [...] AI allows people to focus on different skills. The canonical example is someone who has written a story and programmed some fun mechanics to make a game. However, they lack the art skills to execute their vision [...]

    That's a compelling example. The critical part I suppose is the sense of ownership or stake in the work. Writing the prompt and copying the output is trivial, so there's not much stake there and not much fulfillment. The editing - or the programming or whatever other composition - on top of it is where the real work and fulfillment comes from.

    Maybe it's semantics, but I'd argue in that case the AI is not the creative outlet; the editing or other composition is. You could do that with other (human-made) assets if you had them. The AI is just a cheap means to get some. Then you start getting into the weeds on copyright and all, which I consider a separate issue and don't really want to discuss here. As far as recommendations - I'd concede it's probably fine to recommend AI as a way to generate assets for some other more specialized creative outlet.

    There is something to say that the quality of those assets is lower than if you hired a specialist, same as anything else, but that's probably a fine tradeoff for an independent or noncommercial project.

    I also think it's more virtuous to hire a human to do the work, but I recognize that's obviously not possible in many cases as in your example. "Programmer art" is probably more virtuous and authentic, and I'd personally prefer it, but it's certainly not part of the vision so obviously not as attractive to the creator.

    I guess that's also not too different from what @Thomas-C did; the workflow they used is a sort of editing process, where they have the machine generate musical snippets and then splice them together. Creating mashups or some other format of editing pre-existing content might feel similar.

    4 votes
  6. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    You're totally welcome to discount my effort

    So, to reiterate, I'm not trying to discount the effort or the particular direction you gave to the AI. I'm specifically saying that, in general, all AI art in some genre tends to feel more or less the same as all other AI art in that genre. Even when you give the AI specific direction, it's hindered by all the other training data to just be... less, in some vague sense. Less character? Consistency? Intent? I'm not quite sure how to qualify it, and it's in multiple aspects.

    The content below grew quite a bit larger than I intended. Sorry. I could probably strip out a lot of the repetition, but I'll leave it all in because I want to emphasize why I think all the different flavors of AI seem to behave the same and have the same limitations.

    The last paragraph is the most interesting part to me, I'd be interested if you have more to say on that.

    Software and language are my expertise. I am thoroughly unimpressed with AI's output there.

    On software: software development requires precision. For a given problem there are usually only a few reasonable approaches to a solution. There are numerous ways to express that (think variable names and other cosmetic features of the code) but the approach usually only has a couple options.

    Similar to asking an AI image generator to show you a blank white screen. There's really only one right answer, and it just fails to do it.

You ask the AI to come up with a solution for any reasonably sized problem, and it fails. It'll get the cosmetics right, it looks passable at a glance, but if you dig deeper the approach is all wrong. Like, it's not even close, even if I specifically ask it to use a certain approach. For simpler problems - especially the "classic" problems used for teaching purposes - it can get by on the numerous examples in its training set from textbooks and internet forums and language documentation etc. For any real-world problem it doesn't have those examples, and it just fails to synthesize any new information.

    That "failure to synthesize" is why I always say AI cannot innovate in any domain. It can only produce compositions of things it has already seen. Regression to the mean is also relevant here.

    On language: text output of all the AI I've seen is very... formulaic. It reminds me of the mid-tier 3-paragraph persuasive essays I'd write for English class in middle school. I was not a good student, I didn't care about the content. I'd just write filler in the required structure to get a grade. The AI text, even with clear direction to not do this, feels the same. No interest or emotion, no character. Just filler text to pad up a word count.

There are also the "context" limitations. Basically the AI has very little working memory; if you let it write too much or you let the conversation go too long, it forgets things and starts repeating itself, contradicting itself, and producing other hallucinations. I often see people make claims like: "the AI on its own is limited, but in the future the AI will fact-check itself, or come up with a plan and then work through the plan" or similar. I don't see that happening, just because it can't keep all the information for any moderate task in memory at once.

    The context problem is harder than it might seem due to space complexity. Basically, to increase the context size by some factor, you have to increase the available computer memory by the square of that factor. Say you want to 3x the context size, so the AI can remember three times as much at a time; then you need to 9x the amount of GPU RAM that the computer has available. GPU RAM is expensive, and barring some major shift in the cost of computer memory, I don't see any real improvement there being economically viable even with all the hype and funding available.
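The scaling claim in concrete numbers (a back-of-the-envelope sketch of vanilla self-attention; real implementations vary, and tricks like FlashAttention avoid materializing the full matrix): the attention score matrix has one entry per pair of tokens, so it grows with the square of the context length.

```python
def attention_matrix_bytes(context_len, bytes_per_entry=2):
    """Memory for one n-by-n attention score matrix (fp16 entries, one head)."""
    return context_len * context_len * bytes_per_entry

base = attention_matrix_bytes(8192)          # an 8k-token context
tripled = attention_matrix_bytes(3 * 8192)   # 3x the context...
assert tripled == 9 * base                   # ...needs 9x the memory
print(base // 2**20, "MiB vs", tripled // 2**20, "MiB per head")
```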

    I'm not a painter, and I'm not a musician, but I have fiddled around with AI in those domains. I have no reason to believe AI is any more remarkable there than it is in software or language.

    So consider those limitations regarding AI music:

    "Context" corresponds with the amount of direction, and the length/detail of the music. Generate too much music and it'll start being inconsistent. Ask for many parts playing together and it'll fail. It doesn't understand motif or theme without extreme effort in the prompts. And there's a fundamental limit to how specific you can make your instructions.

    "Precision" corresponds with musical fundamentals; keys, chords, time signatures, etc. You can't really give these kinds of directions to the AI because there's really only one right answer. You can only speak in generalities and theme, and the AI will select these things on its own.

    And just qualitatively, as I said, all the AI music of a given genre tends to feel the same, just how all the text from ChatGPT feels like filler for a middle school essay. The topic might change but the feeling is still there.

    Regarding AI images:

    "Context" corresponds with the length of the prompt and the size/detail of the image. There's a limit to the resolution of the image. There's a limit to the amount of detail you can have in the image. There's a limit to how self-consistent the image can be. There's a limit to how many distinct subjects the image can have. And, again, the limit on how specific the instructions can be.

    "Precision" corresponds with artistic fundamentals; shape, pose, framing, perspective, etc. You can speak in generalities and the AI will choose these that look passable at a glance, but if you specifically request them or look too close it tends to fail.

    And, again, all the output of a certain genre just feels like filler content.

    4 votes
  7. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    Edit: oops, I misread and thought you were the same person I replied to originally. I'll leave the comment here unchanged since it's still relevant, but bear that in mind regarding the tone.


    That's more or less what I was trying to get at, without the rambling to try to defend my position. Which brings me back to: what's the point?

    As an artist, it's unfulfilling.

    As a consumer, it's not engaging.

    In a business, I expect it'll underperform and be uneconomical in the long run.

    the people who are really passionate about giving their exact voice to the world will only use these tools until they just can't convey the proper voice, and then they'll drop them for tools that give them more control over the creative process.

Wouldn't they be better served learning those skills from the start? The AI seems like an unnecessary intermediate tool that makes it less likely the median artist will ever pursue the manual work. I know that's just speculation, which is why I specifically said I'm morally opposed to recommending AI as a creative outlet. I'm not sure if it's moral or not to use AI as a creative outlet; that's a whole other can of worms with copyright and ownership that I'm not really talking about here.

And that applies to all contexts, not just art. For example, using ChatGPT as a learning tool has similar issues: for search, writing, programming, etc. It's more effective in the medium-to-long term to do the work oneself.

    So whenever I see something advocating (or which could be perceived to advocate) use of AI for that purpose, and if I have time to engage, I'll pipe up and advise against it.

    3 votes
  8. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    (It will kill the arts being profitable for most).

    Strong agree, and it's sad. If we agree that commercial AI art will destroy professional artists, then the only art we're really talking about is that of individual creators.

    Honestly there will always be artists and consumers who are more interested in sharing human made things with humans, regardless of profit. Digital painting hasn't killed physical paintings. Nor will AI kill the arts. (It will kill the arts being profitable for most).

    Do you expect individual creators using AI to be more or less fulfilled than individual creators doing the art themselves?

    I am morally opposed to recommending AI as a creative outlet.


Perspective shift: if most humans became expert-level physical painters or physical musicians overnight - you'd be in a similar position asking what's the point. If the talent pool is oversaturated, would that stop you from creating?

    Creators - most humans, I believe - have an intrinsic need to create. That won't go away if all other people become expert, and it won't go away if AI becomes expert.

    My concern with AI is it gives a false creative outlet to people. They put in a prompt and say, "I made this," then they don't go create anything real. There's a hard limit on the quality of the AI output. There's a hard limit on the joy and satisfaction the person can get from "creating" the thing. There's a hard limit on the innovation that can occur (none). None of those problems are present if all people became experts in a given craft overnight.

    The "AI evangelist" culture that's appeared is especially concerning to me, since it ignores or outright denies this. Plug a prompt into the AI. Declare that's the best form of creativity. It's more efficient. It's more accessible. It's more rewarding.

    It's just not.

    Aside from the obvious that for business the point is to spend less (and we should demand living standards regardless of job status because this will eat everyone's lunch)

    Sure, it reduces cost. And you get what you pay for.

    What happens when you and all your competitors are running on AI slop? What's your competitive edge?

What happens when all your marketing materials are identical to all your competitors' marketing materials, and the general public is numb to AI slop?

    What happens when people (regulators) notice the privacy implications of centralized models like these? Is every corporation going to self-host? You know how expensive that'll be?

    I don't expect these to doom AI adoption, but I expect them to contain it to some extent in the long run. I think any company that goes hard on AI is going to find themselves in hot water in the future once any/all of these things occur. I also somehow doubt that any jobs lost to AI are going to be returned to humans when any of these occur. More likely they'll be shunted to a smaller group, somehow expected to be more efficient.

    6 votes
  9. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    After bashing AI music in my other comment, I should probably actually respond to the opener.

    I'm not sure what I think, to be honest. On the one hand, a lot of the creative decision-making wasn't mine. On the other, the song would not be what it is without me making decisions about how it came about and what feelings/moods/genres were focused upon/utilized. I think the best I can say is "use the tool and see whether it's enough to count". To me it feels almost 50/50, like I've "collaborated with my computer" rather than "made music". Does it matter? If the sound is the intended sound, the sound I hoped to make and wanted to share, is that enough to say it is "my music"? Is this perhaps just what it looks like to be a beginner in a different paradigm?

    This in particular. I've fiddled with "prompting" before. I know it's not trivial and there is nuance to it. It does take effort to get a particular output. And I don't really want to discount that OP put that in, and that effort seems to have come from a real interest in music. That's valuable.

    But...

    The fact it takes so much effort tells me the tool isn't as powerful or useful as people make it out to be.

    The fact that, even with all that effort, all the outputs are so similar to each other and bland tells me the tool isn't as powerful or useful as people make it out to be.

    I think that effort would be better spent fiddling with a synthesizer or physical instrument. I think the output will be much more rewarding and lasting. Synthesizer in particular is pretty quick to get a simple melody and beat put together, even if you don't really know what you're doing. In my experience, while it might not leave the same impression as AI muzak, it feels more real and you feel more ownership of the thing.

    Critically, there's room to grow and an unlimited skill ceiling when you are the artist. The "skill" ceiling on prompting is low to middling at best.

    Is this perhaps just what it looks like to be a beginner in a different paradigm?

    Work with the AI in a domain in which you have expertise. It's painfully obvious that the AI output is unremarkable at best. Why are we so quick to assume the output is remarkable when it's in a domain where we don't have that expertise?

    Software and language are my expertise. I am thoroughly unimpressed with AI's output there. I'm not a painter, and I'm not a musician, but I have fiddled around with AI in those domains. I have no reason to believe AI is any more remarkable there than it is in software or language.

    9 votes
  10. Comment on Discussing AI music - examples and some thoughts in ~tech

    TangibleLight

    I don't see the point.

Thanks to the Internet there is an endless supply of music across all genres I can listen to, and real actual artists with a reputation that I follow. Same as in visual media. A real artist, with their own style, catalogue, and reputation, is something I can learn about and engage with over time.

    "Art" from a machine - even if the prompt has been tuned by a human - is much too uniform across the board to do this. I can't follow Thomas-C, I can only follow Udio. I can't follow any digital painter, I can only follow Stable Diffusion.

    And each model is just the combined and reduced average of all the data it was trained on, so naturally the outputs are... average. "Muzak" is apt.

Every so often there is a remarkable piece. Usually obtained with some transformer or particular prompt that narrows the output to some small subset of the training data. All that does is rip the interesting style from those creators. I'd rather see their art, instead of the AI regurge that's based on it. What good fortune that it's fundamentally impossible for an LPTM to cite sources for a given output!

    So, like, what's the point?

    9 votes
  11. Comment on Don’t fix it just because it’s technical debt in ~tech

    TangibleLight

    I'm reminded of this post from last year which is on the same track, but takes the metaphor a bit further and examines things in more detail.

    https://tildes.net/~comp/17t4/tech_debt_metaphor_maximalism

    My big problem with the whole thing is that financial debt has clearly marked terms and rates. Technical debt does not, and programmers are generally not good at giving realistic time estimates. If you take the predictability away, it's harder to make concrete decisions. Still, as a guiding principle, I think the metaphor is helpful.

    I also recall a keynote (I'll try to find it and add a link here. I don't remember the conference... I think it was by Kevlin Henney) where the speaker talks about the idea of a healthy amount of technical debt. You can have too much, but you can also have too little.

    Edit: Technical Neglect - Kevlin Henney - NDC London 2024

    10 votes
  12. Comment on Former US President Donald Trump has been found guilty of thirty-four counts of falsifying business records to influence the outcome of the 2016 election in ~news

    TangibleLight
    Link Parent
    I'm still parsing through this, but I want to reply real quick and say that this part in particular really got to me, and now reading your prior comment makes more sense. I don't have a good...

    I'm still parsing through this, but I want to reply real quick and say that this part in particular really got to me, and now reading your prior comment makes more sense.

    Most voters don’t sort ideologically, they sort by whoever in their community has a lot of pull. So you don’t get various flavors of “liberalism” and “conservatism.” You get instead parties around “Black people” or “Farmers” or “Unionists” or “Evangelicals.” They will make weird alliances between each other with the goal of maximizing the advantage of their interest groups.

    I don't have a good response. I agree it's better for the parties to sort ideologically, and I see your argument that our current system forces them to do that. But our current system also forces the dominant party to trend extreme when a fringe group within it threatens to fracture.

    So how do you encourage the parties to sort ideologically while also providing a viable outlet for fringe groups to get fair representation so they don't pull the dominant parties along with them?

    I don't know. I doubt it can be done with FPTP as a general framework, but you're arguing that any other framework would break ideological sorting...

  13. Comment on Game simulation programming: Continuous time in ~comp

    TangibleLight
    (edited )
    Link Parent
    I'm suspicious of the continuous time paradigm that Sean is advocating. In cases where accuracy is paramount and you're reaching for any bits of optimization you can find, it makes sense to look...

    I'm suspicious of the continuous time paradigm that Sean is advocating. In cases where accuracy is paramount and you're reaching for any bits of optimization you can find, it makes sense to look at analytical representations and try to take advantage of that. Real time simulation is not such a case. The more important thing there is determinism (reproducibility/consistency) and I think a fixed time step accomplishes that much better in most cases.

    For example, he spends some time around 26:00 trying to construct a case where a parabolic motion should collide with something but, due to the discrete timestep, incorrectly tunnels through the object. He struggles to find a good example. Mathematically, I would expect him to struggle simply because of inertia: high-curvature motion tends to be the slowest part of a trajectory, since accelerating through high curvature at speed takes more force. Unless something provides that force, all the speed happens on straighter sections of the trajectory. So either the object moves slowly along the high-curvature section, and naturally gets more samples from the discrete timesteps, or it moves quickly along the low-curvature section, and the linear approximation is fine.

    What he's looking for is a situation where curvature and velocity are correlated. One example that does demonstrate this is orbital mechanics; think n-body problem or solar system simulation. Objects have highest kinetic energy at closest approach, and that's always the point of highest curvature on the trajectory (highest gravitational force). If you don't account for the errors you get inconsistent orbital precession and energy non-conservation that looks very wrong after only a few orbits. And orbital simulators tend to do these continuous-time strategies that Sean talks about. "On-rails" planetary motion is probably closest to what Sean discusses. You can also compute error terms and correct for them down to some upper bound with more advanced integrators. Or you can break up the timesteps into stages which are each energy-conserving to ensure errors don't accumulate and blow up over time. None of those require a fixed timestep, but determinism still does.
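    To make the energy-conservation point concrete, here's a minimal sketch (my own setup, not from the talk) comparing plain explicit Euler against semi-implicit ("symplectic") Euler on a circular two-body orbit. Both use the same fixed timestep; the only difference is that the symplectic version advances position with the updated velocity, and that alone keeps the energy error bounded instead of growing:

    ```python
    import math

    def step_explicit(x, y, vx, vy, dt, mu=1.0):
        # Explicit Euler: position and velocity both use the old state.
        r3 = (x*x + y*y) ** 1.5
        ax, ay = -mu * x / r3, -mu * y / r3
        return x + vx*dt, y + vy*dt, vx + ax*dt, vy + ay*dt

    def step_symplectic(x, y, vx, vy, dt, mu=1.0):
        # Semi-implicit (symplectic) Euler: update velocity first,
        # then advance position with the *new* velocity.
        r3 = (x*x + y*y) ** 1.5
        vx += -mu * x / r3 * dt
        vy += -mu * y / r3 * dt
        return x + vx*dt, y + vy*dt, vx, vy

    def energy(x, y, vx, vy, mu=1.0):
        # Specific orbital energy: kinetic plus gravitational potential.
        return 0.5*(vx*vx + vy*vy) - mu / math.hypot(x, y)

    # Circular orbit at r=1 with v=1, so the true energy is 0.5 - 1 = -0.5.
    dt, n = 0.01, 10_000
    a = b = (1.0, 0.0, 0.0, 1.0)
    for _ in range(n):
        a = step_explicit(*a, dt)
        b = step_symplectic(*b, dt)

    e0 = -0.5
    print("explicit   drift:", abs(energy(*a) - e0))
    print("symplectic drift:", abs(energy(*b) - e0))
    ```

    The explicit orbit spirals outward and its energy drifts steadily; the symplectic one only oscillates around the true energy, which is the "errors don't accumulate" property mentioned above.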

    I'm really interested in what he talks about around 19:49

    [...] a classic way to do that is you look at all the objects and you find the time of their first collision [...] without doing any more simulation of them [...] you take the lowest one, and you simulate all objects up to that point, because up to that point, nothing else can happen [...]

    [...] that can be expensive [...] but you still have all the old collision times from the original thing so say you had a hundred objects and two of them collide at this point. You can now just take those two objects and say when do those two objects collide with anything else including the other objects. And you can't miss anything because all the other objects, if they interact with each other, they're not affected by these two objects until these two objects collide with another object [...]

    You can think of all the collisions as nodes in a graph, where edges correspond with uninterrupted motion of a particular object. He's really just saying that interactions can't propagate backward in time - it's a DAG - and if you look at a particular point in time you'll have disconnected regions which have no data dependencies between them.
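    Here's a minimal sketch of the scheme Sean describes, for equal-mass point particles on a line (all names are mine, and a real engine would add finite radii and spatial pruning): keep a priority queue of predicted pair collisions, pop the earliest, advance the whole world to that moment, resolve it, and reschedule only the two participants. Stale predictions are skipped via per-object version counters:

    ```python
    import heapq

    class Ball:
        def __init__(self, x, v):
            self.x, self.v = x, v
            self.version = 0            # bumped whenever velocity changes

    def simulate(balls, t_end):
        """Event-driven 1D elastic collisions between equal-mass point particles."""
        now = 0.0
        heap = []                       # entries: (time, i, j, version_i, version_j)
        n = len(balls)

        def schedule(i, j):
            a, b = balls[i], balls[j]
            dv = b.v - a.v
            if dv == 0:
                return
            s = -(b.x - a.x) / dv       # relative time until positions coincide
            if s > 1e-12:
                heapq.heappush(heap, (now + s, i, j, a.version, b.version))

        for i in range(n):              # initial O(n^2) pass, as in the quote
            for j in range(i + 1, n):
                schedule(i, j)

        collisions = 0
        while heap:
            t, i, j, vi, vj = heapq.heappop(heap)
            if t > t_end:
                break
            if balls[i].version != vi or balls[j].version != vj:
                continue                # stale event: a participant changed course
            for b in balls:             # advance the whole world to t; nothing
                b.x += b.v * (t - now)  # else can happen before the earliest event
            now = t
            # Equal masses: an elastic collision just swaps velocities.
            balls[i].v, balls[j].v = balls[j].v, balls[i].v
            balls[i].version += 1
            balls[j].version += 1
            collisions += 1
            for k in range(n):          # reschedule only the two changed objects
                if k != i:
                    schedule(i, k)
                if k != j and k != i:
                    schedule(j, k)

        for b in balls:                 # coast the remainder out to t_end
            b.x += b.v * (t_end - now)
        return collisions

    balls = [Ball(0.0, 2.0), Ball(4.0, 0.0), Ball(10.0, -1.0)]
    print(simulate(balls, 10.0), [round(b.x, 3) for b in balls])
    ```

    The version counters are the lazy alternative to deleting invalidated heap entries, and the "advance everything to the earliest event" step is exactly the claim from the quote: no other interaction can happen before it.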

    This starts to smell like a task scheduling problem to me.

    I wonder what use a tool like Bend might have here... surely Bend itself and its other performance constraints are not suitable for real-time simulation, but the paradigm in general of how it parallelizes graph traversal might be relevant?

    This then starts to smell like what I understand of your linked comment. "Interactions" are where data dependencies come from, so make these first-class and parallelize across distinct cliques of interactions.

    I'm still suspicious. Timestamp in Carmack's keynote.

    [...] Since everyone's looking at the previous frame's rendering: two people approach a narrow hallway, they both say they want to go into that hallway, they both think it's clear, so they both go into it. Well, what do you do next frame? They both say, well, I'm here, but two people wound up at the same place. So there's a little bit of resolving that has to be done there [...]

    An adage I like but I'm not sure who it's attributed to: complexity is always conserved.

    Yes, writing the thing with barriers and locks is a nightmare. But those are the data dependencies. You can't just get rid of them. You can reframe the problem with immutability and create a bunch of parallelism in part of the process, but then you've traded the complexity of all the barriers and locks for the complexity of this conflict resolution. I suspect that this conflict resolution in general is more complex than writing the locks and barriers.

    I suspect an easier way forward is some functional paradigm and some compile-time analysis that identifies as many data dependencies as it can and injects the locks and barriers for you. This is why I mentioned Bend. I don't think Bend itself is a solution, but something like it seems more on-track.

    1 vote
  14. Comment on How it feels to get an AI email from a friend in ~comp

    TangibleLight
    Link Parent
    Hm. I think you're right. And now you've got me thinking in that space, I can only imagine. The appeal of AI here makes a lot more sense.

    I think you may underestimate the potential negative response some parents would have to an honest but blunt message from their child's teacher.

    Hm. I think you're right. And now you've got me thinking in that space, I can only imagine.

    The appeal of AI here makes a lot more sense.

    10 votes
  15. Comment on How it feels to get an AI email from a friend in ~comp

    TangibleLight
    Link Parent
    I am not a teacher and I'm sure there are other factors at play with parent-teacher-communication so forgive my naivety. Why not just send that? As you've written it here. Yes it's blunt. I'd...

    I am not a teacher and I'm sure there are other factors at play with parent-teacher-communication so forgive my naivety.

    Why not just send that? As you've written it here.

    Yes it's blunt. I'd rather read a brief but blunt message written by a human, rather than the same general content with a bunch of dressing added and all character removed by some robot.

    I guess the thing for me is: if someone sends me an AI email, that means they put a prompt into the machine and it gave back this slop. Why don't they just send the prompt to me directly? It's gonna have all the same content - or, if the AI did add some content, I don't want to see it because it's not real.

    15 votes
  16. Comment on How it feels to get an AI email from a friend in ~comp

    TangibleLight
    (edited )
    Link Parent
    Another thing I've noticed - only a couple times, thankfully - are messages with that ChatGPT flair that include proprietary details, codenames, status, etc. What was the prompt for that? Please...

    Another thing I've noticed - only a couple times, thankfully - are messages with that ChatGPT flair that include proprietary details, codenames, status, etc. What was the prompt for that?

    Please write a message explaining that <proprietary feature> is behind schedule.

    You have to treat it like a data breach. I'm glad my organization has policy and training in place in response, and I haven't seen that in a long time. But I fully believe this problematic person I mentioned would do it without a second thought.

    Imagine if you had access to all the messages that people said to the robot. Imagine if you got people to register with online accounts so you could cross reference and figure out who they are and who they work for. Imagine if you had a machine that could read the messages and summarize the status of those organizations.

    Sure is a good thing that we have a respectable and trustworthy company at the helm to make sure nothing like that can be abused.

    16 votes
  17. Comment on How it feels to get an AI email from a friend in ~comp

    TangibleLight
    (edited )
    Link Parent
    I received an AI email from a colleague, where the colleague obviously copy-pasted the response from ChatGPT. This isn't the first time the person has done this, but it's the most offensive to me....

    I received an AI email from a colleague, where the colleague obviously copy-pasted the response from ChatGPT. This isn't the first time the person has done this, but it's the most offensive to me.

    It was obvious, because they accidentally included "Certainly! Here's an email addressed to X".

    "X" was misspelled.

    I was - am - very offended. I almost reply-all with a screenshot of a ChatGPT prompt and response: "I have a colleague who keeps sending me AI generated emails. It is rude. Please write a polite email requesting that they stop." The email it produced was actually decent, and in its classic three-paragraph middle school essay format, clearly explained why it is so rude. The irony was incredible.

    I never did send that response, for a couple reasons - given how excited the person is about AI, the irony certainly would have been lost. I think it would have burnt some bridges I'd rather not burn (yet). Most importantly: I am not X, X is a close coworker of mine and I was copied in a large group. It didn't seem appropriate to send this charged response when I wasn't even the one directly addressed.

    It's lingered on my mind and I wonder if I should have sent the email anyway. I think probably not, but.....

    I also wonder if they would have even been insulted. They clearly don't think it's rude to send to others, so they should be okay with receiving it, right? I'm somehow doubtful, but, again, I think the irony would be lost on this person in particular.

    The same person has added this "assistant" https://www.read.ai/ to each and every meeting they're a part of. I don't really like the idea of this thing recording me, so I'm keeping my video off when it's in the call. They don't like it but I don't care. It's wonderful, because they add us to these meetings that really should be emails, except if the email comes from them I know it's actually a robot. And now, we get an email about the meeting that should have been an email - also from a robot. And the content really doesn't seem to have anything to do with the meeting.

    How much is this thing costing their organization? And are they even aware of the privacy concerns?

    It's a shame because I don't think the person realizes how toxic the whole thing is and how people - at least everyone I know in my more technical department - are starting to avoid and cut ties because of it. Those times I'm aware of that people complain, it's just all the same AI hype rhetoric parroted back and never any change in behavior.

    Edit: https://i.imgur.com/SrimKQu.png

    15 votes
  18. Comment on Game simulation programming: Continuous time in ~comp

    TangibleLight
    Link Parent
    I've added it to watch later, but I learned my lesson in the other thread and will not speculate on what I expect the video to be about. I am hoping he touches on parallelism. Like I mentioned in...

    I've added it to watch later, but I learned my lesson in the other thread and will not speculate on what I expect the video to be about.

    I am hoping he touches on parallelism. Like I mentioned in the other thread, I really don't know how that works in practice. My own attempts in toy projects usually go poorly.

    1 vote
  19. Comment on Lerp smoothing is broken in ~comp

    TangibleLight
    Link Parent
    Following up on https://tildes.net/~comp/1gpy/lerp_smoothing_is_broken#comment-cw3u after watching the video. Oops!! This is not what the video is about, but it is an interesting video. I think...

    Following up on https://tildes.net/~comp/1gpy/lerp_smoothing_is_broken#comment-cw3u after watching the video.

    Oops!! This is not what the video is about, but it is an interesting video.

    I think the use of lerp at all in the video title is a bit misleading - although it is not inaccurate - since it's really about a particularly simple control system often used in graphics. You want a value to follow an arbitrary target, but you don't want to implement a full PID controller and you don't want the result to be terribly jittery. Say you're making a camera follow the player. A common pattern is, on every frame, move the value X% closer to the target. This can be expressed with lerp, but I don't think it's fair to say it is lerp, which is what the title in isolation seems to imply.

    The issue, as discussed, is that it's framerate dependent. You do some math and find it's really equivalent to exponential decay, which is continuous, then you use that continuous formula to create a framerate-independent formula. A wonderful talk on recursion, calculus, and extension! But not quite what I thought it would be from the title and your description here.

    7 votes
  20. Comment on Former US President Donald Trump has been found guilty of thirty-four counts of falsifying business records to influence the outcome of the 2016 election in ~news

    TangibleLight
    (edited )
    Link Parent
    Eh, I'd still expect there to be two dominant parties. The difference is just that a minority (third or smaller) party can still have representation, without being completely overrun by the two...

    Eh, I'd still expect there to be two dominant parties. The difference is just that a minority (third or smaller) party can still have representation, without being completely overrun by the two dominant parties. I also would expect the two dominant parties to trend more moderate, and third or fourth parties to trend more extreme. Any president elected would certainly only be of one of the dominant (moderate) parties, so you shouldn't expect an extreme minority party to get the win. Our current system forces the dominant parties to trend more extreme - or, rather, a dominant party must trend extreme if any significant part of its base does. It can't afford to remain moderate and fragment.

    I actually think our ex ante form of organizing advantages the left more than it disadvantages it

    I'm not sure I follow but I'm curious. Could you elaborate?

    it’s just that leftists have a dramatically overexaggereated sense of how representative they are

    Probably. I think we also have an oversimplified view of the "median conservative". Clearly we have much nuance in our views and could be better represented by multiple parties, but they are just Trump. This is obviously not true. In a system that encourages smaller parties, I would fully expect multiple right-leaning parties to appear and gain representation. There'd probably be some "MAGA Party", led by a character like Trump, with a few seats in Congress. That is fair. With a multi-party system I'd expect their representation to be closer to the actual size of their base, rather than the entirety of the current Republican party, which is forced to back Trump for fear of fragmenting its vote.

    I also suspect the current "median conservative" has a similarly oversimplified view of the "median liberal". And, if we had a multi party system, I'd expect the dominant liberal party to trend more moderate, and a smaller "Socialist Party" led by a character like Sanders to appear, with an appropriately small number of seats in Congress.

    I, personally, think a multi party system in America would collapse into fascism even more rapidly than ours has been.

    Again, I don't think I follow. I suspect that might come from my expectation the dominant parties in a multi-party system to trend more moderate, and you may not agree? Really not sure, but I'm curious if you could elaborate.

    4 votes