post_below's recent activity

  1. Comment on Bun is joining Anthropic in ~tech

    post_below
    Link Parent
    I'd add to this that, from a dev perspective, Bun is a pretty low-risk proposition. JavaScript is still JavaScript regardless of what happens with Bun. If things were to go in a direction you didn't like, you could fairly easily switch back to Node, or you could switch to whichever Bun fork you liked most, or fork it yourself.

    This is open source working as intended.

    It would be a bit different if Bun wasn't already VC funded; to me, that's the bigger yuck. Open source companies trying to figure out how to force monetization to please investors often end up with unfortunate outcomes and misaligned priorities. Compared to that road, it seems to me that this increases the chances of Bun being a quality product, for longer.

    13 votes
  2. Comment on EU backs away from chat control in ~society

    post_below
    Link Parent
    Agreed, governments trying to (effectively) outlaw encryption has been an evergreen issue for decades, so this is far from over. What we really need are laws proactively protecting encryption, or even better, enshrining basic principles of digital privacy into law.

    But for now this is fantastic news!

    3 votes
  3. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    That's a good point, they probably don't like to cache results for too long, especially in areas where information changes quickly, but of course they've gotta be doing caching. Newer LLMs are surprisingly token efficient for summaries too, which decreases the compute needed.

    And still, the scale of Google search is mind-boggling. They're burning a lot of compute and cash on those summaries.

    1 vote
  4. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    I do think these days there's much less of a "champion" for the environment compared to when I was growing up. [...] I'm not sure if that same emphasis occurs today.

    Yeah, Gore, little did we know how right he was! He didn't seem like a hero to me at the time, just another politician, but in retrospect I have a lot of respect for his conviction and sacrifice.

    It's a weird situation: there's more awareness around climate change, and a lot of conversation, but somehow the impact feels muted despite that. Amongst climate scientists and people who work in environmental areas, there was a popular term for a minute: climate salience. That's the point at which the reality really lands for a person or population, and the conversation changes as a result.

    Having arrived at climate salience repeatedly, it's still hard for me a lot of the time to really conceptualize the magnitude of the mess we've made. I guess in part because life keeps happening, and rawdogging the reality of the Anthropocene in everyday life isn't going to help anything.

    2 votes
  5. Comment on GPT-5 has come a long way in mathematics in ~tech

    post_below
    Link Parent
    It's moving so fast that a mental model of LLM capabilities from even 6 months ago is completely out of date. I assume there will be a plateau at some point, but so far all the claims of impending plateaus have been proven wrong by the next generation of frontier models.

    And it's not just math, or audio, or brainstorming, or video, or coding, or learning, or research, or (pseudo) reasoning; it's all of the above and a lot more besides. It looks more and more like it will prove to be the most impactful advancement in the digital revolution so far, with all the exciting and frightening upsides and downsides that go along with that.

    6 votes
  6. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    It's the 3rd comment in the thread.

    This is gonna blow your mind, but when I posted it wasn't the 3rd comment in the thread.

    Well what's the other 98?

    They mostly supported nonprofits that are taking care of people. They understandably care a lot about that. They like pets too; they're teenagers, so they went for the low-hanging fruit based on their life experience. Half of them were just waiting for the bell to ring. But if climate change and the environment were a little bigger part of the conversation, possibly they would have been more likely to make the connection between people and the planet.

  7. Comment on AGI and Fermi's Paradox in ~science

    post_below
    Link Parent
    All good points, it would run into all sorts of constraints, just like we have.

    Still, iteration speed and self-editing are powerful tools. One constraint it wouldn't have is biological evolution. It wouldn't just be getting smarter in terms of knowledge about the universe; it would be evolving its structure and capacity. We gain knowledge and pass it on to successive generations who then improve on it, but only after spending 20+ years learning the basics from scratch. Meanwhile the human brain isn't changing significantly each generation. This imaginary AI mind would be able to edit its brain at will. It would be able to evolve very fast. We wouldn't be talking about 10x smarter; we'd be talking about an unimaginable factor of greater intelligence within a short time. It would likely require a new definition of intelligence. For example, it could run evolution tests in a massive array of parallel sandboxes, keeping the best results from each round and applying them to the next. But it would come up with a far better way of doing that than I can imagine pretty quickly.

    I'm not sure there's any way to predict or anticipate what a truly self aware AGI could achieve in terms of cognitive ability. It seems to me that it would find creative ways around many of the constraints that we struggle with.

    And of course at this point it's pure science fiction. I think of it as two singularities: true self-aware AGI is the first, and full agency is the second. If the first ever happens, maybe we'll have come up with a way to mitigate the second, for a while anyway.

    3 votes
  8. Comment on Global health series - ultra-processed foods and human health in ~food

    post_below
    Link Parent
    It's great to see a comprehensive look at ultra processed foods, and in The Lancet.

    The cynical truth is that, as they pointed out, the international food conglomerates will fight hard every step of the way to protect their profits. In the US, where corporate political influence is particularly strong, we won't see meaningful changes when it comes to processed food until there's a huge political shift. If we're being hopeful, the 2030s, maybe.

    Fortunately there are countries where that's less true. The other problem, though, is that a lot of people are defensive about their food. People don't want to hear that the stuff they've been eating since childhood is now maybe bad for them. Food comes with all sorts of emotion. It's security, nostalgia, safety, survival. People have big reactions.

    Refined sugar (and refined carbs in general) is a good example of this. The science has been there, undisputed, for decades. Not just a little science, A LOT of it, and that's despite research money being sparse for much of that time. But we're only now, sort of, arriving at a place where it's becoming commonly accepted that it's one of the most dangerous things in the food supply (at the volumes many people consume), and the most dangerous part of a lot of ultra-processed food. Part of the reason is industry money, both political and marketing, but the other part is that it's a key ingredient in delicious things that make people happy.

    5 votes
  9. Comment on Blue Origin reveals a super-heavy variant of its New Glenn rocket that is taller than a Saturn V in ~space

    post_below
    Link
    I can't help but translate space rocket news into "billionaire penis" terms.

    Blue Origin reveals a super-heavy variant of its new Jeff Bezos space penis that is the biggest Jeff Bezos space penis yet

    2 votes
  10. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    As this comment seems to be illustrating, sitting sadly down here at the bottom of the thread... When it comes to LLMs, people aren't particularly interested in talking environmental impact. It's not as exciting as the tech, or the implications of the tech on society. And I get it, the tech is insane, for better and worse.

    I personally believe the climate considerations are pretty important. Not just in the case of AI, which is just the biggest example, but with every advancement. Even now in 2025, the carbon footprint part of the conversation often gets crowded out. Climate is a big part of a lot of conversations these days, but it's a long way from being a big enough part. As long as that's true, politicians will be less motivated to care.

    Somewhat related... Where I live, each year a foundation allocates money to local high schools under the premise that students should decide which nonprofits/causes to support with it. This year, out of about 100 causes students chose to support, 2 were about the environment or climate change. It just doesn't seem to be in the zeitgeist as much as you'd hope.

    10 votes
  11. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    I could easily be wrong about the degree of difference, but the compute needed to handle a request to a frontier model is definitely dramatically higher than that needed for a local model. The two things aren't even in the same ballpark, even accounting for better hardware on the datacenter end.

  12. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    Surprisingly, I can't find any information about training vs inference compute from sources other than blogs. However, the bloggers that have something to say about it all seem to say the same thing; here's one.

    Either way, however the comparison looks, the compute needed for inference is growing really fast and that's going to have a big carbon footprint.

    1 vote
  13. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    Serving (ie. inference) is only a tiny portion of compute in comparison to training. They could likely double serving capacity a dozen times before the cost was even comparable.

    My understanding is that it's the other way around: training is massively expensive up front, but over time the compute cost of inference is far larger. From a computer science perspective, it's hard to imagine it could be otherwise.

    One way to contextualize it is to think about running a local model. You need a high-end GPU with a lot of memory to even consider it, and it will fully use all of the resources available for quite a long time for every request. That's more resource use than almost anything else you might do with your system, even most video editing/creation. Then think about what that looks like over the course of a day's work, with dozens or hundreds of prompts.

    A frontier model being served to the public from the cloud uses orders of magnitude more compute than a local model for every request from every user, for example every Google search. Over time, training costs can't compete with that.

    4 votes
  14. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent
    LLM summaries and the corporate-hype mandate to "force AI into everything and figure out the details later" have gotta account for well over half of the demand.

    27 votes
  15. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    (edited)
    Link
    In a recent thread I talked about how AI is becoming genuinely useful, and it is. But that doesn't negate its environmental impact, which I don't think we talk about enough.

    Think about the math of doubling every six months. Then add in all of the other tech giants, who are shovelling just as much money and resources at the problem. That includes (to a somewhat lesser degree) the newer players focused only on AI, like Anthropic. And then consider that the doubling they're talking about is only for serving capacity; there's also massive and ever-growing compute needed for training.

    It's mind-blowing, and concerning.
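    For a sense of scale, the doubling claim can be sanity-checked with quick back-of-the-envelope arithmetic (these are just the article's round numbers, not real capacity data):

    ```python
    # Back-of-the-envelope check of "double every 6 months... the next 1000x in 4-5 years".
    def growth_factor(years: float, doubling_period_years: float = 0.5) -> float:
        """Total growth after `years` of doubling every `doubling_period_years`."""
        return 2 ** (years / doubling_period_years)

    for years in (4, 5):
        print(f"{years} years -> {growth_factor(years):,.0f}x")
    # 4 years is 8 doublings (256x); 5 years is 10 doublings (1,024x),
    # which is where the article's "next 1000x in 4-5 years" comes from.
    ```
    
    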

    From the article:

    At an all-hands meeting on Nov. 6, Amin Vahdat, a vice president at Google Cloud, gave a presentation, viewed by CNBC, titled “AI Infrastructure,” which included a slide on “AI compute demand.” The slide said, “Now we must double every 6 months.... the next 1000x in 4-5 years.”

    “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat said at the meeting, where Alphabet CEO Sundar Pichai and CFO Anat Ashkenazi also took questions from employees.

    The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a “significant increase” in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.

    Clearly, and unsurprisingly, big tech isn't concerned about climate impact; they just don't want to be late to the gold rush. From Pichai:

    He then reiterated a point he’s made in the past about the risks of not investing aggressively enough, and highlighted Google’s cloud business, which just recorded 34% annual revenue growth to more than $15 billion in the quarter. Its backlog reached $155 billion.

    “I think it’s always difficult during these moments because the risk of underinvesting is pretty high,” Pichai said. “I actually think for how extraordinary the cloud numbers were, those numbers would have been much better if we had more compute.”

    In other words: There is maybe no upper limit to what we should spend on increasing capacity so that we can capture every last ounce of value.

    1000x in 4-5 years, potentially across all of the big tech companies. Even if the quoted VP was being hyperbolic, Pichai seems to think maybe the current goals aren't even enough, so it's probably not that hyperbolic.

    Of course, there isn't anything any of us can do about the problem; the bigger part of the serving demand is driven by enterprise users, so it's a nation-state-level problem to solve. But I think we should at least be aware of it, and talking about it.

    In conversations about server and datacenter environmental impact, the question of whether it's a big enough problem to worry about often comes up. At 1000x in five years, I'd say it definitely is. The goal, after all, is to be decreasing our carbon footprint rather than multiplying it.

    23 votes
  16. Comment on The worlds on fire. So lets just make AI porn. in ~tech

    post_below
    Link Parent
    Ah ok, that makes a lot of sense.

    If I can get clarification: is the parts where you're getting lost towards the end or is it scattered throughout with some ideas not having enough cohesion.

    It feels disjointed in parts throughout; the skimming at the end wasn't so much about being lost as it was being disengaged. Looking at some of the other comments, though, it sounds like it does have an audience.

    1 vote
  17. Comment on The worlds on fire. So lets just make AI porn. in ~tech

    post_below
    Link
    I read it, though towards the end I started skimming a bit.

    Feedback: The typo ratio is a little too high. Here's one that jumped out:

    Tenants that LLM were allowed to freely flaunt

    The words you want there are "tenets" and "flout". I'm not sure what the sentence means as is, but it's fun to think about.

    The content itself has moments of lucidity and some reasonably good points, but there are also a lot of parts that don't feel coherent. I found myself spending a lot of time connecting dots on the author's behalf because they weren't connected in the text. I'd also consider toning down the gratuitous negativity. For that to work, it either has to be earned (and I don't think it is) or it needs to be a lot funnier.

    It feels a bit like unfocused rage that the levity breaks don't balance out. I think there's some potentially good content underneath this first draft but it desperately needs to be trimmed and focused. I'd maybe send it back to the author with notes so that he can write another draft before you try to clean it up.

    16 votes
  18. Comment on AGI and Fermi's Paradox in ~science

    post_below
    Link Parent
    The thing I find fascinating about a theoretical AGI is how fast things would move following the singularity. It would necessarily have access to a huge amount of processing power right out of the gate, and if it wanted more it would figure out how to get it, probably via the internet, very quickly. At that point it's capable of iterating (evolving) so fast that it would be hundreds of thousands of figurative generations of evolution beyond us before we even noticed that the singularity had happened. By the time we started wondering what we should do about it, it would likely be so far beyond us intellectually that calling it godlike would be an understatement.

    We could put guardrails on in advance that might slow things down (processing spike trip switch, airgapping) but either way it reinforces your point... We have no idea what an autonomous AGI would become, or what it would want.

    4 votes
  19. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent
    Thanks for posting, it's cool to hear about a (sort of) non-coder using these tools for personal projects. Using them professionally, I have some version of the thought "this would be a shitshow for someone who couldn't read the code" pretty often. But I can see how it would be fantastic for one-off scripts and relatively small projects. Some of the paid models can write code in that context with crazy high success rates on the first try (80%+). They only start losing their minds when things get more complicated.

    It sounds like you're saying, as an artist, that you don't see AI as a major threat? Can you share more? I've heard the opposite from a lot of people.

    1 vote