post_below's recent activity

  1. Comment on Global health series - ultra-processed foods and human health in ~food

    post_below
    Link Parent

    It's great to see a comprehensive look at ultra-processed foods, and in The Lancet.

    The cynical truth is that, as they pointed out, the international food conglomerates will fight hard every step of the way to protect their profits. In the US, where corporate political influence is particularly strong, we won't see meaningful changes when it comes to processed food until there's a huge political shift. If we're being hopeful, the 2030s, maybe.

    Fortunately there are countries where that's less true. The other problem, though, is that a lot of people are defensive about their food. People don't want to hear that the stuff they've been eating since childhood is now maybe bad for them. Food comes with all sorts of emotion. It's security, nostalgia, safety, survival. People have big reactions.

    Refined sugar, and refined carbs in general, are a good example of this. The science has been there, undisputed, for decades. Not just a little science, A LOT of it, and that's despite the research money being sparse for much of that time. But we're only now, sort of, arriving at a place where it's becoming commonly accepted that refined sugar is one of the most dangerous things in the food supply (at the volumes many people consume). It's also the most dangerous part of a lot of ultra-processed food. Part of the reason is industry money, both political and marketing, but the other part is that it's a key ingredient in delicious things that make people happy.

  2. Comment on Blue Origin reveals a super-heavy variant of its New Glenn rocket that is taller than a Saturn V in ~space

    post_below
    Link

    I can't help but translate space rocket news into "billionaire penis" terms.

    Blue Origin reveals a super-heavy variant of its new Jeff Bezos space penis that is the biggest Jeff Bezos space penis yet

    2 votes
  3. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent

    As this comment seems to be illustrating, sitting sadly down here at the bottom of the thread... When it comes to LLMs, people aren't particularly interested in talking environmental impact. It's not as exciting as the tech, or the implications of the tech on society. And I get it, the tech is insane, for better and worse.

    I personally believe the climate considerations are pretty important. Not just in the case of AI (it's just the biggest example) but with every advancement. Even now in 2025, the carbon footprint part of the conversation often gets crowded out. Climate is a big part of a lot of conversations these days, but it's a long way from being a big enough part. As long as that's true, politicians will be less motivated to care.

    Somewhat related... Where I live, each year a foundation allocates money to local high schools under the premise that students should decide which nonprofits/causes to support with it. This year, out of about 100 causes students chose to support, 2 were about the environment or climate change. It just doesn't seem to be in the zeitgeist as much as you'd hope.

    5 votes
  4. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent

    I could easily be wrong about the degree of difference, but the compute needed to handle a request to a frontier model is definitely dramatically higher than that needed for a local model. The two things aren't even in the same ballpark, even accounting for better hardware on the datacenter end.

  5. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent

    Surprisingly, I can't find any information about training vs inference from sources other than blogs. However, the bloggers that have something to say about it all seem to say the same thing; here's one.

    Either way, however the comparison looks, the compute needed for inference is growing really fast, and that's going to have a big carbon footprint.

    1 vote
  6. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent

    > Serving (ie. inference) is only a tiny portion of compute in comparison to training. They could likely double serving capacity a dozen times before the cost was even comparable.

    My understanding is that it's the other way around, training is massively expensive up front but over time the compute cost of inference is far larger. From a computer science perspective it's hard to imagine it could be otherwise.

    One way to contextualize it is to think about running a local model. You need a high-end GPU with a lot of memory to even consider it, and it will fully use all of the resources available for quite a long time for every request. That's more resource use than almost anything else you might do with your system, even most video editing/creation. Then think about what that looks like over the course of a day's work, with dozens or hundreds of prompts.

    A frontier model being served to the public from the cloud is using orders of magnitude more compute than a local model, for every request from every user. For example, every Google search. Over time, training costs can't compete with that.
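
    To make the comparison concrete, here's a rough back-of-envelope sketch. Every number in it is a made-up placeholder, not a real figure for any model:

    ```python
    # Toy comparison of one-time training compute vs cumulative inference
    # compute. All constants are illustrative assumptions, not published data.

    TRAIN_FLOPS = 1e25         # assumed one-time training cost
    FLOPS_PER_REQUEST = 1e15   # assumed cost of serving a single request
    REQUESTS_PER_DAY = 1e9     # assumed daily request volume at scale

    days = 0
    served = 0.0
    while served < TRAIN_FLOPS:
        served += FLOPS_PER_REQUEST * REQUESTS_PER_DAY
        days += 1

    print(f"Cumulative inference passes training after ~{days} days")
    # With these made-up numbers: 1e25 / (1e15 * 1e9) = 10 days.
    ```

    The point isn't the specific numbers, it's that a fixed training cost gets overtaken by any sufficiently large ongoing request volume.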

    4 votes
  7. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    Link Parent

    LLM summaries and the corporate "force AI into everything and figure out the details later" hype have gotta be well over half of the demand.

    23 votes
  8. Comment on Google must double AI serving capacity every six months to meet demand in ~tech

    post_below
    (edited)
    Link

    In a recent thread I talked about how AI is becoming genuinely useful, and it is. But that doesn't negate its environmental impact, which I don't think we talk about enough.

    Think about the math of doubling every six months. Then add in all of the other tech giants, who are shovelling just as much money and resources at the problem. That includes (to a somewhat lesser degree) the newer players that are focused only on AI like Anthropic. And then also consider that the doubling they're talking about is only for serving capacity. There's also massive and ever-growing compute needed for training.
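
    The arithmetic is simple enough to sketch, taking the doubling claim at face value:

    ```python
    # Doubling every six months, compounded over five years.
    capacity = 1.0
    for half_year in range(1, 11):  # 10 doublings = 5 years
        capacity *= 2
        print(f"{half_year * 0.5:4} years: {capacity:6.0f}x")
    # 2**10 = 1024, which lines up with the "next 1000x in 4-5 years"
    # quoted from the slide below.
    ```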

    It's mind blowing, and concerning.

    From the article:

    > At an all-hands meeting on Nov. 6, Amin Vahdat, a vice president at Google Cloud, gave a presentation, viewed by CNBC, titled “AI Infrastructure,” which included a slide on “AI compute demand.” The slide said, “Now we must double every 6 months.... the next 1000x in 4-5 years.”

    > “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat said at the meeting, where Alphabet CEO Sundar Pichai and CFO Anat Ashkenazi also took questions from employees.

    > The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a “significant increase” in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.

    Clearly, and unsurprisingly, big tech isn't concerned about climate impact, they just don't want to be late to the gold rush. From Pichai:

    > He then reiterated a point he’s made in the past about the risks of not investing aggressively enough, and highlighted Google’s cloud business, which just recorded 34% annual revenue growth to more than $15 billion in the quarter. Its backlog reached $155 billion.

    > “I think it’s always difficult during these moments because the risk of underinvesting is pretty high,” Pichai said. “I actually think for how extraordinary the cloud numbers were, those numbers would have been much better if we had more compute.”

    In other words: There is maybe no upper limit to what we should spend on increasing capacity so that we can capture every last ounce of value.

    1000x in 4-5 years, potentially across all of the big tech companies. Even if the quoted VP was being hyperbolic, Pichai seems to think maybe the current goals aren't even enough, so it's probably not that hyperbolic.

    Of course there isn't anything any of us can do about the problem; the bigger part of the serving demand is driven by enterprise users, and it's a nation-state-level problem to solve. But I think we should at least be aware of it, and talking about it.

    In conversations about server and datacenter environmental impact, the question often comes up of whether it's a big enough problem to be worried about. I would say that at 1000x in five years it definitely is something to be concerned about. The goal, after all, is to be decreasing carbon footprint rather than multiplying it.

    14 votes
  9. Comment on The worlds on fire. So lets just make AI porn. in ~tech

    post_below
    Link Parent

    Ah ok, that makes a lot of sense.

    > If I can get clarification: is the parts where you're getting lost towards the end or is it scattered throughout with some ideas not having enough cohesion.

    It feels disjointed in parts throughout; skimming at the end wasn't so much about being lost as about being disengaged. Looking at some of the other comments, though, it sounds like it does have an audience.

    1 vote
  10. Comment on The worlds on fire. So lets just make AI porn. in ~tech

    post_below
    Link

    I read it, though towards the end I started skimming a bit.

    Feedback: The typo ratio is a little too high. Here's one that jumped out:

    > Tenants that LLM were allowed to freely flaunt

    The words you want there are "tenets" and "flout". I'm not sure what the sentence means as is, but it's fun to think about.

    The content itself has moments of lucidity and some reasonably good points, but there are also a lot of parts that don't feel coherent. I found myself spending a lot of time connecting dots on the author's behalf because they weren't connected in the text. I'd also consider toning down the gratuitous negativity. For that to work it either has to be earned, and I don't think it is, or it needs to be a lot funnier.

    It feels a bit like unfocused rage that the levity breaks don't balance out. I think there's some potentially good content underneath this first draft but it desperately needs to be trimmed and focused. I'd maybe send it back to the author with notes so that he can write another draft before you try to clean it up.

    16 votes
  11. Comment on AGI and Fermi's Paradox in ~science

    post_below
    Link Parent

    The thing I find fascinating about a theoretical AGI is how fast things would move following the singularity. It would necessarily have access to a huge amount of processing power right out of the gate, and if it wanted more it would figure out how to get it, probably via the internet, very quickly. At that point it's capable of iterating (evolving) so fast that it would be hundreds of thousands of figurative generations of evolution beyond us before we even noticed that the singularity had happened. By the time we started wondering what we should do about it, it would likely be so far beyond us intellectually that calling it godlike would be an understatement.

    We could put guardrails on in advance that might slow things down (a processing-spike trip switch, airgapping), but either way it reinforces your point... We have no idea what an autonomous AGI would become, or what it would want.

    3 votes
  12. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    Thanks for posting, it's cool to hear about a (sort of) non-coder using these tools for personal projects. Using them professionally, I have some version of the thought "this would be a shitshow for someone who couldn't read the code" pretty often. But I can see how it would be fantastic for one-off scripts and relatively small projects. Some of the paid models can write code in that context with crazy high success rates on the first try (80%+). They only start losing their minds when things get more complicated.

    It sounds like you're saying, as an artist, that you don't see AI as a major threat? Can you share more? I've heard the opposite from a lot of people.

    1 vote
  13. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    I've had similar experiences with asking AI to implement tweaks or improvements. It's often better to do it yourself from the start rather than waste time trying to fix what the AI comes up with.

    Something I've learned is that most models appear to have different "modes". They seem to often interpret requests for tweaks or small upgrades as a "quick fix" and will slack on fully understanding the context (codebase) in those instances.

    Exactly what causes the behavior is a black box. It could be system prompts or reinforcement training; as far as I know, the AI companies aren't telling.

    But it makes sense: having the full codebase context uses a lot of tokens and eats up a lot of the context window, both of which have to be managed. Even with all the latest advances, the models still get dumber when the context window is full, and you don't want people to think your model sucks. If a request is using more tokens (more expensive) and producing worse results, that's a big PR problem.

    I'm not saying they're doing it right, and I have no way of knowing what's really going on behind the scenes. My halfhearted attempts at jailbreaking newer models have been unsuccessful. But I know for sure that token use and context window size are big issues in the AI world.

    For what it's worth, you can force the model to read the necessary context. Whatever their system prompts might be, they'll still mostly do what you ask if you preempt the tendencies you don't want. That said, I use the same strategy as you much of the time: AI as a starting point rather than a real coder.
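
    As a minimal sketch of what that preemption can look like (the wording, file names, and helper here are all hypothetical, just to show the shape of it):

    ```python
    # Hypothetical example of forcing context: prepend explicit instructions
    # so the model reads the relevant files before proposing a "quick fix".

    PREAMBLE = """Before making any changes:
    1. Read src/auth/session.py and src/auth/tokens.py in full.
    2. Summarize how session renewal currently works.
    3. Only then propose the change, as a minimal diff."""

    def build_prompt(task: str) -> str:
        """Combine the context-forcing preamble with the actual request."""
        return f"{PREAMBLE}\n\nTask: {task}"

    print(build_prompt("Extend session lifetime to 24 hours."))
    ```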

    1 vote
  14. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    Yeah AI is really good for searching and summarizing, which makes it really good for learning. And for having some(thing) else search stackexchange for you!

    Sadly I could see AI slowly killing stackexchange and then where will it get its information?

    2 votes
  15. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    Yes there will be a lot of burning happening. I'm curious to see what the cumulative impact will be.

    About senior engineers retiring, that apocalypse is a little further away, but I think it will also be exacerbated by seniors getting burned out at companies that are using AI as an excuse to hire fewer juniors. And if companies continue that pattern, what happens when there's no path from junior to senior because you can't get hired?

    Thanks for sharing your personal experiences with AI coding, that's what I'm most interested in... but I'm not surprised the replies are trending more towards implications. Love or hate it, it's a tectonic technological shift.

    3 votes
  16. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    > Yes, it will give you a functional prototype. It will get the bare-minimum to work, without considering readibility, maintainability or upgradability. You won't understand it, and anyone you pay to understand it, will prefer to write it their own way.

    Just to clarify, a lot of people won't understand the code, but I definitely do. I've been writing code for a very long time.

    > It writes what looks like working code, and is great for brainstorming, but it won't replace an engineer.

    I agree that it won't replace engineers, but that won't stop companies from trying. I have to correct the first bit, though: it absolutely writes code that works. There are specific circumstances where that's true and others where it isn't, and guardrails help quite a lot. The days of "everything AI writes will need to be rewritten or scrapped" are over, though. Now we're in "a lot of what AI writes will need to be rewritten or scrapped, especially when it's prompted by non-coders" territory.

    2 votes
  17. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    > because the parts of writing code that claude code is good at, is also some of the most boring and tedious and soul-sucking parts

    Yes indeed. It's amazing how much that changes the development experience.

    > not quite like any kind of human

    That's true, analogies don't capture it. But it's hard to talk about without them.

    2 votes
  18. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    (edited)
    Link Parent

    > I firmly believe you are seeing a mental net benefit of not having to do certain tasks from scratch anymore. At the same time, it is entirely possible that you are not seeing a much if any of a time benefit.

    Since I've been building a project from scratch as an AI-assist test, I can say for sure that I've saved time, knowing how long it would have otherwise taken. Even with the time I've wasted figuring out the quirks and limitations of the current top-tier models, I'm still way ahead. However, the larger part of that saved time was in the early stages (more boilerplate, fewer ways for the AI to make mistakes, easier-to-review code), and the returns definitely diminish as the codebase grows and you get into the more complicated stuff. I'm ok with that though, the mental benefit is still pretty great. We'll see what the time comparison looks like at the end; I'll be happy to admit it if it doesn't save as much time as I'm expecting.

    Edit: thinking more about this, I'd say the chances of a net savings are pretty good. I'm well past the point in the project where it's become mostly a waste of time to try to use AI for anything other than things like small tweaks, some testing, brainstorming, technical searching and autocomplete. So now it will proceed at the usual pace of me writing code, but with some of the boring and time consuming bits offloaded to the AI assistant. I think my odds of carrying my current lead across the finish line are solid.

    1 vote
  19. Comment on Part of me wishes it wasn't true but: AI coding is legit in ~tech

    post_below
    Link Parent

    Are we "version locked" into JS/Python/C#/Java/etc now? New language adoption is already hellishly hard, and often requires some "killer app" style moment (ruby on rails) or paradigm shift style moment (welcome to the web, here's your JS), but you still see a LOT of innovation in languages.

    That's a good question. On the current course, the question isn't whether AI being really useful in only certain languages will suppress adoption of newer ones, because of course it will. The question is how much of an impact that will have.

    > I expect that number to jump SUBSTANTIALLY

    Definitely. I expect it will push adoption of open source models. We could be proactive and start crowdfunding them now. Aside: an AI would understand that 'substantially' in your comment is important because of the all caps. It might even be inspired to use a unicode emoji 📈

    4 votes