Just highlighting a commenter on that blog post, who raised a similar point as some other folks: the cost of inference is low, but that figure ignores the cost of training, which is dramatically higher.
> If Anthropic at any point in time uses 1 GW and inference makes up 8% of that (see Infrastructure-side gut-check), should you not scale your own consumption by a factor 12.5 to arrive at a true run rate which accounts for the research and training time which enables the inference in the first place? 10 extra fridges?
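(To make the quoted arithmetic concrete, here's a quick sketch. The 8% inference share is from the quote, the 1,300 Wh/day figure is the article author's, and the ~1.5 kWh/day per fridge is an assumed typical value, not from the post.)

```python
# Sketch of the commenter's amortization argument. Quoted: the 8% inference
# share and the article's 1,300 Wh/day median usage. Assumed, not from the
# post: fridge_kwh_per_day.
inference_share = 0.08                      # inference as a fraction of total draw
scale_factor = 1 / inference_share          # 12.5x
daily_inference_kwh = 1.3                   # the author's median Claude Code day
true_daily_kwh = daily_inference_kwh * scale_factor

fridge_kwh_per_day = 1.5                    # assumption: typical refrigerator
extra_fridges = (true_daily_kwh - daily_inference_kwh) / fridge_kwh_per_day
print(f"scale factor: {scale_factor:.1f}x")                 # 12.5x
print(f"amortized: {true_daily_kwh:.1f} kWh/day, "
      f"~{extra_fridges:.0f} extra fridges")                # ~10
```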
If we stopped training new models tomorrow, this argument holds water (2ml or so :3). Seems shaky though given that we're still amortizing the costs of our current models, while developing new ones?
As a consumer I don't have direct control over that, though. One possibility is to use an older model or a cheaper model instead of the latest and greatest?
I'm mostly using Sonnet instead of Opus because Sonnet seems good enough.
Oh certainly! I'm not calling you or anyone else (or myself) out for their individual actions; apologies if I came across as doing so. It is what it is, ultimately. My critique was thrust at the author's argument around the impact of a query sent to e.g. Anthropic, since it isn't taking the full cost into account (i.e. including the training cost), only a lower number.
I don't think the author goes into it, but I'm also not a fan of the "carbon footprint" metric being used to criticize an individual's actions. Ultimately, the bulk of the impact happens outside of an individual consumer's direct control, so trying to pin all the blame on them has always felt like a farce (doesn't stop me from eating vegan and riding a bike, though).
> Ultimately, the bulk of the impact happens outside of an individual consumer's direct control
Aren't most business activities ultimately for consumers? There's a reason carbon taxes are so effective. They help consumers see the real price of emissions for every product.
Sorry, I don't think I explained myself well, so you're asking a question of a point I didn't bring up (i.e. "carbon emissions aren't caused by individuals' actions", which is false imo).
My intent was to claim that individuals cannot effectively leverage their market participation in a way that incentivizes corporations and governments to become less impactful to the climate. I put forward that this is for (at least) two reasons:
1. Consumers can often only choose between options that all emit an equally large amount. My grocery store only stocks one type of tomato; I can't choose between the less and more impactful fruit.
2. Market-driven optimization seems likely to push actors into local optima, whereas we need a large push to leap out of that and into states that could realistically alter our current climate trajectory. An individual commuter can choose to buy a Prius or a Tesla, but it takes a provincial or federal initiative to buy a subway.
Carbon taxes tend to be effective when they make businesses see the cost, not necessarily your average end-consumer. They may drive up prices for consumers of some products as a side-effect of businesses passing those costs onto them, but I'm not particularly convinced that this is the main driver of change as opposed to the carbon taxes cutting into businesses' bottom line.
I’m speculating here, so apologies in advance if you’re basing that on info I don’t have, but I’d always assumed that pushing the cost onto the business affected their supply chain and process decisions far more than it affects consumer behaviour.
I wouldn’t expect most people to notice a 1% price difference in most products they buy, for example, but to a major manufacturer that’ll add up to a pretty significant difference in profit. They’ll absolutely build their process around coal furnaces and grinding up endangered species if solar power were 1% more expensive, and you could even argue they’re incentivised to do so because if they don’t, their competitors will. But a carbon tax gives both options something closer to the true price, and all the companies rationally choose solar on a level playing field instead.
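(A toy numerical illustration of that dynamic; every figure here is invented for the sake of the example, not taken from any real market.)

```python
# Invented figures: a 1% energy-cost gap is invisible per item but large at
# manufacturing scale, and a modest carbon tax flips the rational choice.
annual_energy_spend = 500_000_000       # hypothetical manufacturer, $/yr on energy
coal_price, solar_price = 1.00, 1.01    # normalized; solar 1% more expensive

firm_saving = annual_energy_spend * (solar_price - coal_price)
print(f"coal saves the firm ${firm_saving:,.0f}/yr pre-tax")    # $5,000,000

item_price, energy_share = 5.00, 0.10   # hypothetical consumer product
print(f"consumer sees ${item_price * energy_share * 0.01:.3f} extra per item")

carbon_tax = 0.05                       # hypothetical tax on coal's emissions
choice = "solar" if solar_price < coal_price + carbon_tax else "coal"
print(f"with the tax, every firm rationally picks: {choice}")
```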
From the article:

> On a median day, I estimate I consume 1,300 Wh through Claude Code—4,400 “typical queries” worth. (For those wondering, this is $15-20 of token spend on a typical day.)
>
> [...]
>
> So, if I wanted to analogize the energy usage of my use of coding agents, it’s something like running the dishwasher an extra time each day, keeping an extra refrigerator, or skipping one drive to the grocery store in favor of biking there. To me, this is very different than, in Benjamin Todd’s words, “a terrible reason to avoid” this level of AI use. These are the sorts of things that would make me think twice.
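(For scale, the author's own numbers work out to roughly 0.3 Wh per query; the appliance figures below are assumed typical values, not from the article.)

```python
# The article's figures: 1,300 Wh/day over 4,400 "typical queries".
daily_wh, queries = 1300, 4400
print(f"~{daily_wh / queries:.2f} Wh per typical query")   # ~0.30 Wh

# Assumed household comparison points (typical values, not from the article):
comparisons = {"dishwasher cycle": 1200, "refrigerator-day": 1500}
for name, wh in comparisons.items():
    print(f"{daily_wh} Wh/day ≈ {daily_wh / wh:.1f} x {name}")
```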
Interesting read!
For me it's interesting when I see people criticizing the electricity use of AI citing environmental concerns, while conveniently forgetting to criticize the fact that it's only an environmental concern because we're still reliant on fossil fuels and an outdated power grid that doesn't prioritize energy storage and energy sharing, at least here in America.
In fact, come to think of it, most popular criticisms of AI would be completely invalidated if you removed late stage capitalism from the equation.
Energy is never free. It doesn't matter if you're getting it from solar, hydro, wind, or coal. The demand you're placing on the grid requires more solar panels to be manufactured, more wind turbines to be built, and more dams to be erected; all of which have an environmental impact.
Renewables have far less of an impact than fossil fuels do, but manufacturing a solar panel is still a very resource intensive process that has a negative environmental impact. If we're spending a ton of energy without much benefit, we're still destroying the environment for no reason.
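(For rough scale on "far less of an impact": the commonly cited lifecycle-emissions medians, approximate IPCC AR5 ballpark figures from memory; treat the exact numbers as indicative only.)

```python
# Approximate lifecycle emissions medians, gCO2e/kWh (IPCC AR5 ballpark,
# from memory -- the point is the ratio between sources, not the decimals).
lifecycle_g_per_kwh = {"coal": 820, "gas": 490, "solar PV": 45,
                       "hydro": 24, "wind": 12}
coal = lifecycle_g_per_kwh["coal"]
for source, g in lifecycle_g_per_kwh.items():
    note = "" if source == "coal" else f"  (~{coal / g:.0f}x less than coal)"
    print(f"{source:>8}: {g:4d} gCO2e/kWh{note}")
```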
What? AI is a product of late stage capitalism... you're talking about a product built to hoover up incredible amounts of personal data, while relying on the stolen work of others, funnelling all of it to the benefit of a corporate entity with direct ties to an imperialist military regime. This thing was funded with the ill-gotten gains of investors and continues to drag down the economy in a variety of ways. If you removed late stage capitalism from AI, AI (in its current form) would not exist.
The companies wouldn’t exist, but the tech would - or something as close as you can reasonably assume in this kind of counterfactual. These are models that come from adapting scientific work, by researchers who said “making computers handle natural language is incredibly useful for data parsing and machine translation” or “the statistical properties there are fascinating, I wonder how far we can push that?” or the old reliable “holy shit, I can make software talk, that’s goddamn cool”.
In being justifiably pissed off at the major companies behind most of the negative impact, you’re actually giving them far too much credit. They don’t own “AI”, they aren’t by any means an essential part of the technical ecosystem, and allowing your dislike for them to spill over onto the concept of an LLM or an image model or whatever is kinda buying into that very capitalist idea of the corporation being synonymous with the invention - rather than the corporation packaging and monetising the last few percent after university researchers and OSS developers and hyperfocused nerds spent decades laying the foundations. The major tech companies do have a lot of incredibly skilled people working for them too, and I absolutely don’t want to undermine that; if anything I’m saying the opposite, that a lot of those same people would still be doing equally good or better work without the profit motive.
The timeline might have played out a few years differently, the burn would likely have been slower, and the hardware might not have been such a focus without eleventy zillion dollars pouring into NVIDIA. But we’d still have the tech - with all the possibilities and potential consequences that brings - without anything like as many misaligned incentives, and likely with a tiny fraction of the harm and resulting backlash because of that.
I mean, you're basically making my point for me. The worst parts of AI right now, the data harvesting, labor replacement, and stolen artwork, are all tied to the system it's being built under.
If people weren't forced to tie their survival to jobs, and artists weren't forced to monetize every piece of artwork as income just to live, this would be a very different conversation. Even calling it "theft" depends on a capitalist view of art as property and livelihood. And when people criticize AI in those terms, they often end up reinforcing the same capitalist assumptions they say they oppose.
To use a fictional comparison, Star Trek’s holodeck works in a very similar way. The holodeck computer in Star Trek also generates things from records of existing people, harvested data, artwork, published stories, historical data, and reference material, but nobody treats that as some inherent moral crime or art theft, because it exists in a post-scarcity world.
And I'm not trying to defend AI as it exists right now, I'm just saying that in the moments a lot of people criticize AI they tend to accidentally validate capitalistic power structures. And that seems interesting/odd to me.
> To use a fictional comparison, Star Trek’s holodeck works in a very similar way. The holodeck computer in Star Trek also generates things from records of existing people, harvested data, artwork, published stories, historical data, and reference material, but nobody treats that as some inherent moral crime or art theft, because it exists in a post-scarcity world.
(I agree with your point, but imo the Star Trek comparison isn't terribly compelling ... no one has concerns about the holodeck in fiction because none of the writers wanted to deal with that quagmire. See also the ethics -- and lack of objection to -- simulating people from your day-to-day life 🤢)
I was thinking of the holodeck more as a concept within the Star Trek universe than as it's literally depicted in the show.
But also, Star Trek TNG and Voyager have gone into the issues of simulating real people, not just in day to day life but also real people they haven't met. TNG's episode "Booby Trap" goes into that when Geordi simulates Dr. Leah Brahms, who was a designer of the Enterprise engines, and ends up getting romantically involved with her. They depict this as very clearly unethical, and this is reinforced when in the next season Geordi meets the real Dr. Leah Brahms, who is horrified and disgusted to learn about the simulation.
And then there's all the Reginald Barclay stuff and his "holodiction." They very clearly depicted his use of the holodeck in this way as inappropriate and an abuse of privacy, with the crew being depicted as extremely uncomfortable when they found out that Barclay was simulating them on the holodeck in exploitative ways.
And it's not even approached just once: in Voyager, Barclay simulates the Voyager crew, and once again they go into the ethical dilemmas of that and even show the politics of it, when Barclay was almost kicked out of Starfleet in those episodes.
You could argue that because neither Geordi nor Barclay faced permanent consequences for this, either officially or from the crew and people they simulated, Starfleet and that universe, or even the writers of Star Trek, didn't take it seriously. But to me that's just a suspension of disbelief I have to contend with, given the real world logistics and politics of creating a serialized show in the 90s when it comes to actors, contracts, and characters, while also trying to address these kinds of ethical questions in the writing. There's also the kind of "reset" that each episode tends to have when a show is serialized.
And my general headcanon is that holodecks do typically have safeguards preventing them from simulating real people, but they're as easy to disable as the safety measures. Or alternatively, personal or "commercial" level holodecks normal people would have access to do have these safeguards and no ability to turn them off, but official Starfleet holodecks have fewer restrictions due to them being on official Starfleet ships.
And even on top of all of that, Voyager had an episode where The Doctor, a holographic entity at this point, wrote a holonovel, and there's a legal battle over the definition of art and whether holograms have the right to own their creative work. Sure, the Doctor at this point in the show is an actual intelligence and not just a simulated AI, but it still rhymes with and touches upon some things we're currently going through with AI.
The main thing ALL of these things have in common in Star Trek is that all these simulations on the holodeck are created from mined data, historical data, and the personal logs and likenesses of real people, yet as a concept the holodecks aren't generally hated by fans or even people familiar with Star Trek.
Fair enough! Apologies; I don't have very comprehensive knowledge of Star Trek, and only casually watched it as a kid.
To be clearer with my original point: using a fictional TV comparison (as you noted) doesn't seem compelling. The fictional population of the fictional Star Trek universe behaves in network-approved ways that a team of writers agreed on, so their fictional opinions on AI-generated content have only the slightest bearing on the conversations we're having about decidedly non-fictional life.
Perhaps to be even clearer: if Gene Roddenberry rose from the dead, proclaimed that an AI stealing art is actually cool and normal, then promptly collapsed back into a heap, I would give an equivalently fractional slice of a damn about his opinion.
(edit) And just to underline this: I agree with you! These conversations about AI generated content often end up roping in the notion of intellectual property rights, portraying them as an obvious social good. That's a heck of an argument, and should hopefully be treated as perpendicular to the current issues we're dealing with.
Notably, LaForge and Barclay are officers. LaForge is a bridge officer and the highest ranking engineering officer. They would have clearance that the average holodeck user would not (keeping in mind there are hundreds of people on board, many of whom are not Starfleet). Especially since their responsibilities include maintaining and overriding a device that's had a track record of injuring people or posing a threat to the ship.
> For me it's interesting when I see people criticizing the electricity use of AI citing environmental concerns, while conveniently forgetting to criticize the fact that it's only an environmental concern because we're still reliant on fossil fuels and an outdated power grid that doesn't prioritize energy storage and energy sharing, at least here in America.
Fwiw, as a person who criticizes both AI data centre power consumption and power grid issues (but isn't American), the concern is that it's using such an incredibly large amount of power to perform tasks that most do not see as valuable (or at least, insufficiently valuable vs. the costs). Skimming this article, it looks like some projections claim that data centres will consume ~6.7-12% of the total yearly US energy production. That growth has been causing household electricity rates to increase, which is straining an already strained population.
So ... IMO at least, my criticisms of AI would not be invalidated if you removed late stage capitalism from the equation. Power generation has always taken a long time to come online, since it's expensive to scale up, and it typically grows slowly enough to be predictable -- the current growth of AI data centres bucks that trend. ROI on that power generation + data centre buildout is poor, too: if one interprets the METR and MIT researchers' results as meaning that traditional work and business practices aren't amenable to AI/LLM usage in their current forms (which goes both ways), we'd need reforms across all sectors, which is only loosely related to late stage capitalism.
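(Scale check on that projection, assuming US annual electricity generation of roughly 4,200 TWh; the article's exact baseline isn't quoted here, so treat this as an order-of-magnitude sketch.)

```python
# If data centres reach 6.7-12% of ~4,200 TWh/yr of US generation:
us_generation_twh = 4200            # assumed recent annual figure
for share in (0.067, 0.12):
    print(f"{share:.1%} -> ~{share * us_generation_twh:,.0f} TWh/yr")
# ~280-500 TWh/yr: comparable to the entire consumption of a large country
```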
> the concern is that it's using such an incredibly large amount of power to perform tasks that most do not see as valuable
I freely concede that there's a lot of room for debate about whether LLM agents are a good thing. Also about what they should be used for, and what they shouldn't, and how they should be regulated.
But I think it's pretty evident that a lot of people consider LLMs useful.
> if one interprets the METR and MIT researchers' results as meaning that traditional work and business practices aren't amenable to AI/LLM usage in their current forms (which goes both ways), we'd need reforms across all sectors, which is only loosely related to late stage capitalism.
I've talked about the issues with the METR study before, so I won't repeat that here; it's not necessary. This is from the header of the landing page:
> Measuring the Impact of Early-2025 AI
From early 2025 to early 2026 the technology has changed dramatically. There is no way, like really no way, to make a case that these tools aren't useful in a wide variety of contexts. This is easy to demonstrate. It's happening all around you.
Perhaps you avoid AI entirely, in which case I applaud your principles. However, the "they're not really useful, people are just hallucinating" angle is no longer compatible with reality.
Oof, OK. I think my comment was too poorly written, and that's causing some confusion. Thankfully, Claude can help explain it 😅 I'm sorry for writing a hackle-raising message; I occasionally have very subtle views on things, and they often don't survive translation into text.
kacey:
(posted my original message, minus the context quote)
Could you help me understand this comment? It's a bit wordy and confusing.
Is this making a claim that a lot of people don't find LLMs to be useful?
Is this making a claim that LLMs aren't useful in a wide variety of contexts?
Is the author implying that they're avoiding AI entirely, or stating something like "they're not really useful, people are just hallucinating"?
Claude:
This is a fairly dense comment (K: oof) — let me break it down in relation to your three questions.

On whether the author is claiming people don't find LLMs useful

Not quite. The author is making a more precise point: that the value people get from LLMs doesn't justify the cost, specifically the energy cost. It's a cost-benefit argument, not a pure "nobody likes it" claim. The phrase "insufficiently valuable vs. the costs" is the key — some value is acknowledged, but it's framed as not enough to justify the power draw.

On whether they're claiming LLMs aren't useful across many contexts

This is where the METR/MIT reference comes in, and it's the trickiest part of the comment. The author is referencing research suggesting that LLMs don't integrate smoothly into real-world work as currently practiced — not that they're useless in principle, but that existing workflows, business processes, and organizational structures aren't set up to actually benefit from them. The "which goes both ways" aside means this cuts against both AI boosters ("it'll transform everything!") and pure skeptics ("it's useless") — the reality being more nuanced: it could be useful, but only after widespread reforms across industries, which is a very high bar (K: well actchually, I was claiming that LLM tools and business both need to evolve, though not necessarily to the same degree).

On whether the author is avoiding AI or calling it all hype

Neither, really. They're not making a personal lifestyle statement, and they're not calling LLMs a hallucination or scam. Their position is more structural: the deployment of AI at current scale is outpacing both the energy infrastructure needed to support it and the organizational reform needed to make it actually productive. The ROI isn't there yet, and the costs (energy, grid strain, rising household electricity bills) are already being felt by ordinary people now.

The core argument, simplified

"Even setting aside capitalism-specific critiques, AI data centers are growing too fast for the power grid to keep up, the energy costs are real and already hurting people, and the productivity gains to justify those costs require societal changes that haven't happened yet."
For context, I'm currently nudging an LLM in the other window to work on an ML project of mine, and I'm banking the next several years of my life (and a great big chunk of my savings) on a long shot bet that LLM tooling is useful in some obscure manufacturing applications.
Ok fair enough, maybe I misread.
The reason I speculated that you avoid LLMs is that it's difficult to square "most do not see as valuable" with having used current gen models. It's hard to imagine that most people, even for light applications like asking a chatbot for a recipe, wouldn't see it as valuable relative to the minuscule increase in their daily energy footprint it represents. Virtually any other footprint-related action they could take (like biking vs driving) would have a bigger impact by an order of magnitude.
Whereas for power users (most of whom are using a lot less than the author's example of multiple parallel agents for hours), the utility seems entirely worth a fraction of a dishwasher run.
I am concerned about the overall LLM footprint, though. It's negligible on an individual basis, but in aggregate it shouldn't be ignored. I only wish regulation moved quicker; some of those tens of billions from each funding round should be going to offsetting the water and power impacts. At a minimum, force them to be carbon neutral and mitigate their impact on local communities.
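(Rough numbers behind the "biking vs driving dwarfs it" comparison upthread; the per-query figure comes from the article's 1,300 Wh over 4,400 queries, and everything else is an assumed ballpark.)

```python
# Casual chatbot use vs one short car trip, all assumed ballparks.
query_wh = 1300 / 4400              # ~0.3 Wh per query, from the article
daily_queries = 15                  # assumed casual user
chatbot_wh = query_wh * daily_queries

round_trip_km = 10                  # assumed short errand
gasoline_wh_per_km = 600            # ~0.07 L/km at ~8.9 kWh per litre
drive_wh = round_trip_km * gasoline_wh_per_km

print(f"casual chatbot day: {chatbot_wh:.1f} Wh")
print(f"skipping a {round_trip_km} km drive: {drive_wh:,.0f} Wh, "
      f"~{drive_wh / chatbot_wh:,.0f}x a chatbot day")
```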
Oof. Sorry, I missed this response a couple weeks ago. Your recent thread about your dislike of the METR study reminded me of this conversation!
> The reason I speculated that you avoid LLMs is that it's difficult to square "most do not see as valuable" with having used current gen models. It's hard to imagine that most people, even for light applications like asking a chatbot for a recipe, wouldn't see it as valuable relative to the minuscule increase in their daily energy footprint it represents. Virtually any other footprint-related action they could take (like biking vs driving) would have a bigger impact by an order of magnitude.
Fair enough! I don't find it difficult to imagine, because of the massive outpouring of people who have claimed that they don't see them as valuable. It's hard sometimes to recall that we live in social bubbles of our own selection and making, but the people who we choose to associate with are often ones who are closest to us socially and economically. The overwhelming majority of people don't think about cost/benefits in a global sense; they're not comparing dramatic, lifestyle altering choices (e.g. moving closer to work, so as to cycle) with day to day costs (eating a steak, subscribing to Copilot, flying to Hawaii). They look at the cost of a Claude Pro subscription, and consider whether it's worth the same as -- say -- a Netflix sub, then pull the trigger over the course of probably 30 seconds.
This forum is a microcosm of people who are looking for interesting, deeper conversations than the average, by far. We're all spending waaaaaay more time discussing things that're dismissed or selected in an instant by the general population.
Also -- re. your suggestion of using LLMs for a recipe -- I've had an AI nuke some rice into a carbon disk, and fudge up the proportions on some baked goods badly enough, that I am extremely reluctant to suggest them to people for that purpose. They do a great job of cleaning up blog post recipes, though!
Overall: the backlash against AI has been absolutely everywhere. The perception of value is critical in these cases: if people don't think it's a good idea, it doesn't matter whether it is or not. A friend hates AI so much that the friction of putting their email into Claude was too much for them, but using the free Gemini tooling that came on their Chromebook was enough of a gateway to get them interested. For those people -- if there isn't a nudge to start using the tooling -- they will forever see it as trash to be discarded.
> [...] offsetting the water and power impacts.
Oh, agreed. The water impacts could be zero if we forced them to use sane systems (or cooling towers, like in nuclear power plants!)
Overly overall: I don't think we're disagreeing by much? Just about our perceptions of other peoples' perceptions, which is admittedly a dicey problem to begin with.
Just wanted to share a similar post from last year, talking about the subject: https://tildes.net/~tech/1o3r/we_did_the_math_on_ais_energy_footprint_heres_the_story_you_havent_heard