I think this is the crux - the article isn't aimed at someone who uses LLMs or other transformer-based tools on the regular, but rather at a 'booster'.
So, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself.
No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality.
Otherwise the 'It doesn't work' rhetoric seems so strange to me. Yes, the predictions of future performance are quite extreme. But these systems are doing useful work for me on the regular (Agentic search, summarization, captioning, as a programming assistant).
Yeah, I'm not in the target audience because I don't want to argue with AI boosters. It seems better not to engage? Unless you're genuinely curious about them.
There are certain writers (Fredrik deBoer comes to mind) who have a regular schtick of arguing with bad ideas floating around somewhere on the Internet. It's reacting to things I don't care about, sometimes published in social media like TikTok that I don't even use. He's pretty up front about having to do this to make a living. He's a good writer, but the only thing for it was to unsubscribe.
Do you work in tech? In my experience, there are a LOT of AI boosters in the industry right now. They have a tendency to try to derail a lot of other projects with false AI promises and even dire warnings that 'if we don't commit resources to AI right now, we'll get left behind'. There are shades of truth to this in a very limited set of niches, but in most cases I think it's good to have some ammunition to fight back against these people, since they're generally hellbent on the idea of automating away all of our jobs.
When I first read this piece, I felt pretty confused because while I’ve seen “AI boosters” around, they just kind of come off as any other software salesperson to someone outside the industry. I don’t need a 15000 word essay deconstructing why some company is actually overpitching some product for our business; it happened before AI and will happen so long as sales exists.
I wonder if the people that feel this article is overreacting are in the same boat that I am. From outside of tech, AI just feels like any other tech innovation. Some products work, some products don’t. Everyone says they are the former. It’s just the sales song and dance.
However, if you are working on the software stack, there’s legitimate discussion about what tools go into that stack and how you use AI in your product, and I can see how “AI boosters” could be legitimately harmful to business productivity. That perspective justifies an article like this a lot more.
Should also bear in mind that unlike many other big tech innovations like cloud or even mobile apps, AI boosters target small business owners that really don't have the resources or expertise to make informed decisions. And there isn't even a good playbook from these people on how to see any ROI. It's all productivity gains, but unlike something like process automation or mechanization, there are no clear metrics to measure the related dollar-value gains.
To give a personal anecdote that's fresh in my mind, I attended a family event and there's a guy who usually likes to talk consumer tech with me: EVs, home automation, self-hosting. But last night he showed me GPT logs that contain very specific and sensitive information about his wholesale seafood business, trying to get any sort of guidance on how to grow since business is very slow. And this LLM is recommending things that he knows to be BS. For example, cutting "excessive" costs like refrigeration and the unionized staff.
But he believes that the fault is with him, so he wants to know how to hire a prompt engineer to get correct answers. And I sort of had to lead him down the path of the Elon Musk logic: if it can talk convincingly about everything you are not an expert in but instantly fails in your field, then it most likely isn't just lacking in that one area.
The problem with entertaining AI (or any hype) boosters is that they rarely have the users' best interests at heart. They know corporate isn't going to bite. Way too much risk in the tech. Every CIO knows it's not hard to work out the true cost of such a service, and it's only a matter of time before the service providers start enshittifying the whole thing. GPT-5 all but confirmed most people's speculations. So all they can do is try to get as many retail/SMME customers as possible into the FOMO spiral and develop a forced dependency on the technology.
My friend is a university professor whose department's AI boosters are trying to direct more and more funding and new hires towards AI, at the direct expense of all other areas.
From the article:
In 2024, nearly 33% of all global venture funding went to artificial intelligence, and according to The Information, AI startups have raised over $40 billion in 2025 alone, with Statista adding that AI absorbed 71% of VC funding in Q1 2025.
It is a very serious issue that resources are so disproportionately allocated to something that is so overwhelmingly useless, when they could benefit humanity so much more if directed elsewhere.
We're also being pressured to use AI on the non-faculty side of the equation. I'd say why but the answer is "to cut staffing costs" and I hate it.
I’m retired so this is thankfully not an issue for me. I only read and write about AI stuff recreationally. :)
I envy your position as spectator! I suspect it's a bit like watching a very promising trainwreck from the outside. A bit more worrying when you're sitting towards the back of the train, trying to use the loo when the trainwreck begins.
These boosters aren't just "floating around somewhere on the internet".
My friend is a university professor whose department has a fair number of AI boosters, which means more and more resources are being allocated to AI folly. Those resources are taken away from actually important topics that could advance humanity.
It's like Elon Musk trying to terraform Mars when terraforming the Sahara desert would be i) at least plausibly feasible, ii) a lot more affordable, iii) magnitudes more beneficial.
I think, importantly, the author isn't arguing that AI does nothing useful. They're arguing that it is not transformational.
I read the first 2 paragraphs and felt like this: https://www.youtube.com/watch?v=qOQjzVWJkCA
I'm guessing there's some context here, but the author really comes out swinging.
I agree with you. I think that there's a lot of emotion driving the article, to the point that it does it a disservice. However, I find some of the ways the author suggests engaging useful, though more in a general skeptic's mindset. Usually when talking to true believers in something (for example, conspiracy theories) you'll see very similar rhetoric, and generally asking them to be specific or to explain their own experiences, rather than what they've seen or read, makes their own argument "fail".
I know there are a lot of polarized opinions on Tildes about AI, but I've consistently found Ed's opinions well-reasoned, if a bit over-the-top in writing style. I somehow seem to deal with AI boosters every day, so I found this article pretty useful in providing quick and easy counterarguments to the common AI booster talking points.
Warning: Ed uses quite a bit of coarse language, and this is a REALLY long article. I think it's worth reading in its entirety, though, so I figured I'd share it here.
I know it's more of a theme to orient an article around, but you shouldn't argue with anyone you're actually trying to convince, or for any practical purpose. Arguments should only be for recreation.
They’re terrible - arguably counterproductive - at convincing people of things. The modal outcome of an argument is that both parties become further entrenched in their beliefs. If you argue with an “AI booster”, you will make them believe more in AI.
To be fair they did say "argue" with, not "convince them to no longer be an AI booster"
[Arguments] are terrible - arguably counterproductive - at convincing people of things. The modal outcome of an argument is that both parties become further entrenched in their beliefs.
Not if everyone reminded themselves to be open towards new thoughts. :-)
I’m not even saying that in jest, I genuinely believe if we all just gave it 5 minutes on every (important) disagreement/discussion, the world’d be a better place for it. (The linked article is an excellent read, BTW.)
The problem is that it’s extremely difficult. It’s so hard to accept new viewpoints and argumentative positions as “potentially not wrong” in your head, even when you have in the back of your mind “be open-minded be open-minded be open-minded” running in a loop (which, let’s face it, unfortunately isn’t even the case for a lot of people). Especially so for topics you’re passionate about.
And thus people default to disagreement.
I mean, it's not like I'm saying it's impossible to convince people of things. But the framing has to be one of cooperation, not an adversarial one. OP's article is dripping with disdain for "AI boosters" - he gives them a special name, he describes them with language like
They sneer and jeer and cry constantly that people are not showing adequate amounts of awe when an AI lab
Would anyone listen to someone like this? Again, it's fine if it's just for recreation. But I think people need to understand that arguments are for fun. There are no productive arguments. Sometimes being mad and yelling at people is fun. I like my fair share of arguing. But I do it knowing it's just recreation.
If you did want to change people's opinion, then there's no end of better ways.
I suppose my point is that arguments and debates are never productive as anything but entertainment, and you can't ever expect them to be. Conversations can be productive. A conversation between people with viewpoints that differ in some areas can result in changed minds. Not arguments.
Would anyone listen to someone like this?
It's 2025, so yes. That is how modern discourse works at all levels of US society. We clearly aren't making articulate and careful, nuanced talking points anymore in policy.
Arguments should only be for recreation.
But people in politics, in business pitches, and in workplaces will need to understand the arguments and make sure to express themselves accordingly. There is definitely money on the line at this point.
Man, this is a large amount of emotion to put into an article. I hope he's doing ok. I'm sure his original article got him a lot of notoriety from the worst types of people. I wish more of the Booster Quips had quotes from where he's seen them? And that there were fewer ad hominems and logical fallacies in this. Like, if there were less violent anger in this article and more empathy, then I'd definitely be more willing to listen to these arguments. As it is, I'm just hoping that he's ok and that the world isn't too on fire for him.
Exemplary
I reject this perspective entirely. Now is the time to flail our arms and make as much noise as possible, as emotionally as possible. With data centers being built on thousands of acres of land across a number of politically and financially vulnerable places on the planet, all to serve a demographic of users that simply doesn't exist, because big tech is too big to fail now that it has embedded its own and AI's success within people's retirement funds, now is the time to be angry. Being soft with people whose sole purpose is to manipulate you in order to squeeze as much value as possible out of you and your surroundings, let alone the people they've tricked into doing it for free for them, is how we got into this mess.
I have not read the article yet, but I listen to Better Offline regularly, so I'm familiar with Ed and his style. I appreciate that he's willing to point at insanity and call it what it is. Though sometimes it feels as though it borders on unrestricted hate for tech, I think that's mostly a product of a broken news establishment that we've been subjected to for decades: pretending to deliver the news without bias in some kind of false performance of objectivity, while the ads they can run on the site directly affect the money that they make.
This technology is a mild step up in utility with terrible repercussions and an incredibly slim use-case, but it's being sold to us as a solution to all of the problems that the people who made it helped create in the first place. But now with even less oversight. A lot of people need to just get angry about it; they need to read something that motivates them to look further into the topics that will affect them most, and I welcome that in whatever flawed format it may come in.
In conversations about AI I like to add: It's also an environmental nightmare because of the huge amount of electricity involved.
We're not at a place, globally, where we should be going all in on new industries that use unprecedented amounts of power.
Once we've completed the transition to greener energy it's a different conversation.
An environmental nightmare for sure! Let alone the resource usage to build these data centers.
But even more importantly: at a time when we increasingly face regulations on natural gas usage and personal decisions to reduce gasoline usage, data centers are consuming MASSIVE amounts of power from the grid, to the point where the grid simply can't supply enough power and prices get driven up so utilities can expand capacity. If you've switched your heating to a heat pump in the last few years, or bought an EV, or switched to an induction stove, this should make you MAD. You made a responsible choice to reduce your personal fossil fuel usage. And now billionaires like Bezos and megacorporations are driving up your heating, cooking, and transportation bills.
At least Apple plans to build power plants for a couple of their data centers. Frankly, these companies should be required to do so -- in a renewable way -- as a public benefit. They can clearly afford it, as they all simultaneously light tens or hundreds of billions of dollars on fire to fund their AI efforts. Meta spent that a few times over in the past few years for their (now obviously failed) Metaverse effort. Just imagine if we could put that kind of money into healthcare or green energy or education instead.
prices get driven up so utilities can expand capacity
Do you have a source for this? I thought residential and industrial energy pricing was normally separate, and also that datacentres are not that big of a contributor to energy prices in general (~1% of US energy consumption is still fairly small)
Just imagine if we could put that kind of money into healthcare
Annual health expenditures stood at over 4.8 trillion U.S. dollars in 2023
AI spending is still sub-10% of US healthcare spending, it really wouldn't do that much
Source: https://apnews.com/article/electricity-prices-data-centers-artificial-intelligence-fbf213a915fb574a4f3e5baaa7041c3a
No doubt our healthcare system is deeply deeply inefficient. But I'm sure even putting that money into something like libraries, which get a lot less funding, would be much more worthwhile. My point isn't that the companies spending on AI should funnel the money into one monolithic thing like healthcare: I'm just trying to say that most money spent on 'AI' is a waste, and spending the money on pretty much anything else (within a valid business context) would likely be a net improvement for society.
But the same applies to consultants and lawyers and many other business spending categories, so I suppose it makes little difference.
Yeah, the general principle that this much spending is inefficient and could be diverted is true; my reply was a bit blithe. Though now a sizeable chunk of this spending is done through debt, so it would probably be better if the money weren't spent at all, and then we wouldn't have the risk of a financial crisis.
Mind elaborating on this? From all the studies and calculations that I've seen I've always come away with the impression that its energy usage is within the ballpark of a large forum (like reddit) and way way lower than something like a video hosting service, such as tiktok or youtube. And that in terms of output it is more energy efficient than a human creator sitting at his/her computer. So if that's true, seems like the only way AI could be considered an environmental disaster is if one presupposes AI is inherently bad.
I've seen a lot of different estimates, and really I don't think anyone knows exactly how much power AI is using, but I think everyone agrees that it's a lot. Stacks of high-end GPUs draw a lot of power, and the process of training models (which is happening nonstop right now), plus building and running large datacenters that wouldn't otherwise be needed, is power use on top of what any particular query costs.
I don't think there's any way that doing a task with AI assistance (from a large cutting edge model) is cheaper in terms of power than doing the same task unaided.
Here's a review from MIT:
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
The question is not whether it's "a lot", the question is whether it's so bad compared to other online services that it deserves to be called an environmental nightmare that should be rallied against.
that wouldn't otherwise be needed
Well what do you mean by that? We don't need tiktok, I think we need it way less actually, and yet no one is speaking out about the horrors of how much compute is required to deliver videos to millions of people. To me it feels like outcry against dirty polluting container ships when they're actually insanely efficient compared to other transportation methods and a pretty small factor when it comes to emissions as a whole.
I don't think there's any way that doing a task with AI assistance (from a large cutting edge model) is cheaper in terms of power than doing the same task unaided.
This is CO2 emissions, not power, but
https://www.nature.com/articles/s41598-024-54271-x
AI is creating enough new power demand that it is slowing down the green energy transition. The demand is also driving up costs for everyday people. This has been well covered by reputable outlets if you're curious. The demand is projected to grow dramatically, meaning that we're only at the beginning of its impact. Which is the time to talk about whether or not it's a good idea.
As far as Tiktok goes, and datacenters in general, I'd be excited to see a serious look into the impact on society and the carbon footprint they have. It would be great if they were required to offset their impact, especially in small communities. But AI in particular is causing an explosion in datacenter expansion that makes video delivery look tame by comparison
About the article you linked, it's a bit misleading:
For the human writing process, we looked at humans’ total annual carbon footprints, and then took a subset of that annual footprint based on how much time they spent writing.
So it's not actually a carbon footprint comparison at all. For the human side of the estimate they just looked at how much of a footprint the human has while existing, regardless of what they're doing. They're going to have that footprint either way. It's darkly funny in the sense that the only way AI would offset that footprint is if its use caused the human not to have to exist at all.
Reading the article it sounds like a hype piece, they go off on tangents about how great AI is that appear to have nothing to do with the analysis or the carbon footprint premise. So weird that it's published by the Nature website.
Yeah. I don't think I even have a problem with LLMs themselves. The tech is neat.
The problem is that it's 2025 and this feels like the culmination of a decade-plus of Big Tech's betrayal of society as it tries to capture everything with reckless abandon. Now their golden ticket came in, and they still want to try and cash it in as if these are still the cool, hip 2005 companies that were truly making useful tools. When in reality the good faith waned years ago.
It also, perhaps coincidentally, mirrors 2025's US society at large: breaking a lot of things to claim victories, making as many deals as possible (no matter how good or even profitable the deals actually are), completely disregarding the working class in the process. All while lobbying for laws to make life harder, too.
I'm just a bit tired of all the dishonesty, and if nothing else 2025 taught me that we're not in times where we can make slow, reasoned arguments to appeal to such people. We need to be loud. Maybe we can bring back decorum once society isn't on the brink of collapse.
I'm glad someone else marked this as exemplary too. I've seen many anti-AI comments, but out of them all this has been the most persuasive. I feel like you got to the heart of Ed's article in the OP and I'm very grateful for your comment.
It's justified anger, in my opinion. I'll share one example. I witnessed the following conversation on YouTube and almost fell for the hoax.
In one test, conducted by an A.I. safety research group that hooked GPT-4 up to a number of other systems, GPT-4 was able to hire a human TaskRabbit worker to do a simple online task for it — solving a Captcha test — without alerting the person to the fact that it was a robot. The A.I. even lied to the worker about why it needed the Captcha done, concocting a story about a vision impairment.
Roose, along with his co-host Casey Newton, would go on to describe this example at length on a podcast that week, describing an entire narrative where “the human actually gets suspicious” and “GPT 4 reasons out loud that it should not reveal that [it is] a robot,” [something seems to be missing here about how GPT-4 lied to the human] at which point “the TaskRabbit solves the CAPTCHA.” During this conversation, Newton gasps and says “oh my god” twice, and when he asks Roose “how does the model understand that in order to succeed at this task, it has to deceive the human?” Roose responds “we don’t know, that is the unsatisfying answer,” and Newton laughs and states “we need to pull the plug. I mean, again, what?”
I made a mental note that I should look into it to make sure there were no misunderstandings or misrepresentations in there. But I never got around to actually doing so, and I never would have thought that the possible misrepresentation would be as blatant as it is.
It is transparently, blatantly obvious that GPT-4 did not "hire" a Taskrabbit or, indeed, make any of these actions — it was prompted to, and they do not show the prompts they used, likely because they had to use so many of them.
Anger is an appropriate reaction when someone deliberately misleads you in an attempt to exploit. When the scale of this attempt is on the level of an entire society, not only is anger appropriate, it becomes necessary. Some things are worse than others and expressing anger is a way to communicate that this is one of them.
I don't understand what these quotes are saying. What was it supposed to do? What was the deception? Who was it lying to?
Thanks for asking. The description was indeed overly vague. I added more context in the first quote.
I also realised that there's a part missing from the narration of events and added a remark about that. GPT-4 allegedly was reasoning "out loud" (= so that the researchers could see) about the need to deceive the human, then told the human it was a fellow human with impaired vision to justify why it needs someone else to solve a CAPTCHA, and this got the actual human to solve the CAPTCHA.
I remember feeling astonished and in awe, as well as worried, after watching the conversation where these events were (inaccurately, as it turns out) described. Exactly the kinds of emotions that can make material go viral. Luckily I was very busy at the time and only told one person about it, my AI skeptic friend who was unimpressed even without looking further into it, and that was that.
The thing is, we can't be hypervigilantly fact-checking stuff all the time. We are going to have to have some baseline assumption about things that will err on one side or another, and accept that occasionally that assumption may be wrong. Up until now I've been trying to remain open and curious about AI, but after seeing how deceitful the surrounding communication is, I'm likely going to pick a side and just start outright rejecting any AI hype I come across.
I have better things to do with my time than fact-checking these deliberately misleading morons.
I think he's rightfully upset, and part of his style is to not water down his anger. AI boosters are insufferable and should be called out far more often and with more intensity than they currently are.
I honestly don't have the energy to enroll in this debate. I just want to point out that this:
Man, this is a large amount of emotion to put into an article. I hope he's doing ok.
Rubs me the wrong way, because it is a dismissive rhetorical argument. Not one of substance. You are effectively setting the stage that the other person (author in this case) has emotional issues and therefore everything they are saying is invalid.
The reason I am pointing this out is that just a few sentences over, you start talking about ad hominems and logical fallacies yourself.
Appreciate you pointing that out and honor that you don't want to participate in a discussion around it. You bring up a valid critique of my comment and continue to encourage me to think harder on my comments. Thank you.
I haven't read the whole thing, but he's misunderstood the MIT report that he references at the beginning. Quoting from Ed's piece:
An incorrect read of the study has been that the "learning gap" that makes these things less useful, when the study actually says that "...the fundamental gap that defines the GenAI divide [is that users resist tools that don't adapt, model quality fails without context, and UX suffers when systems can't remember]." This isn't something you learn your way out of. The products don't do what they're meant to do, and people are realizing it.
He's wrong about what the "learning gap" means here. He's referring to users learning rather than, as the report talks about, the ML systems adapting to workflows and processes. If you just blithely slather ChatGPT onto an internal system, it starts fresh each time with no context about the internal system, and doesn't evolve over time. By contrast, the report says:
Organisations on the right side of the GenAI Divide share a common approach: they build adaptive, embedded systems that learn from feedback […] the organizations and vendors succeeding are those aggressively solving for learning, memory, and workflow adaptation, while those failing are either building generic tools or trying to develop capabilities internally.
So anyway, I stopped reading. It's 16k words and his interpretation of his first source doesn't inspire enough confidence to look over the rest.
Are you sure you're understanding his interpretation? It seems to me like the 'learning gap' refers to both a gap between user communication to the model and the model's data about the request. It's a gap between those two things! So I'm not sure you can dismiss his entire argument based on that reasoning, unless I'm misunderstanding, too.
It is a very long article, and it's written in a very (Ed)gy way, so I understand if folks don't want to spend time reading through the entire thing.
In the report, the "learning gap" refers to "tools that don't learn, integrate poorly, or match workflows". Ed talks about user experience with the tools before saying "this isn't something you learn your way out of" (implying, at least to me, that the users would be doing the learning—or possibly that these tools cannot learn, but this interpretation would also be very wrong).
But in any case, he's very clear there and in surrounding paragraphs that his interpretation of the report is that AI is not having an effect on companies because it is fundamentally incapable of doing so:
Generative AI isn't transforming anything, AI isn't replacing anyone, enterprises are trying to adopt generative AI but it doesn't fucking work, and the thing holding back AI is the fact it doesn't fucking work.
The products don't do what they're meant to do, and people are realizing it.
But this just isn't what the report says. The purpose of the report is very literally to evaluate what companies who have seen returns from AI tooling are doing differently to those who haven't: the conclusion being that many businesses are trying to throw a generic AI solution at some internal process, shouting "AI!", and hoping for returns. This does not work. Others are actually seeing returns, because they're taking the time and effort to produce tooling which fits the processes which they already have.