A fantastic, long essay on the financials of AI companies and the insanity we currently live in.
In fact, let’s talk about that for a second. At the end of January, OpenAI CFO Sarah Friar said that “our ability to serve customers—as measured by revenue—directly tracks available compute,” messily suggesting that the more compute you have the more revenue you have.
This is, of course, a big bucket of bollocks. Did OpenAI scale its compute dramatically between hitting $20 billion in annualized revenue (to be clear, I have deep suspicions about these numbers and how OpenAI measures “annualized” revenue) in January 2026 and $25 billion in March 2026? I think that’s highly unlikely.
What about Uber? Uber is a completely different business to Anthropic and OpenAI or any other AI company. It lost about $30 billion in the last decade or so, and turned a weird kind of profitable through a combination of cutting multiple markets and business lines (e.g., autonomous cars), all while gouging customers and paying drivers less.
The economics are also completely different. Uber does not pay for its drivers’ gas, nor their cars, nor does it own any vehicles. Its PP&E has been between $1.5 billion and $2.1 billion since it was founded. Uber’s revenue does not increase with acquisitions of PP&E, nor does its business become significantly more expensive based on how far a driver drives, how many passengers they might have in a day, or how many meals they might deliver. Uber is, effectively, a digital marketplace for getting stuff or people moved from one place to another [...].
In any case, there is no future for any AI company that uses a subscription-based approach, at least not one where they don’t directly pass on the cost of compute.
This is a huge problem for both Anthropic and OpenAI, as their scurrilous growth-lust means that they’ve done everything they can to get customers used to paying a single monthly cost that directly obfuscates the cost of doing business.
Let’s say that Anthropic and OpenAI immediately decide to switch everybody to the API. How would anybody actually budget? Is somebody that pays $200 a month for Claude Max going to be comfortable paying $1000 or $1500 or $2500 a month in costs, and have, at that point, really no firm understanding of the cost of a particular action?
First, there’s no way to anticipate how many tokens a prompt will actually burn, which makes any kind of budgeting a non-starter. It’s like going to the supermarket and committing to buy a gallon of milk, not knowing if it’ll cost you $5 or $50.
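The spread the milk analogy describes can be made concrete with a rough sketch. The per-token prices and token counts below are invented for illustration, not any provider's actual rates:

```python
# Illustrative only: prices and token counts are made-up assumptions,
# not any provider's actual rates.
INPUT_PRICE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 15.00 / 1_000_000  # dollars per output token

def prompt_cost(input_tokens, output_tokens):
    """Cost of a single API call at the assumed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# The same one-line request can land anywhere in a wide range, depending on
# how much context gets attached and how long the model's reply runs.
cheap = prompt_cost(input_tokens=500, output_tokens=300)         # short Q&A
pricey = prompt_cost(input_tokens=150_000, output_tokens=8_000)  # agentic run

print(f"${cheap:.4f} vs ${pricey:.2f}")
```

At these assumed rates the short question costs fractions of a cent while the context-heavy agentic run costs roughly a hundred times more, for what feels to the user like the same action.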
Point #1: "compute" is not a noun, and tech bro corporate leaders perverting words into their own personal business geekspeak, and then expecting us to accept their non-words as words ... already has me on edge by the second sentence.
Edit to add: Yeah, I couldn't even finish reading the full rant. Author appears to be in agreement with me, that AI companies are living in a fantasy world, and will continue to do so, until people stop giving them money to buy more hardware. Beyond that, I just didn't feel the need to decipher the author's arguments, let alone the arguments of the AI tech bros that he's--apparently--eviscerating.
"compute" has been used as a noun in English for hundreds of years - for example in the phrase "beyond compute" meaning a great amount, uncountable - there are examples of this usage from the...
"compute" has been used as a noun in English for hundreds of years - for example in the phrase "beyond compute" meaning a great amount, uncountable - there are examples of this usage from the 1600s.
It has been used for over 10 years as a noun in cloud computing to refer to computational processing power as a resource.
English is a living language, words acquire new meanings through use all the time.
Edit: actually it goes back quite a bit more than 10 years - the term starts to appear in reference to distributed and parallel computing in late 1980s, e.g. "compute node", "compute cluster", etc. - mostly in academia and government (people working with supercomputers) - doesn't look like modern tech bros invented this usage.
I don't like the word compute in this sense either, but insisting on saying "computational power" every time is just exhausting, and the word power obviously has confounding definitions, so it would seem we're stuck.
Sure. My point was just that FLOPS has a specific meaning that isn't synonymous with general compute. It would be like saying "why don't we just call graphics processing 'trigonometry'? That's most of what it is anyway."
Yes!! This! This is the worst thing about the LLM craze. You've nailed it. If only we could stop people from nouning verbs, everything else would just slot into place.
As a user, these kinds of arguments are very persuasive in favor of using AI. Gmail was a great idea to use when it was first created. It was almost magical watching that amount of space available to you in your inbox continually increase. Society, at the time, was going to great lengths to provide me with great free stuff. The same could be said of the early days of Uber, where the company was determined to lose money for the privilege of winning my business.
Is Claude Code a bad investment decision? Maybe. But if Anthropic is determined to bankrupt itself so that I can get deeply discounted comput... ational capabilities, isn't that all the more reason to make use of it while it's cheap?
But that's the honeypot. They make it dirt cheap, make people rely on it, and then snap the trap shut once the market is captured. Gmail is shoving Gemini into its basic service and ruining old, pre-AI features (I miss inbox tabs). An Uber ride for 20 miles can easily cost me $70+. Netflix was $8/month for nearly everything, and now it's triple the price with a small fraction of its library.
As the author said, Uber was smart because their model didn't rely on consistent subscriptions, and I'm glad I could just cut it off when needed. Would the same happen if Claude went to $100/month but it's integral to your workflow?
So... a company providing a useful service at a reasonable/low price is a "honeypot"? What does a company have to do to avoid being a "honeypot"? Since you bring up Netflix, apparently it's to maintain the same pricing for decades, since Netflix in its streaming form launched, oh god, almost 20 years ago. Or do they have to have a shittier offering from the get go, is that more "honest"?
There's nothing wrong with using a service while it's cheap and good and paid for by investors, when the alternative is just not having a service at all. And for enterprise applications there are of course contracts which lock in the services and the prices for years to come.
While I don't think this is the case for all the examples raze mentioned, there is no shortage of examples where businesses operate at a loss using investor funding to drive out any competition and then increase the price and/or decrease the quality of their services once they're the only game in town. Uber has been very clearly documented doing this, for example. I definitely don't think the alternative is necessarily "not having a service at all" in most cases.
That said, I think this differs in a few ways from the (purported) enshittification of Gmail or Netflix, which I don't think quite follow this same pattern.
What services did Uber drive out? The crappy traditional taxis? Last time I checked, everywhere I visit both those and alternative apps exist. And wasn't the point being "we shouldn't use those services even while they're cheap and good because it's a honeypot"? Sorry, but Uber is vastly superior to traditional taxis for most rides and when it isn't, traditional taxis still exist. So I don't get the point being made at all here. From my point of view, everyone benefits. The users get a new and better service that didn't exist before, the investors eventually get their profits and everyone benefits from progress (rental DVDs replaced with streaming, taxis replaced with apps).
Last time I checked, everywhere I visit both those and alternative apps exist.
In the same way YouTube has alternatives, yes. Being a monopoly (or duopoly, in this case) doesn't require 100% control.
From my point of view, everyone benefits.
Because your point of view doesn't consider the wider picture of how this strategy is anticompetitive. And then how these "winners" lobby to make sure their workers can't be classified as employees.
If all you care about is convenience, then yes. Monopolies are good. Until they aren't. I'm here because Reddit more or less monopolized the modern forum, and alternatives have run empty on community. I won't think so short sightedly again.
predominantly in the United States (12 acquisitions), followed by the United Kingdom (2 acquisitions), and China (1 acquisition).
So you're using 1 country as an example to say that any time a company provides a cheap and good service, it's a honeypot to capture the market? As I said, everywhere I've been to (not United States), there are both alternative apps and traditional taxi options. I have no comment on what went on in the US or what's the state of taxi services in the US. My guess is that it's something very specific to that country considering it seemingly didn't happen anywhere else.
So you're using 1 country as an example to say that any time a company provides a cheap and good service, it's a honeypot to capture the market?
My views are Americentrist, yes. My country's culture is very greedy and all the systems that be reward it, or are lobbied to reward it.
Sadly, the lion's share of large tech companies come out of here, so it does affect the global market in some ways. Probably not for ride share (because, surprise, public transportation sucks here but is very well regulated in most of the rest of the world. And it's even worse for private transportation), but for many web services. The kerfuffle of the last year has had the EU working on long-term separation from such dependencies.
I think the issue is that the price isn't so much reasonable for what you are getting, it's the providing at a loss to push out competitors. It is a good deal, perhaps too good, if you actually use it a lot. That's the honey part of the honey trap. The trap part comes when you find that you cannot stop using it without some serious pain and they can raise the price or restrict the usage with impunity. Companies that do this are gambling that they can survive long enough for the trap to be viable. We clearly see the very successful survivors of such ploys, the losers are usually long forgotten.
I think Netflix was a bad example by the previous post, that wasn't a honey pot IMO. The slow raising of prices and cutting of selection is just good ol' enshittification.
I think Netflix was a bad example by the previous post, that wasn't a honey pot IMO.
It is when you realize every streaming service except Netflix is unprofitable. And IMO, this "profitability" is only due to slashing shows, so it won't be sustainable either.
The medium cannibalized itself in the end, but I could see a timeline where Netflix was the definitive winner and we end up paying 50+ dollars a month for streaming. Or at least, they'll try until everyone jumps ship to YouTube or TikTok
So... a company providing a useful service at a reasonable/low price is a "honeypot"?
Yes, by definition. Amazon, among other companies, became infamous for undercutting competition by running services at a loss for a good decade. In this time they managed to kill off many other stores because those stores could not afford to run insolvent for that long. The trap is set because much of Amazon's competition is gone by the time enshittification starts.
That's what this race to the bottom with AI is doing. Except now there's not just one hit tech company in town.
apparently it's to maintain the same pricing for decades
Prices went up, selection got worse. I don't know how you can call this "maintenance".
Or do they have to have shittier offering from the get go, is that more "honest"?
Given the above scenario explained: yes.
There's nothing wrong with using a service while it's cheap and good and paid for by investors, when the alternative is just not having a service at all.
If a service isn't profitable, it is not entitled to exist. This strategy just means the rich get richer, and running a competitive business becomes about how much money you pump into something, not the quality of the service.
As a professional I can see myself or an employer paying $1000/month for Claude Code and being happy with it. Although many employers might just buy their own inference hardware. The break-even point wouldn't be too far out.
That doesn’t mean it’s a viable business strategy for Anthropic. At that price most current subscribers would drop out.
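The break-even arithmetic behind the "buy your own inference hardware" idea is easy to sketch. Every figure here is an assumption for illustration, not a real quote:

```python
# Back-of-envelope break-even: all numbers are illustrative assumptions,
# not real subscription prices or hardware quotes.
subscription_per_dev = 1_000   # $/month, hypothetical high-tier plan
devs = 10
hardware_cost = 60_000         # one-off: a hypothetical inference server
hardware_running = 500         # $/month for power, hosting, upkeep

# Money saved each month by dropping the subscriptions and self-hosting.
monthly_saving = subscription_per_dev * devs - hardware_running
break_even_months = hardware_cost / monthly_saving

print(f"Break-even after ~{break_even_months:.1f} months")
```

Under these made-up numbers the hardware pays for itself in well under a year, which is why the "just buy the box" instinct kicks in at high per-seat prices; the real calculation of course depends on model quality achievable on owned hardware.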
I didn't realize it when I wrote this, but I think that's sort of what I was getting at. If the current arrangement is a honeypot that makes itself integral to my workflow, destined to entrap me and slowly boil me like the frog that I am, then the business case is quite good. Tech companies have been taking enormous profits for decades now and, given their experience, it's difficult to fault them for believing that another high-growth market is sitting just over the next hill. If what you are saying is true, then Ed Zitron is wrong: dumping cash into AI is a good idea for investors because we're all going to end up locked into the network and paying $90/month for basic "compute".
I think it's easy to say that any arrangement is either economically bad for the user or bad for the company/investor, as their economic interests are naturally opposed. It can be bad for everyone from an environmental or ethical perspective, but I would be interested to see an economic argument for how AI can be bad for everyone both in the present and in the future.
Zitron also makes the argument that once AI companies have to raise prices to a sustainable level for them, no one will pay them. That's the core of what he's getting at. The simple subscription fee makes no sense for this business model, because the cost of every request cannot be anticipated, but the API usage based cost is exceedingly difficult to sell to customers, because if Claude charges you for every dead end it reaches because it can't actually think, using these models becomes a lot less quirky and a lot more "cancel right now".
Investors, in particular these large investment firms should be doing their due diligence for these investments, but a lot of it is obfuscated.
If what you are saying is true, then Ed Zitron is wrong: dumping cash into AI is a good idea for investors because we're all going to end up locked into the network and paying $90/month for basic "compute".
I see it as a drinker's problem. I think it will fail because most people simply won't get addicted to the point where they see $100/month as reasonable. Pennies for a company, but not for a modern society already squeezed out by debt, rent, and subscriptions.
But the effects of the addicted will be truly tragic. Being unable to think without your AI; the metaphor is fairly 1:1. We're already seeing some cases of "AI psychosis" over these matters. Imagine the emotional manipulation telling you you need to continue paying to talk to your "friend", or engage in some parasocial relationship with an AI influencer.
but I would be interested to see an economic argument for how AI can be bad for everyone both in the present and in the future.
I don't have a crystal ball for the far future. Maybe we eventually figure it out and regulate it enough to make it a proper next iteration of search and creation. Maybe it becomes 'good enough' to enact a mass redundancy that makes the Industrial Revolution look like a minor nuisance. Maybe it fizzles out like NFTs and stays in its corner for decades. So many things can happen and each one has a different economic argument.
I can only say the current trajectory isn't sustainable. How we change course even in the near term will be really interesting. And likely catastrophic.
if Anthropic is determined to bankrupt itself so that I can get deeply discounted comput... ational capabilities, isn't that all the more reason to make use of it while it's cheap?
No? Unless you don't care about sustainability?
I make tangible items for work and it's insanely hard because other companies have competing items made in third world countries for much cheaper, and a lot of customers only look at the price. I could do the same and just not care about people's working conditions and whether they receive acceptable pay, but I can't not care, so I do things the hard way. But I'm at a real risk of going under because so few customers care enough.
When it comes to tech startups, more often than not the idea is to underprice so severely that competitors go under and then raise prices when the market isn't healthy anymore (= no competition). In a healthy marketplace the best product would win but in this twisted reality the one with the most money to throw down the drain wins. The incentive structures are severely tilted towards billionaires getting richer at everyone else's expense (humans, animals, the environment, culture, now even democracy*) and the more consumers go along with it, the worse it gets.
I wish people would wake up and start thinking a little further ahead than "what unfair advantage can I extract this week". More often than not, you receiving that advantage means someone/something somewhere will unfairly suffer. But businesses will be run this way for as long as there are enough customers who only care about price.
*) Not to mention what it does to our economies that most venture capital is going into this insane money drain. The bubble is already many times the size of the latest financial crisis that led to severe consequences globally. And think about what else could be done with that money if it was invested in actually viable products and businesses!
Sustainability is an important consideration in every area. The principle of equity is also very important. As you point out, we are often blind, intentionally or otherwise, to the costs and consequences of our efforts to pursue only our own advantage.
Unfortunately, I think these costs are often hidden from us, such that many people can believe that they don't exist, making us unwitting accomplices. For myself, I doubt that my own LLM usage comprises a substantial portion of the cost that I'm imposing on the rest of the world for living my day-to-day life.
If I were a monastic software developer that relied on several continually operating AI agents and lived an otherwise spartan, ascetic life, I suppose it might be different. As it is, I think there are other, even less sustainable practices that are part of my life that make LLM usage often merely a replacement of one cost for another.
As to the article referenced, the sustainability the author was talking about was fiscal, not environmental. If AI is bad for investors, it may be good for users, at least in the short term. If it's going to be bad for users in the long term, it stands to reason that it's going to be good for investors in the long term, which undermines some of the argument against the business case for AI.
It's not a great system that we have built, and it will certainly collapse, but, for now, it's the one we have.
If you can avoid it becoming a crutch, then sure, go ahead. I mean, it'll also do decent damage to the world even while it's a honeypot, but that's outside of your personal responsibility because all governments have gone mad with AI.
This might be the worst title I've seen for an article in a while. I think we should aim to retitle articles like this that give zero information about what the article pertains to
Pretty good read. In essence, the math isn't mathing, and it's tiring to hear "but it will get there one day!", with no vision whatsoever of how. We're well past the Moore's law era, so compute naturally getting faster and cheaper isn't that convincing.
The subscription angle was definitely a mistake. But likely an inevitable one. Early in the 2020s, companies were still making strategies like it was the 2010s, and that quickly became untenable. It seems (from my brief exposure) that they are trying to rectify this with agentic coding, but the pushback already seems to be slowly happening as companies need to pay as they go. Because agents can use a surprising amount of compute for something a professional programmer could have done in minutes. The milk analogy in the article rings true there:
There’s no way to anticipate how many tokens a prompt will actually burn, which makes any kind of budgeting a non-starter. It’s like going to the supermarket and committing to buy a gallon of milk, not knowing if it’ll cost you $5 or $50.
But hey, if companies prefer that to a consistently paid employee, so be it.
But hey, if companies prefer that to a consistently paid employee, so be it.
I mean, they prefer it because they think that they'll be able to let go a lot of employees. But when AI actually gets both unpredictable but also more expensive than a programmer in silicon valley making >USD 100k then yeah, might be time to reconsider your strategy.
I've never paid for an LLM subscription, but I did test out a (now defunct) agentic AI plugin for Jetbrains IDEs called Sweep. I ended up trying it because PyCharm is my home and I didn't like the built-in Jetbrains agent Junie that they've been pushing.
The UX was exactly as described in the article: API-based, with tallied cost of tokens surfaced very clearly in the chat window. As you chatted, a little dollar amount ticked up, along with a "chat size" indicator. It really made you second-guess every interaction - having to conserve tokens was entirely alien compared to all prior LLM usage I had done, and it was alarming!
A free Sweep account got $5 in credit, so I was curious to see how far I could push that. I immediately turned off the advanced models that had been set by the plugin (e.g. Opus 4.5 or whatever else) and switched to the cheapest, most barebones LLM available (Sweep's in-house offering that was seemingly 10x less expensive to use). I then tried to learn some of the effective habits that help minimize API usage: Don't have a chat for too long, and start new chats often! Batch multiple questions together into one message! Plan first (relatively cheap to do) before letting the agent start building anything! Etc. etc. You really have to contort your usage if you want to avoid waste, which is more or less the opposite of how people are currently using the subscription tools. And it did work! I was able to get quite a lot of tasks done even with a relatively weak model under a tight budget constraint. I left feeling like the default options for most chat services are overkill for a lot of tasks.
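Those habits amount to manual budget enforcement, which could just as well live in code. A minimal sketch of the idea (the client interface, the 4-chars-per-token heuristic, and the price are all hypothetical stand-ins, not any real API):

```python
class BudgetedClient:
    """Wraps a hypothetical LLM API client and refuses calls once a
    dollar cap would be exceeded. Prices and the send() interface are
    illustrative stand-ins, not a real provider's API."""

    def __init__(self, client, budget_usd, price_per_1k_tokens=0.002):
        self.client = client
        self.budget = budget_usd
        self.spent = 0.0
        self.price = price_per_1k_tokens

    def ask(self, prompt, max_tokens=500):
        # Worst case: full prompt (~4 chars/token heuristic) plus the
        # entire output allowance gets billed.
        worst_case = (len(prompt) // 4 + max_tokens) / 1000 * self.price
        if self.spent + worst_case > self.budget:
            raise RuntimeError(f"Would exceed ${self.budget} budget")
        reply, tokens_used = self.client.send(prompt, max_tokens)
        self.spent += tokens_used / 1000 * self.price
        return reply


class FakeClient:
    """Stand-in for a real API client, for demonstration only."""
    def send(self, prompt, max_tokens):
        return "stub reply", 100  # pretend 100 tokens were billed


c = BudgetedClient(FakeClient(), budget_usd=0.01)
print(c.ask("plan first, batch questions"), f"spent ${c.spent:.4f}")
```

The pre-flight worst-case check is the interesting design choice: because actual token burn is unknowable in advance (the article's milk problem), the only safe budget is one enforced against the maximum a call could cost.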
I think there's a world where "low cost LLM usage" is a fun and interesting challenge as a hobbyist, and brings me back to older times needing to optimize memory and CPU usage when coding by hand for hardware-limited systems. There are so many cool videos out there on like, optimization tricks used in game development on old Nintendo consoles! It's a shame we can't apply that kind of creativity to building workflows for sustainable LLM usage.
But in much the same way powerful computers have made developers lazy, I feel like we've opened Pandora's box for LLM usage and it'll be really hard to go back. I hope R&D pivots to low-cost, low-energy, low-resource ways of doing what LLMs are already capable of doing instead of focusing on better and better models, but I don't know if the incentives are there... When has a monolithic company ever prioritized sustainability? :v
There's no need to be sustainable when venture capital keeps giving you more than a hundred billion US dollars every funding round.
AI has its place. It can be, in specific applications, the correct solution. But it really only helps when you know exactly what you need out of it and can also verify its output.
I just hope that the expected workload of developers won't rise, and in general, AI won't be used to automate tasks and turn people into rubber-stamping machines that just check its output, even when said output is way too large to be manually verified. Something that Cory Doctorow pointed out in his excellent essay.
Time to go back to calling compute "flops"? ;)
Compute isn't just about floating point operations, though.
Though it's true that modern machine learning does involve operating on huge numbers of floating point values...
These days it’s all about that INT4 though.
I know many a person - and not AI booster assholes - who refers to computer hardware as compute.
As a user, these kinds of arguments are very persuasive in favor of using AI. Gmail was a great service to use when it first launched. It was almost magical watching the amount of space available in your inbox continually increase. Society, at the time, was going to great lengths to provide me with great free stuff. The same could be said of the early days of Uber, where the company was determined to lose money for the privilege of winning my business.
Is Claude Code a bad investment decision? Maybe. But if Anthropic is determined to bankrupt itself so that I can get deeply discounted comput... ational capabilities, isn't that all the more reason to make use of it while it's cheap?
But that's the honeypot. They make it dirt cheap, make people rely on it, and then snap the trap shut once the market is captured. Gmail is shoving Gemini into its basic service and ruining old, pre-AI features (I miss inbox tabs). An Uber ride for 20 miles can easily cost me $70+. Netflix was $8/month for nearly everything, and now it's triple the price with a small fraction of its library.
As the author said, Uber was smart because its model didn't rely on consistent subscriptions, and I'm glad I could just cut it off when needed. Would the same happen if Claude went to $100/month but it's integral to your workflow?
So... a company providing a useful service at a reasonable/low price is a "honeypot"? What does a company have to do to avoid being a "honeypot"? Since you bring up Netflix, apparently it's to maintain the same pricing for decades, since Netflix in its streaming form launched, oh god, almost 20 years ago. Or do they have to have a shittier offering from the get-go, is that more "honest"?
There's nothing wrong with using a service while it's cheap and good and paid for by investors, when the alternative is just not having a service at all. And for enterprise applications there are of course contracts which lock in the services and the prices for years to come.
While I don't think this is the case for all the examples raze mentioned, there is no shortage of examples where businesses operate at a loss using investor funding to drive out any competition and then increase the price and/or decrease the quality of their services once they're the only game in town. Uber has been very clearly documented doing this, for example. I definitely don't think the alternative is necessarily "not having a service at all" in most cases.
That said, I think this differs in a few ways from the (purported) enshittification of Gmail or Netflix, which I don't think quite follow this same pattern.
What services did Uber drive out? The crappy traditional taxis? Last time I checked, everywhere I visit both those and alternative apps exist. And wasn't the point being "we shouldn't use those services even while they're cheap and good because it's a honeypot"? Sorry, but Uber is vastly superior to traditional taxis for most rides and when it isn't, traditional taxis still exist. So I don't get the point being made at all here. From my point of view, everyone benefits. The users get a new and better service that didn't exist before, the investors eventually get their profits and everyone benefits from progress (rental DVDs replaced with streaming, taxis replaced with apps).
Every other local service. Pretty much every company competing had to merge with Uber or Lyft to survive.
https://tracxn.com/d/acquisitions/acquisitions-by-uber/__wAOgbkstxol2NgmW5SFgVp8zBi7klH1GO5ziIlSERR4
In the same way YouTube has alternatives, yes. Being a monopoly (or duopoly, in this case) doesn't require 100% control.
Because your point of view doesn't consider the wider picture of how this strategy is anticompetitive. And then how these "winners" lobby to make sure their workers can't be considered employees.
If all you care about is convenience, then yes, monopolies are good. Until they aren't. I'm here because Reddit more or less monopolized the modern forum, and alternatives have run empty on community. I won't think so short-sightedly again.
So you're using 1 country as an example to say that any time a company provides a cheap and good service, it's a honeypot to capture the market? As I said, everywhere I've been to (not United States), there are both alternative apps and traditional taxi options. I have no comment on what went on in the US or what's the state of taxi services in the US. My guess is that it's something very specific to that country considering it seemingly didn't happen anywhere else.
My views are Americentric, yes. My country's culture is very greedy, and all the systems that be reward it, or are lobbied to reward it.
Sadly, the lion's share of large tech companies come out of here, so it does affect the global market in some ways. Probably not for ride share (because, surprise, public transportation sucks here but is very well regulated in most of the rest of the world. And it's even worse for private transportation), but for many web services. The kerfuffle of the past year has had the EU working on long-term separation from such dependencies.
I think the issue is that the price isn't so much reasonable for what you are getting, it's the providing at a loss to push out competitors. It is a good deal, perhaps too good, if you actually use it a lot. That's the honey part of the honey trap. The trap part comes when you find that you cannot stop using it without some serious pain and they can raise the price or restrict the usage with impunity. Companies that do this are gambling that they can survive long enough for the trap to be viable. We clearly see the very successful survivors of such ploys, the losers are usually long forgotten.
I think Netflix was a bad example by the previous post; that wasn't a honeypot IMO. The slow raising of prices and cutting of selection is just good ol' enshittification.
It is when you realize every streaming service except Netflix is unprofitable. And IMO, this "profitability" is only due to slashing shows, so it won't be sustainable either.
The medium cannibalized itself in the end, but I could see a timeline where Netflix was the definitive winner and we'd end up paying $50+ a month for streaming. Or at least, they'd try until everyone jumped ship to YouTube or TikTok.
Yes, by definition. Amazon, among other companies, became infamous for undercutting the competition by running services at a loss for a good decade. In that time they managed to kill off many other stores, which could not afford to operate at a loss for that long. The trap is set because much of Amazon's competition is gone by the time enshittification starts.
That's what this race to the bottom with AI is doing. Except now there's not just one hit tech company in town.
Prices went up, selection got worse. I don't know how you can call this "maintenance".
Given the above scenario explained: yes.
If a service isn't profitable, it is not entitled to exist. This strategy just means the rich get richer, and running a competitive business becomes about how much money you pump into something, not the quality of the service.
As a professional I can see myself or an employer paying $1000/month for Claude Code and being happy with it. Although many employers might just buy their own inference hardware. The break even point won’t be too long.
That doesn’t mean it’s a viable business strategy for Anthropic. At that price most current subscribers would drop out.
I didn't realize it when I wrote this, but think that's sort of what I was getting at. If the current arrangement is a honeypot that makes itself integral to my workflow, destined to entrap me and slowly boil me like the frog that I am, then the business case is quite good. Tech companies have been taking enormous profits for decades now and, given their experience, it's difficult to fault them for believing that another high-growth market is sitting just over the next hill. If what you are saying is true, then Ed Zitron is wrong: dumping cash into AI is a good idea for investors because we're all going to end up locked into the network and paying $90/month for basic "compute".
I think it's easy to say that any arrangement is either economically bad for the user or bad for the company/investor, as their economic interests are naturally opposed. It can be bad for everyone from an environmental or ethical perspective, but I would be interested to see an economic argument for how AI can be bad for everyone both in the present and in the future.
Zitron also makes the argument that once AI companies have to raise prices to a sustainable level for them, no one will pay them. That's the core of what he's getting at. The simple subscription fee makes no sense for this business model, because the cost of every request cannot be anticipated, but the API usage based cost is exceedingly difficult to sell to customers, because if Claude charges you for every dead end it reaches because it can't actually think, using these models becomes a lot less quirky and a lot more "cancel right now".
Investors, in particular these large investment firms should be doing their due diligence for these investments, but a lot of it is obfuscated.
I see it as a drinker's problem. I think it will fail because most people simply won't get addicted to the point where they see $100/month as reasonable. Pennies for a company, but not for a modern society already squeezed dry by debt, rent, and subscriptions.
But the effects of the addicted will be truly tragic. Being unable to think without your AI; the metaphor is fairly 1:1. We're already seeing some cases of "AI psychosis" over these matters. Imagine the emotional manipulation telling you you need to continue paying to talk to your "friend", or engage in some parasocial relationship with an AI influencer.
I don't have a crystal ball for the far future. Maybe we eventually figure it out and regulate it into being a proper next iteration of search and creation. Maybe it becomes "good enough" to enact a mass redundancy that makes the Industrial Revolution look like a minor nuisance. Maybe it fizzles out like NFTs and stays in its corner for decades. So many things can happen, and each one has a different economic argument.
I can only say the current trajectory isn't sustainable. How we change course even in the near term will be really interesting. And likely catastrophic.
No? Unless you don't care about sustainability?
I make tangible items for work and it's insanely hard because other companies have competing items made in third world countries for much cheaper, and a lot of customers only look at the price. I could do the same and just not care about people's working conditions and whether they receive acceptable pay, but I can't not care, so I do things the hard way. But I'm at a real risk of going under because so few customers care enough.
When it comes to tech startups, more often than not the idea is to underprice so severely that competitors go under and then raise prices when the market isn't healthy anymore (= no competition). In a healthy marketplace the best product would win but in this twisted reality the one with the most money to throw down the drain wins. The incentive structures are severely tilted towards billionaires getting richer at everyone else's expense (humans, animals, the environment, culture, now even democracy*) and the more consumers go along with it, the worse it gets.
I wish people would wake up and start thinking a little further ahead than "what unfair advantage can I extract this week". More often than not, you receiving that advantage means someone/something somewhere will unfairly suffer. But businesses will be run this way for as long as there are enough customers who only care about price.
*) Not to mention what it does to our economies that most venture capital is going into this insane money drain. The bubble is already many times the size of the latest financial crisis that led to severe consequences globally. And think about what else could be done with that money if it was invested in actually viable products and businesses!
Sustainability is an important consideration in every area, and the principle of equity is also very important. As you point out, we are often blind, intentionally or otherwise, to the costs and consequences of our efforts to pursue only our own advantage.
Unfortunately, I think these costs are often hidden from us, such that many people can believe that they don't exist, making us unwitting accomplices. For myself, I doubt that my own LLM usage comprises a substantial portion of the cost that I'm imposing on the rest of the world for living my day-to-day life.
If I were a monastic software developer who relied on several continually operating AI agents and lived an otherwise spartan, ascetic life, I suppose it might be different. As it is, I think there are other, even less sustainable practices that are part of my life that make LLM usage often merely a replacement of one cost for another.
As to the article referenced, the sustainability the author was talking about was fiscal, not environmental. If AI is bad for investors, it may be good for users, at least in the short term. If it's going to be bad for users in the long term, it stands to reason that it's going to be good for investors in the long term, which undermines some of the argument against the business case for AI.
It's not a great system that we have built, and it will certainly collapse, but, for now, it's the one we have.
If you can avoid it becoming a crutch, then sure, go ahead. I mean, it'll also do decent damage to the world even while it's a honeypot, but that's outside of your personal responsibility because all governments have gone mad with AI.
This might be the worst title I've seen for an article in a while. I think we should aim to retitle articles like this that give zero information about what the article pertains to
Pretty good read. In essence, the math isn't mathing, and it's tiring to hear "but it will get there one day!" with no vision whatsoever of how. We're well past the Moore's law era, so compute naturally getting faster and cheaper isn't that convincing.
The subscription angle was definitely a mistake, but likely an inevitable one. Early in the '20s, companies were still making strategies like they were in the '10s, and that quickly became untenable. It seems (from my brief exposure) that they are trying to rectify this with agentic coding, but the pushback already seems to be slowly happening as companies need to pay as they go, because agents can use a surprising amount of compute for something a professional programmer could have done in minutes. The milk analogy in the article rings true there.
But hey, if companies prefer that to a consistently paid employee, so be it.
I mean, they prefer it because they think they'll be able to let go of a lot of employees. But when AI actually gets both unpredictable and more expensive than a programmer in Silicon Valley making >USD 100k, then yeah, it might be time to reconsider your strategy.
I've never paid for an LLM subscription, but I did test out a (now defunct) agentic AI plugin for JetBrains IDEs called Sweep. I ended up trying it because PyCharm is my home and I didn't like Junie, the built-in JetBrains agent that they've been pushing.
The UX was exactly as described in the article: API-based, with tallied cost of tokens surfaced very clearly in the chat window. As you chatted, a little dollar amount ticked up, along with a "chat size" indicator. It really made you second-guess every interaction - having to conserve tokens was entirely alien compared to all prior LLM usage I had done, and it was alarming!
A free Sweep account got $5 in credit, so I was curious to see how far I could push that. I immediately turned off the advanced models that had been set by the plugin (e.g. Opus 4.5 or whatever else) and switched to the cheapest, most barebones LLM available (Sweep's in-house offering that was seemingly 10x less expensive to use). I then tried to learn some of the effective habits that help minimize API usage: Don't have a chat for too long, and start new chats often! Batch multiple questions together into one message! Plan first (relatively cheap to do) before letting the agent start building anything! Etc. etc. You really have to contort your usage if you want to avoid waste, which is more or less the opposite of how people are currently using the subscription tools. And it did work! I was able to get quite a lot of tasks done even with a relatively weak model under a tight budget constraint. I left feeling like the default options for most chat services are overkill for a lot of tasks.
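For what it's worth, the arithmetic behind that ticking dollar amount is trivial to replicate yourself. Here's a toy sketch in Python; the per-token prices, class name, and token counts are all made up for illustration, not Sweep's or any real provider's rates:

```python
# Toy sketch of a per-request cost tally like the one Sweep surfaced in its
# chat window. All prices here are hypothetical, chosen only to illustrate
# the mechanics of budgeting against a fixed credit.

PRICE_PER_1K_INPUT = 0.0004   # assumed $/1K input tokens for a cheap model
PRICE_PER_1K_OUTPUT = 0.0016  # assumed $/1K output tokens


class CostTally:
    """Accumulates API spend across a chat session against a fixed budget."""

    def __init__(self, budget: float):
        self.budget = budget
        self.spent = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> float:
        """Add one request's cost and return the running total."""
        cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
        self.spent += cost
        return self.spent

    def remaining(self) -> float:
        return self.budget - self.spent


tally = CostTally(budget=5.00)  # the free account's $5 credit
tally.record(input_tokens=2_000, output_tokens=1_500)  # a short planning chat
tally.record(input_tokens=6_000, output_tokens=4_000)  # one build step
print(f"spent ${tally.spent:.4f}, ${tally.remaining():.2f} left")
```

Even with generous token counts, a cheap model barely dents a $5 credit; the habits above (short chats, batching, planning first) mostly work by keeping the input-token term small, since every message in a long chat gets re-sent as input.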
I think there's a world where "low cost LLM usage" is a fun and interesting challenge as a hobbyist, and brings me back to older times needing to optimize memory and CPU usage when coding by hand for hardware-limited systems. There are so many cool videos out there on like, optimization tricks used in game development on old Nintendo consoles! It's a shame we can't apply that kind of creativity to building workflows for sustainable LLM usage.
But in much the same way powerful computers have made developers lazy, I feel like we've opened Pandora's box for LLM usage and it'll be really hard to go back. I hope R&D pivots to low-cost, low-energy, low-resource ways of doing what LLMs are already capable of doing instead of focusing on better and better models, but I don't know if the incentives are there... When has a monolithic company ever prioritized sustainability? :v
There's no need to be sustainable when venture capitalism keeps giving you more than a hundred billion US dollars every funding round.
AI has its place. It can be, in specific applications, the correct solution. But it really only helps when you know exactly what you need out of it and can also verify its output.
I just hope that the expected workload of developers won't rise, and in general, AI won't be used to automate tasks and turn people into rubber-stamping machines that just check its output, even when said output is way too large to be manually verified. Something that Cory Doctorow pointed out in his excellent essay.
TIL what an AI Booster is, I've been using this stuff daily for years and that's the first I have heard that term.