This is the AI era in a nutshell. Squint one way, and you can portray it as the saving grace of the world economy. Look at it more closely, and it’s a ticking time bomb lodged in the global financial system. The conversation is always polarized. Keep the faith.
Lately, I’ve been preoccupied with a different question: What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap.
Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion.
From the conclusion, it seems there are three paths AI can take:
1. The AI bubble bursts, everyone wakes up from this delusion, and we return to a world without AI.
2. The promised AGI happens and the world is saved/destroyed.
3. AI plateaus and performs just good enough, as the article describes. It might contribute some good solutions to certain issues but nothing life-changing, while making the world a whole lot worse.
While I certainly wish 1 were our future, I don't think we can ever go back to a pre-AI world, even if the AI bubble should burst. I do hope I'm wrong, though. I personally believe that 2 won't happen, at least not in my lifetime, which unfortunately makes 3 the most likely scenario in my eyes.
The AI bubble bursts, everyone wakes up from this delusion, and we return to a world without AI.
Nonsense, never going to happen because for many things it is unquestionably already good enough. For example, learning new complicated things: a situation where you need to verify what you're learning anyway, but it's difficult to get over the beginner hump because you don't even know where to start, what kind of theory you need, you don't know the terminology... LLMs are already immensely useful here in giving you the basic intuition and information on how and where to start, especially the new "reasoning" models that hallucinate less and can also verify things against scientific publications, etc.
AI plateaus and performs just good enough, as the article describes. It might contribute some good solutions to certain issues but nothing life-changing, while making the world a whole lot worse.
I fail to see how "the world is not going to be radically transformed in ways that may end up okay overall, but will no doubt create huge challenges and at least some significantly negative changes" is a bad outcome. It would give us time to eventually adapt, at least. I wish this were true, because I think the result wouldn't be as bad as suggested, but I'm worried the development is not going to stop there.
Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap.
Imo this is not too far from the nonsensical "stochastic parrot" term. We are in the middle of one of the fastest technological evolutions in history, one that exploded into the mainstream because we found out that if we just make a neural net big enough, it starts to have unexpected emergent properties. There is no rule that says the development is going to continue just as fast, but so far it has (o1 was released not even a year ago, and it was a big step up in capabilities); saying that it's surely going to stop any moment now has no basis.
Do note that I'm not saying that AI is great and that the changes are going to be positive, I hope that's obvious from how I see the second outcome.
Also, I cannot predict the economic consequences of a potential bubble bursting, but I do predict that if it does burst, it's not going to be a burst that deletes the whole field; it's just a question of how big the setback is going to be. Worst case scenario, we're going to be using DeepSeek R1 or some equivalent on our own future GPUs, or more likely on AI acceleration chips (already on the market, though not nearly good enough yet).
I agree a bubble burst would just be a reality check. AI will still be good for generating bespoke stock images, translating, autocomplete, search, etc. But the current price of AI companies is set with the expectation that we’re on a quick path to AGI. Coming out of that delusional state will be a big blow to the stock market.
AI will still be good for generating bespoke stock images, translating, autocomplete, search, etc.
DeepSeek is good enough to run basically what we use ChatGPT for, likely slightly worse but not by an order of magnitude, and afaik it can be run locally without needing the most expensive and low-availability GPUs, because reasoning models can be run split across several GPUs (so you do need several top-of-the-line gaming GPUs, but those are accessible). Not 100% sure on that specifically, maybe it was only with VRAM mods, which few people dare to do, but my point is that the usage we have for LLMs now is not going away.
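For what it's worth, splitting a quantized model across cards is a standard feature of the usual local-inference stack. A minimal sketch with llama-cpp-python, assuming a hypothetical quantized GGUF file and two GPUs (the file name, split ratios, and prompt are placeholders, not a recommendation):

```python
# Sketch: run a quantized GGUF model split across two GPUs with llama-cpp-python.
# The model path is hypothetical; tensor_split ratios depend on each card's VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-distill-llama-70b.Q4_K_M.gguf",  # placeholder file
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # spread the weights evenly across two cards
    n_ctx=8192,               # context window; VRAM use grows with it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Where do I start with psychoacoustics?"}],
    max_tokens=400,
)
print(out["choices"][0]["message"]["content"])
```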
You can run a quantized version on a $9500 Mac Studio. With a couple years of progress in optimization and cheaper hardware that might be doable at less than half the cost. Not an unreasonable sum for professional hardware in high paying industries.
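The arithmetic behind that is easy to sanity-check. A back-of-envelope sketch, assuming the full ~671B-parameter DeepSeek-R1 at roughly 4-bit quantization (both figures are assumptions, and real runtimes add cache and buffer overhead on top):

```python
# Back-of-envelope memory estimate for a quantized model; every input is an assumption.
params = 671e9           # ~DeepSeek-R1 total parameter count
bits_per_weight = 4.5    # ~4-bit quantization plus per-block scale metadata
overhead = 1.10          # rough allowance for KV cache and runtime buffers

gigabytes = params * bits_per_weight / 8 / 1e9 * overhead
print(f"~{gigabytes:.0f} GB")  # ~415 GB: fits a 512 GB Mac Studio, not a 128 GB box
```

That same arithmetic is why the 128GB machines discussed below would be limited to distilled or smaller models.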
Framework's Desktop maxes out at 128GB of unified RAM, and is only ~$2.2k. It's being marketed as a local machine learning dev machine. Considering the comparative price of Nvidia cards all on their own, I bet they're going to sell every one they can make.
The Mac Studio tops out at 512GB of unified memory. That enables a different class of model.
Makes me wonder if AMD are going to come out with an extremely high memory config for Strix Halo - NVIDIA won’t do it because they’re worried about cannibalising their datacenter market, but that’s much less of an issue for AMD’s GPUs and they could probably undercut Apple by half at 512GB.
Could make for an interesting option as a locally hosted appliance for SMEs.
Oh, sure, I was just pointing out that dedicated ML hardware is starting to ship that isn't bound to Nvidia's stranglehold.
never going to happen because for many things it is unquestionably already good enough.
I think it's much simpler. It's never going to happen because, as mentioned, they will just try to rebrand it until it sticks again. The chance is incredibly slim, but I can see a society where it's untenable for any business to use AI. That would basically be an "AI-less" world, except for some small toy GPTs for personal use.
I fail to see how "the world is not going to be radically transformed in ways that may end up okay overall, but will no doubt create huge challenges and at least some significantly negative changes" is a bad outcome.
Easy. Unemployment and homelessness hit a point that causes so much wealth inequality that multiple countries engage in violence against AI companies. The military is deployed and we hit a soft civil war. There are no serious plans to address the non-1% economy (e.g. no UBI) and many Western countries fall into a second-world feudalism feel. Futuristic for the richest, and the slums for the non-1%.
They say every society is only 9 meals away from anarchy. I don't see how displacing millions of jobs at such a fast rate (historically speaking) ends in some way beneficial to society. You're kicking the legs out from under society.
The only solace here is that this isn't going to happen in 5 years like corporate hype wants it to. Maybe over the course of 20 years, with a shift in policy makers, we could either address this issue or prevent it entirely.
We are in the middle of one of the fastest technological evolutions in history
Not really. I was just reading an article about how 95% of gen-AI projects end in failure. The technology is there, but it sure isn't being wielded competently. It's wielded by a businessman introduced to a hammer and claiming they are a carpenter.
There are some amazing breakthroughs, but a good 90%+ of the "evolution" is in businesses being emboldened to attack labor as a concept. Prematurely, I might add.
I don't think this is meaningful. Most new business ideas aren't successful.
Most, yes. But 19/20 projects from already-surviving businesses? No. That's not a good sign for adoption of a "revolutionary" tech.
To give a reference: the common metric is that 90% of startups fail within 5 years. So it's noteworthy to hear that 95% of gen-AI projects aren't coming together.
I think it's much simpler. It's never going to happen because, as mentioned, they will just try to rebrand it until it sticks again. The chance is incredibly slim, but I can see a society where it's untenable for any business to use AI. That would basically be an "AI-less" world, except for some small toy GPTs for personal use.
It's already affordable for businesses to use somewhat useful self-hosted AI models. It's just more practical and reasonably cheap to not do that and always have the best thing. And again, while moronic managers push AI in applications where it's useless, and this inflates usage, it is already immensely useful in some areas, so saying anything like "it's only going to stick if it's pushed on us by force" has no basis in reality.
The only solace here is that this isn't going to happen in 5 years like corporate hype wants it to. Maybe over the course of 20 years, with a shift in policy makers, we could either address this issue or prevent it entirely.
This is what I'm saying basically. All of the things you mention above this are going to happen with a fast continuing evolution of AI as well, and to a higher degree. Plateauing development would at least give us some time to adapt.
Not really. I was just reading an article about how 95% of gen-AI projects end in failure. The technology is there, but it sure isn't being wielded competently. It's wielded by a businessman introduced to a hammer and claiming they are a carpenter.
Eh, on one hand I agree with the last sentence, but on the other hand I only believe statistics that I falsify myself, plus this says absolutely nothing about the most important part, which is just using existing AI models in some workflows within existing fields. One example: I'm using LLMs to do R&D in electroacoustics/psychoacoustics as a self-taught person, and it's been an absolute game changer, comparable with the introduction of widely available advanced loudspeaker simulation software.
saying anything like "it's only going to stick if it's pushed on us by force" has no basis in reality.
Except that is the reality around us right now. Companies forcing employees to use AI, forcing consumers to try and use AI, not making deals unless somehow it has AI in it. If it's so wonderful, why try to push it on us instead of an ad push to make us rush to the store to buy it?
All of the things you mention above this are going to happen with a fast continuing evolution of AI as well, and to a higher degree.
Maybe our time scales are different, but I don't consider 20 years to be "fast continuing evolution", especially not in the landscape of tech. 20 years ago was pre-Web 2.0 and pre-smartphone. Those factors did have explosive evolution: clear societal shifts within 5 years, and arguably worrying trends within a decade.
I don't see the same here, not in a way where the societal shifts feel so obvious and drastic. But somehow we see the worrying trends happening already. So that's impressive in its own regard.
plus this says absolutely nothing about the most important part, which is just using existing AI models in some workflows within existing fields.
Article and report (PDF warning) if you want to see it for yourself.
Obviously businesses won't let us audit them, so it's hard to prove this ourselves. Even these studies noted that companies can be tepid here. But the trends don't fill me with much confidence. I'll also note the study seems to softly suggest that small groups can use it well (like your R&D project); it just doesn't seem to scale to business needs properly.
Unemployment and homelessness hit a point that causes so much wealth inequality that multiple countries engage in violence against AI companies. The military is deployed and we hit a soft civil war.
I can't see this ever happening, at least not in the US.
The most violence I've seen is some of the interference with ICE raids/detainments by communities, but that is in direct response to an active situation that is causing direct harm to a community member.
I've pondered two different scenarios that could result in violence, one less likely, the other more likely.
The first is similar in outcome to your comment. If wages, particularly minimum wage and other low-paying hourly jobs, pay less than the cost of commuting (high gas prices) and childcare, then there is no motivation to work these jobs. This could be worsened/forced by the additional Medicaid work requirements being implemented under the Trump administration's recent bill. But I do not see the people affected by this resorting to violence; their first priority is to take care of their family, and being imprisoned or worse would hurt their family.
The other ties into automation much more. If/when self-driving trucks start becoming mainstream, it will result in layoffs for truck drivers. The American Trucking Association's reporting says there were 3.55 million truck drivers employed in 2023. I can absolutely see this group rioting and causing all types of destruction. But I think they would focus their energy on their (ex)employers and less on the technology companies creating the actual software/hardware.
For AI specifically, it's mostly kicking people out of higher paying jobs (as far as I know). A lot of recent tech layoffs are actually fueled by the first Trump admin's Tax Cuts and Jobs Act of 2017, which drastically cut how businesses can expense R&D activities, changing something that was in place for 70 years. This didn't go into effect until 2022. I'm not a tax expert, but it looks like this is being re-established in the latest bill. Will it result in an increase in tech hiring activities? Maybe, but it's likely those hirings will still be a reduced number compared to the amount of layoffs, and that is likely due to the appearance/assumption of AI being able to replace some of these workers.
It happened 90 years ago, so I'll never fully count it out. The thing is a lot of Americans right now are uncomfortable, but not to a point of riot. When we hit that point, all bets are off. No point trying to protect a family who's lost their roof, no point worrying about jail time if you're dehydrated and jobless. Or as we saw 4 years ago: no point in caring about consequences when you feel you've been cheated.
A lot of recent tech layoffs are actually fueled by the first Trump admin's Tax Cuts and Jobs Act of 2017, which drastically cut how businesses can expense R&D activities, changing something that was in place for 70 years. This didn't go into effect until 2022. I'm not a tax expert, but it looks like this is being re-established in the latest bill. Will it result in an increase in tech hiring activities?
Yeah, hard to say. S174 was one factor of many. Interest rates remain "high" (translation: no longer dirt cheap), economic stability still isn't there, tariff threats are annoying, and there's still this urge to try and make another wave of outsourcing, now with AI powering it. Maybe in 2023 or even 2024 this could have helped, but it seems too late in 2025.
[…] we found out that if we just make a neural net big enough, it starts to have unexpected emergent properties
For someone who does not read about or involve themselves with LLMs daily, could you elaborate a bit on what ‘unexpected emergent properties’ refers to?
Simpler NNs or LLMs fail at certain tasks which become possible not because of a significant change in approach but because of a change in size. GPT-2 failed at arithmetic, reasoning, and translation, and couldn't do much in the way of in-context learning. At the time it might have been reasonable to say that maybe they would never be able to do something like understand a joke [not trained on], or do novel math olympiad problems.
An old talk by Peter Norvig on moving into the "data paradigm" of CS mentioned a test of 5 algorithms for finding where sentences ended in Chinese. The point he wanted to drive home was that the algorithm that had performed worst when trained on (iirc) one million examples had become the best with 100 million, with the former #1 dropping to third.
To go a different route, hominin evolution had a period of relatively rapid expansion of the prefrontal cortex, roughly tripling its size over ~2 million years. A simplistic explanation of this is that the brain wrinkled for surface area and got as many cortical minicolumns (the repeating units of the neocortex) as it could.
So in our own history you can make a case that after a couple hundred million years of relative stagnation what ended up giving us the capacity for culture, more abstract thought, and the rest of what separates us from australopithecines was a "neural net big enough."
I think the bubble will burst but there won’t be a great awakening and a return to the world before AI. Instead, it will be repackaged under a different name and resold to the same people who got burned the last time.
This part of the article (which you also quoted) really upset me:
… not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late
Because yeah, of course that’s happening in tech and in politics too. The US is being dismantled by greedy evil authoritarians in front of our eyes while a sufficient number of people cheer it on. In fact AI has been a big weapon used by these evil people. History is full of empty promises and snake oil salesmen, and this is just the latest and most dangerous incarnation.
We will keep building energy-hungry data centers until AGI emerges to save us from the climate collapse that we have no money or will left over to fight. Because we bought data centers.
We will keep building energy-hungry data centers until AGI emerges to save us from the climate collapse that we have no money or will left over to fight. Because we bought data centers.
I remember the dot com bubble. I don't see this bubble working out the way you imagine. Although money is being spent on data centers and power infrastructure, these things don't become useless if AI turns out to not be as useful as we thought. Moving away from fossil fuels requires more capacity to generate and transport electricity, so we can electrify all the things.
There is opportunity cost, since the money could have been spent on something else. When a stock market bubble bursts, a lot of people discover that they're poorer than they thought they were.
But the economy doesn't "run out of money" and people will move on. Society doesn't become "exhausted."
The economy will never run out of money, but little people sure do.
People who rely on employment for a living will be told there's no money: here's a pittance, work overtime, no inflation match, take it or leave it. Nonprofits and research bodies that help our planet will always be told there's just been a financial collapse, there's no money. Local firefighters, on-the-ground EMS people, school teachers, and elderly care staff are always told this.
Every time there's a financial disaster, that's what I hear. I never hear "oh well, the economy never runs out of money, here's some."
Even at the billionaire scale, when money “goes away” that just means faith in the collective hallucination that it existed has been shaken. In the vast majority of cases there’s nothing actually stopping people from agreeing to carry on just as they did the day before if they wanted to.
For whatever reason people seem to have convinced themselves that the economy is a real, independent thing rather than just a means of modulating the society-level trust required for people to do things. Even worse, a lot of people also seem to believe economic abstractions have value in and of themselves, rather than having value only as far as they serve society.
It’s always been absolutely wild to me how few people ever seem to question this. If the abstraction says your job was worth doing yesterday but not today, maybe the abstraction is unhelpful? If the abstraction says there’s always money but you can’t have it, maybe the abstraction is unhelpful? But even pointing out that it is an abstraction falls on deaf ears.
Right, exactly. When this thing goes bust, a lot of real jobs will suddenly evaporate: a crash means people are scared and hugging their life savings instead of building that new housing block, or opening a new branch, or hiring, or any number of real things that pay people real money. Suddenly contracts don't get renewed and entire small businesses collapse. Suddenly the government starts austerity measures, and there is no funding for even the most normal BASIC things that have gone underfunded forever. Pensioners' portfolios will take a severe beating. People don't spend. More businesses collapse.
Society sure looks very much exhausted after a bust, from where I've always stood looking up.
Economic activity is about spending. Wasteful spending is bad, but it still creates jobs, etc, whether it’s wasteful or not. As you say, the problem is when people wake up and spending abruptly stops. The solution is for the government to step in until consumer spending recovers. Austerity measures would make things worse in that situation. State and local governments might not have much choice, but a national government is different since the central bank creates money.
Thinking about this as “exhaustion” suggests that people need to rest and recover, which makes it a dubious metaphor. People don’t benefit from increased unemployment; it’s not beneficial rest. An economy in a recession needs more spending to generate activity, hopefully by spending on less wasteful things. Any delay before setting off in a more promising direction just causes unnecessary suffering.
(Inflation is the opposite. There is too much spending and the government needs to dampen it until capacity improves.)
These things don't become useless if AI turns out to not be as useful as we thought
From a technological perspective, no. They will have plenty use.
From the standpoint of a business focused on maximizing shareholder value: it's dubious. The smart ones will pivot, and the greedy ones will abandon it and chase the next trend. I question how many smart businesses we have in this day and age.
But the economy doesn't "run out of money" and people will move on. Society doesn't become "exhausted."
Sure it does. The Bible has an entire book dedicated to peering into the future just to describe this, symbolized by four horsemen.
Now, do I think we'll hit that point? Hard to say; there are so many small variables right now that can save or doom society as we know it. We're definitely at a tipping point in history, though, that's for sure.
The way I imagine that happening would be if there were so many disasters at once that society can't cope. An investment bubble by itself isn't going to do that. (Disaster planning is still a good idea, though.)
It won't do it all at once, but it might chain off some other disasters, in parallel with completely different events that just happen to come at the same time. Multiple wars breaking out around the world wouldn't be the investment bubble's fault, but they would subtly influence the stock market, which would influence the job market, etc.
If AGI can somehow convince businesses to reverse 100 years of narrative and properly focus on minimizing greenhouse emissions (or maybe put efforts into taking out excessive gases), that truly would be a feat beyond human comprehension.
Or, I don't know, maybe AGI somehow becomes a socialist Terminator and beats capitalism into submission. That's understandable.
Indeed, it would be beyond my comprehension at least.
We've had decades of actual intelligence giving good advice on how not to destroy the only spaceship we live on, to no avail. I'm convinced any advice which isn't "this is how to make more money faster" will be ignored more readily than even the most harebrained but comforting hallucinations.
3 is the most likely scenario. Accompanied by another recession when we realize sinking roughly a quarter of the economy into stochastic text generators is maybe not the sanest idea.
Like many others here I fully believe that at this point it's being propped up because the alternative is global financial collapse. Nothing else is growing right now.
Worth noting that options 1 and 3 both amount to the (financial) bubble popping, and will be devastating to global markets and a lot of ordinary people's pensions. I've seen it argued that the tech companies know it's going this way but are betting on being the last man standing once the recession subsides, like Amazon after the dotcom bubble.
... What if the real doomer scenario is that we pollute the internet and the planet, ...
AI plateaus and performs just good enough, as the article describes. It might contribute some good solutions to certain issues but nothing life-changing, while making the world a whole lot worse.
I do think that the "internet pollution" concept will become a rather insidious problem. When AI answers a query, it is unclear how that answer was obtained (and perhaps that could never be figured out). The answer can be wrong, have omissions, and/or have hallucinations all the same. A person may or may not trust what the AI directly says to them; however, they may eventually learn not to really trust what another person says to them either, if that other person is just repeating the errant output they read from an AI.
The fallout of such an "AI information pollution" event is that people will feel like distrusting most everything they hear, replacing that trust with "what they feel like trusting" (that being what they want to hear). While this isn't a new concept (misleading books and goofy internet sources have long told people whatever they "feel like trusting"), I feel the practice will become more broadly adopted. When it is increasingly believed that all the information you can reasonably access is tainted with inaccuracy, it makes researching anything a bit more frustrating and less desirable.
Bringing this back to the core AI weakness of "it is unclear how that answer was obtained": the AI works as a sort of "black box" that gives "that answer". Without the ability to know how the answer was obtained, it sort of conveys that "how the answer was obtained" doesn't matter, so long as the answer "sounds good enough", since most people will only believe what they "feel like trusting" anyway.
I know, I am doing a lot of "hand waving" through the overall thought process here and coming to a large conclusion that would certainly require far more evidence to back it up. Likely the end result will be far less dramatic than what I concluded. Still, the increase of information being "good enough" for people who "feel like trusting" what they want to hear is a bit concerning.
That’s true for now, and that’s useful because the majority of those links are currently human-written and have at least some semblance of a reason to exist, but as more of the internet is created by AI summarising other pages which themselves are AI-written summaries... at some stage the percentage of the internet not written by AI will plummet and then you’ve got the same problem on a larger scale.
Was just about to post this on the economics of the AI industry, where the host, who is very much pro-AI, didn't have much to offer after every talking point was shot down:
What does it mean to have "better" models?
Who can show revenue increases that are directly driven by AI, not just layoffs?
For how many years can Nvidia sell increasing quantities of chips?
What's OpenAI's strategy when user numbers flatten out?
How will OpenAI generate revenue to make their next funding target from SoftBank? (How will SoftBank repay the 10 other banks they took loans from?)
This was all before GPT-5, so I can only imagine how that would have changed the narrative.
He's being asked about things past next quarter. Of course, business types never think that far ahead nowadays. If infinite growth slows too much, they jump ship and find the next industry to grift. Easier to not answer than to tell the truth here.
gift link as well: https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/?gift=eGjCypnHsndY6-G2sn97lTsQmt8bWFbFXN5nFMj3LwU
Thanks! I changed the topic link to that gifted one instead, BTW. But if you would prefer the link remain available in just the comment, let me know and I can revert it.
Feels like the author didn't really justify some of the extrapolations. I don't think the current line of research will create AGI, but I also don't think the economy is going to collapse or anything. LLMs will make tools, some of which will be useful, and they will bring good and bad, like all technologies. The world will move on. The capex spend by major tech companies is fine; data centers are useful regardless of AI.
Maybe the price of GPU time will dip a bit. That's OK. Maybe it'll lead to new discoveries as startups can access GPU hours for free.
We will continue to have a need for doing linear algebra at scale. It's so far the best way to create models for high dimensional, highly nuanced problem spaces.
The stock market may dip for a bit. That's also OK. There have been many times where the stock market has dipped or had a correction, and the economy has not "crashed".
but I also don't think the economy is going to collapse or anything
At this rate, it will. AI will only be a footnote on why, though. There's a lot of stock in tech companies, but not necessarily enough to single-handedly cause a depression if 2-3 of them falter. Many things need to fall apart all at once to hit a depression.
There have been many times where the stock market has dipped or had a correction, and the economy has not "crashed".
We don't call it a "crash" unless it's a crash, yes. Money is ephemeral, though, so it's hard to define a crash until it already happens. Japan is a great example of this; they didn't call it the "lost decades" in the '90s/'00s, though.
LLMs / AI / Whatever we are going to call it are massively overhyped. They are here to stay though.
I don't think it's as revolutionary as everyone wants to claim it is. To me this is like the productivity boost we gained from integrated development environments (IDEs) for software engineers, or Photoshop for graphic design, or search engines for general productivity, or Wikipedia for looking up information about a specific topic. All of these things are very useful and increase productivity. "AI" is just another thing in that category.
...
...
From the conclusion, it seems there are 3 paths that AI can go:
While I certainly wish 1 is our future, I don't think we can ever go back to a pre-AI world, even if the AI bubble should burst. I do hope I'm wrong though. I personally believe that 2 won't happen, at least not in my lifetime, which unfortunately makes 3 the most likely scenario in my eyes
Nonsense, never going to happen because for many things it is unquestionably already good enough. For example learning new complicated things - a situation where you need to verify what you're learning anyway, but it's difficult to get over the beginner hump because you don't even know where to start, what kind of theory you need, you don't know the terminology... LLMs are already immensely useful here in giving you the basic intuition and information on how and where to start, especially the new "reasoning" models that hallucinate less and can also verify things in scientific publications etc.
I fail to see how "the world is not going to be radically transformed in ways that may end up okay overall, but will no doubt create huge challenges and at least some singificantly negative changes" is a bad outcome. It would give us time to eventually adapt, at least. I wish this were true because I think the result wouldn't be as bad as suggested, but I'm worried the development is not going to stop there.
Imo this is not too far from the nonsensical "stochastic parrot" term. We are in the middle of one of the fastest technological evolutions in history, one that exploded into the mainstream because we found out that if we just make a neural net big enough, it starts to have unexpected emergent properties. There is no rule that says that the development is going to continue just as fast, but so far it has been (o1 was released not even a year ago and it was a big step up in capabilities), saying that surely any moment it's going to stop has no basis.
Do note that I'm not saying that AI is great and that the changes are going to be positive, I hope that's obvious from how I see the second outcome.
Also I cannot predict the economical consequences of a potential bubble bursting, but I do predict that if it does burst, it's not going to be a burst that would delete the whole field, it's just a question of how big the setback is going to be. Worst case scenario we're going to be using Deepseek R1 or some equivalent on our own future GPUs or more likely AI acceleration chips (already on market, though not nearly good enough yet) in the future.
I agree a bubble burst would just be a reality check. AI will still be good for generating bespoke stock images, translating, autocomplete, search, etc. But the current price of AI companies is set with the expectation that we’re on a quick path to AGI. Coming out of that delusional state will be a big blow to the stock market.
DeepSeek is good enough to run basically what we use ChatGPT for, likely slightly worse but not by an order of magnitude, and afaik it can be run locally without needing the most expensive and low availability GPUs because reasoning models can be ran partially on several GPUs (so you do need several top of the line gaming GPUs, but those are accessible). Not 100% sure on that specifically, maybe it was only with VRAM mods which few people dare to do, but my point is that the usage that we have for LLMs now is not going away.
You can run a quantized version on a $9500 Mac Studio. With a couple years of progress in optimization and cheaper hardware that might be doable at less than half the cost. Not an unreasonable sum for professional hardware in high paying industries.
Framework's Desktop maxes out at 128GB of unified RAM, and is only ~$2.2k. It's being marketed as a local machine learning dev machine. Considering the comparative price of Nvidia cards all on their own, I bet they're going to sell every one they can make.
The Mac Studio tops out at 512GB of unified memory. That’s enables a different class of model.
Makes me wonder if AMD are going to come out with an extremely high memory config for Strix Halo - NVIDIA won’t do it because they’re worried about cannibalising their datacenter market, but that’s much less of an issue for AMD’s GPUs and they could probably undercut Apple by half at 512GB.
Could make for an interesting option as a locally hosted appliance for SMEs.
Oh, sure, I was just pointing out that dedicated ML hardware is starting to ship that isn't bound to nvidia's stranglehold.
I think it's much simpler. It's never going to happen because, as mentioned, they will just try to to rand it until it sticks again. It's incredibly slim, but I can see a society where it's untenable for any business to use AI. That would basically be an "Ai-less" world, except for some small toy GPT's for personal use.
Easy. UneUnemployment and homelessness hits a point to cause so much wealth inequality that multiple countries engage in violence against AI companies. Military is deployed and we hit a soft civil war. There's no serious plans to address the non-1% economy (e.g. No UBI) and many western countries fall into thus 2nd world feudelism country feel. Futuristic for the richest, and the slums for the non-1%.
They say every society is only 9 meals away from anarchy. I don't see how displacing millions of jobs at such a fast rate (historically speaking) ends in some way beneficial to society. You're kicking society out.
The only solace here is that this isn't going to happen in 5 years like how corporate wants to hype it up to try and be. Maybe over the course of 20 years, woth a shift in policy makers, we could either address this issue or prevent it entirely.
Not really. I was just reading an article about how 95% of Gen Ai projects ended in failure. The technology is there but it sure isn't being welded competently. It's wielded by a businessman introduced to a hammer and claiming they are a carpenter.
There are some amazing breakthroughs, but a good 90%+ of the "evolution" is in businesses being emboldened to attack labor as a concept. Prematurely, I might add.
I don’t think this is meaningful. Most new business ideas aren’t successful.
Most, yes. 19/20 projects from already surviving businesses, no. That's not a good sign for adoption of a "revolutionary" tech.
To give reference: the common metric is that 90% or startups fail within 5 years. So it's a noteworthy to hear 95% of genAI projects not coming together.
It's already affordable for businesses to use somewhat useful self-hosted AI models. It's just more practical and reasonably cheap to not do that and always have the best thing. And again, while moronic managers push AI in aplications where it's useless, and this inflates usage, it is already immensely useful in some areas, so saying anything like "it's only going to stick if it's pushed on us by force" has no basis in reality.
This is what I'm saying basically. All of the things you mention above this are going to happen with a fast continuing evolution of AI as well, and to a higher degree. Plateauing development would at least give us some time to adapt.
Eh, on one hand I agree with the last sentence, but on the other hand I only believe statistics that I falsify myself, plus this says absolutely nothing about the most important part, which is just using existing AI models in some workflows within existing fields. One example: I'm using LLMs to do R&D in electroacoustics/psychoacoustics as a self taught person and it's been an absolute gamechanger comparable with the introduction of widely available advanced loudspeaker simulation software.
Except that is the reality around us right now. Companies forcing employees to use AI, forcing consumers to try and use AI, not making deals unless somehow it has AI in it. If it's so wonderful, why try to push it on us instead of an ad push to make us rush to the store to buy it?
Maybe our time scales are different, but I don't consider 20 years to be "fast continuing evolution". And especially not in the landscape of tech. 20 years ago was pre web 2.0 and pre-smartphone. Those factors did have explosive evolution and could see clear societal shifts in 5 years. And arguebly worrying trends within a decade.
I don't see the same here, not in a way where the societal shifts feel so obvious and drastic. But somehow we see the worrying trends happening already. So that's impressive in its own regard.
Article and report (PDF warning) if you want to see it for yourself.
Obviously businesses won't let us audit them so it's hard to prove this ourselves. Even these studies noted that companies can be tepid here. But the trends don't fill me with much confidence. I'll also note the study seems to have soft suggestions that small groups can use it well (like your R&D project), but it just doesn't seem to scale to business needs properly.
I can't see this ever happening, at least not in the US.
The most violence I've seen is some of the interference with ICE raids/detainments by communities, but that is in direct response to an active situation that is causing direct harm to a community member.
I've pondered two different scenarios that could result in violence, one less likely, the other more likely.
The first is similar in outcome to your comment. If wages, particularly minimum wage and other low-paying hourly jobs, pay less than the cost of commuting (high gas prices) and childcare, then there is no motivation to work these jobs. This could be worsened/forced by the additional Medicaid work requirements that are being implemented from the Trump administration's recent bill. But I do not see the people affected by this resorting to violence- their first priority is to take care of their family, and being imprisoned or worse would hurt their family.
The other ties into automation much more. If/when self-driving trucks start becoming mainstream, it will result in layoffs for truck drivers. The American Trucking Association's reporting says there were 3.55 million truck drivers employed in 2023. I can absolutely see this group rioting and causing all types of destruction. But I think they would focus their energy on their (ex)employers and less on the technology companies creating the actual software/hardware.
For AI specifically, it's mostly kicking people out of higher paying jobs (as far as I know). A lot of recent tech layoffs are actually fueled by the first Trump admin's Tax Cuts and Jobs Act of 2017, which drastically cut how businesses can expense R&D activities, changing something that was in place for 70 years. This didn't go into effect until 2022. I'm not a tax expert, but it looks like this is being re-established in the latest bill. Will it result in an increase in tech hiring activities? Maybe, but it's likely those hirings will still be a reduced number compared to the amount of layoffs, and that is likely due to the appearance/assumption of AI being able to replace some of these workers.
It happened 90 years ago, so I'll never fully count it out. The thing is a lot of Americans right now are uncomfortable, but not to a point of riot. When we hit that point, all bets are off. No point trying to protect a family who's lost their roof, no point worrying about jail time if you're dehydrated and jobless. Or as we saw 4 years ago: no point in caring about consequences when you feel you've been cheated.
Yeah, hard to say. S174 was one factor of many. Interest rates remain "high" (translation: no longer dirt cheap), the economy stability still isn't there, tarrif threats are annoying, and there's still this urge to try and make another wave of outsourcing, now with AI powering it. Maaybe in 2023 or even 4 this could have helped, but it seems too late in 2025
For someone who does not read about or involve themselves with LLMs daily, could you elaborate a bit on what ‘unexpected emergent properties’ refers to?
Simpler NNs or LLMs fail to do certain tasks which become possible not because of a significant change in approach but from a change in size. GPT-2 failed at arithmetic, reasoning, and translation, and couldn't do much in the way of in-context learning. At the time it might have been reasonable to say maybe they never would be able to do something like understand a joke [not trained on], or be able to do novel math olympiad problems.
*An old talk by Peter Norvig on moving into the "data paradigm" of CS mentioned a test of 5 algorithms for finding where sentences ended in Chinese. The point he wanted to drive home was that the algorithm that had performed worst when trained on (iirc) one million examples had become the best with 100 million, with the former #1 becoming the 3rd best.
To go a different route, hominin evolution had a period of relatively rapid expansion of the prefrontal cortex, roughly tripling its size over ~2 million years. A simplistic explanation of this is that the brain wrinkled for surface area and got as many cortical minicolumns--the repeating units of the neocortex-- as it could.
So in our own history you can make a case that after a couple hundred million years of relative stagnation what ended up giving us the capacity for culture, more abstract thought, and the rest of what separates us from australopithecines was a "neural net big enough."
I think the bubble will burst but there won’t be a great awakening and a return to the world before AI. Instead, it will be repackaged under a different name and resold to the same people who got burned the last time.
This part of the article (which you also quoted) really upset me:
Because yeah, of course that’s happening in tech and in politics too. The US is being dismantled by greedy evil authoritarians in front of our eyes while a sufficient number of people cheer it on. In fact AI has been a big weapon used by these evil people. History is full of empty promises and snake oil salesmen, and this is just the latest and most dangerous incarnation.
We will keep building energy hungry data centers until AGI emerges to save us from the climate collapse that we have no left over money/will to fight. Because we bought data centers.
I remember the dot com bubble. I don't see this bubble working out the way you imagine. Although money is being spent on data centers and power infrastructure, these things don't become useless if AI turns out to not be as useful as we thought. Moving away from fossil fuels requires more capacity to generate and transport electricity, so we can electrify all the things.
There is opportunity cost since the money could have been spent on something else. When a stock market bubble bursts, a lot people discover that they're poorer than they thought they were.
But the economy doesn't "run out of money" and people will move on. Society doesn't become "exhausted."
The economy will never run out of money, but little people sure do.
People who rely on employment for a living will be told there's no money, here's a pittance, work overtime, and no inflation match take it or leave it. Nonprofits and research bodies that help our planet will always be told there's just been a financial collapse there's no money. Local firefighters, on the ground EMS people, school teachers and elderly care staff are always told this.
Every time there's a financial disaster that's what I hear. I never hear oh well the economy never runs out of money here's some.
Even at the billionaire scale, when money “goes away” that just means faith in the collective hallucination that it existed has been shaken. In the vast majority of cases there’s nothing actually stopping people from agreeing to carry on just as they did the day before if they wanted to.
For whatever reason people seem to have convinced themselves that the economy is a real, independent thing rather than just a means of modulating the society-level trust required for people to do things. Even worse, a lot of people also seem to believe economic abstractions have value in and of themselves, rather than having value only as far as they serve society.
It’s always been absolutely wild to me how few people ever seem to question this. If the abstraction says your job was worth doing yesterday but not today, maybe the abstraction is unhelpful? If the abstraction says there’s always money but you can’t have it, maybe the abstraction is unhelpful? But even pointing out that it is an abstraction falls on deaf ears.
Right, exactly. When this thing goes bust a lot of real jobs will suddenly evaporate: a crash means the people are scared and they're hugging their life savings instead of building that new housing block, or opening a new branch, or hiring, or any number of real things that pay people real money. Suddenly contracts don't get renewed and entire small businesses collapse. Suddenly the government will start austerity measures and suddenly there is no funding for even the most normal BASIC things that have gone underfunded forever. Pensioner's portfolios will take a severe beating. People dont spend. More businesses collapse.
Society sure looks very much exhausted after a bust, from where I've always stood looking up.
Economic activity is about spending. Wasteful spending is bad, but it still creates jobs, etc, whether it’s wasteful or not. As you say, the problem is when people wake up and spending abruptly stops. The solution is for the government to step in until consumer spending recovers. Austerity measures would make things worse in that situation. State and local governments might not have much choice, but a national government is different since the central bank creates money.
Thinking about this as “exhaustion” suggests that people need to rest and recover, which makes it a dubious metaphor. People don’t benefit from increased unemployment; it’s not beneficial rest. An economy in a recession needs more spending to generate activity, hopefully by spending on less wasteful things. Any delay before setting off in a more promising direction just causes unnecessary suffering.
(Inflation is the opposite. There is too much spending and the government needs to dampen it until capacity improves.)
From a technological perspective, no. They will have plenty use.
From a business standpoint who's focused on maximizing shareholder value: it's dubious. The smart ones will pivot, and the greedy ones will abandon and chase the next trend. I question how many smart businesses we have in this day and age.
Sure it does. The Bible has an entire book dedicated to peering into the future just to describe this, symbolized by four horsemen.
Now, do I think we'll hit that point? Hard to say; there are so many small variables right now that could save or doom society as we know it. We're definitely at a tipping point in history though, that's for sure.
The way I imagine that happening would be if there were so many disasters at once that society can't cope. An investment bubble by itself isn't going to do that. (Disaster planning is still a good idea, though.)
It won't do it all at once, but it might chain off some other disasters, in parallel with completely different events that just happen to come at the same time. Multiple wars breaking out around the world wouldn't be the investment bubble's fault, but they would subtly influence the stock market, which would influence the job market, and so on.
If AGI can somehow convince businesses to reverse 100 years of narrative and properly focus on minimizing greenhouse emissions (or maybe put effort into removing excess gases from the atmosphere), that truly would be a feat beyond human comprehension.
Or, I don't know, maybe AGI somehow becomes socialist terminators and beats capitalism into submission. That's understandable.
Indeed, it would be beyond my comprehension at least.
We've had decades of actual intelligence giving good advice on how not to destroy the only spaceship we live on, to no avail. I'm convinced any advice that isn't "this is how to make more money faster" will be ignored more readily than even the most hare-brained but comforting hallucinations.
Scenario 3 is the most likely, accompanied by another recession when we realize that sinking roughly a quarter of the economy into stochastic text generators is maybe not the sanest idea.
Like many others here I fully believe that at this point it's being propped up because the alternative is global financial collapse. Nothing else is growing right now.
Worth noting that options 1 and 3 both amount to the (financial) bubble popping, and will be devastating to global markets and a lot of ordinary people's pensions. I've seen it argued that the tech companies know it's going this way but are betting on being the last man standing once the recession subsides, like Amazon after the dotcom bubble.
I do think that the "internet pollution" concept will become a rather insidious problem. When an AI answers a query, it is unclear how that answer was obtained (and perhaps that can never be figured out). The answer can be wrong, have omissions, and/or contain hallucinations all the same. A person may or may not trust what the AI says to them directly; however, they may eventually learn not to really trust what another person says to them either, if that other person is just repeating errant output they read from an AI.
The fallout of such an "AI information pollution" event is that people will feel like distrusting almost everything they hear, replacing that trust with "what they feel like trusting" (that being what they want to hear). While this isn't a new concept (misleading books and goofy internet sources have long told people whatever they "feel like trusting"), I feel the practice will become more widespread. When it is increasingly believed that all information you can reasonably access is tainted with inaccuracy, researching anything becomes a bit more frustrating and less desirable.
Bringing this back to the core AI weakness of "it is unclear how that answer was obtained": the AI works as a sort of "black box" that produces "that answer." Without the ability to know how the answer was obtained, the implication is that "how the answer was obtained" doesn't matter so long as the answer "sounds good enough," since most people will only believe what they "feel like trusting" anyway.
I know, I am doing a lot of "hand waving" through the overall thought process here and coming to a large conclusion that would certainly require far more evidence to back it up. Likely the end result will be far less dramatic than what I concluded. Still, the rise of information that is merely "good enough" for people who "feel like trusting" what they want to hear is a bit concerning.
AI summaries that are based on web searches do link to the articles they looked at, which is helpful for double-checking them.
That’s true for now, and that’s useful because the majority of those links are currently human-written and have at least some semblance of a reason to exist, but as more of the internet is created by AI summarising other pages which themselves are AI-written summaries... at some stage the percentage of the internet not written by AI will plummet and then you’ve got the same problem on a larger scale.
Was just about to post this on the economics of the AI industry; the host, who is very much pro-AI, had not much to offer after every talking point was shot down:
What does it mean to have "better" models?
Who can show revenue increases directly driven by AI that aren't just layoffs?
For how many years can Nvidia sell increasing quantities of chips?
What's OpenAI's strategy when user numbers flatten out?
How will OpenAI generate revenue to make their next funding target from SoftBank? (How will Softbank repay the 10 other banks they took loans from?)
This was all before GPT-5, so I can only imagine how that would have changed the narrative.
He's being asked about things past next quarter. Of course, businesses never think that far ahead nowadays. If infinite growth slows too much, they jump ship and find the next industry to grift. Easier to not answer than to tell the truth here.
Mirror: https://archive.is/YWwHd
gift link as well: https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/?gift=eGjCypnHsndY6-G2sn97lTsQmt8bWFbFXN5nFMj3LwU
Thanks! I changed the topic link to that gifted one instead, BTW. But if you would prefer the link remain available in just the comment, let me know and I can revert it.
Should be fine in the topic, thanks!
Feels like the author didn't really justify some of the extrapolations. I don't think the current line of research will create AGI, but I also don't think the economy is going to collapse or anything. LLMs will make tools, some of which will be useful, and they will bring good and bad, like all technologies. The world will move on. The capex spend by major tech companies is fine - data centers are useful, regardless of AI.
Maybe the price of GPU time will dip a bit. That's OK. Maybe it'll lead to new discoveries as startups can access GPU hours for free.
We will continue to have a need for doing linear algebra at scale. It's so far the best way to create models for high dimensional, highly nuanced problem spaces.
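To make that concrete, here's a minimal sketch (Python with NumPy; the sizes and names are purely illustrative, not taken from any real model) of what "linear algebra at scale" means in practice: a single neural-network layer is just a matrix multiply plus a pointwise nonlinearity, and GPUs are valuable because they do exactly this, billions of times over.

```python
# Minimal sketch: one neural-network "layer" is just linear algebra.
# All sizes and names here are illustrative, not from any particular model.
import numpy as np

rng = np.random.default_rng(0)

# A batch of 32 inputs, each a 512-dimensional point in the problem space.
x = rng.standard_normal((32, 512))

# The layer's learned parameters: a weight matrix and a bias vector.
W = rng.standard_normal((512, 256)) * 0.02
b = np.zeros(256)

# The layer itself: a matrix multiply, a shift, and a pointwise nonlinearity.
h = np.maximum(x @ W + b, 0.0)  # ReLU(xW + b)

print(h.shape)  # (32, 256) -- the inputs re-represented in a new 256-dim space
```

Whatever happens to the hype, hardware that accelerates that `x @ W` step will keep finding work.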
The stock market may dip for a bit. That's also OK. There have been many times where the stock market has dipped or had a correction, and the economy has not "crashed".
At this rate, it will. AI will only be a footnote on why, though. There are a lot of tech-company stocks, but not necessarily enough to single-handedly cause a depression if 2-3 of them falter. Many things need to fall apart all at once to hit a depression.
We don't call it a "crash" unless it's a crash, yes. Money is ephemeral, though, so it's hard to define a crash until it has already happened. Japan is a great example of this; they didn't call them the "lost decades" in the '90s/'00s, though.
LLMs / AI / Whatever we are going to call it are massively overhyped. They are here to stay though.
I don't think it's as revolutionary as everyone wants to claim it is. To me this is like the productivity boost we gained from integrated development environments (IDEs) for software engineering, or Photoshop for graphic design, or search engines for general productivity, or Wikipedia for looking up information about a specific topic. All of these things are very useful and increase productivity. "AI" is just another thing in that category.