Welcome to a multidimensional economic disaster - the AI boom wasn’t built for the polycrisis (gifted link)
Link information
- Title
- Welcome to a Multidimensional Economic Disaster
- Authors
- Matteo Wong, Charlie Warzel
- Published
- Mar 26 2026
- Word count
- 2347 words
I think Buffett said it best. "It’s only when the tide goes out that you learn who’s been swimming naked."
That's an amazing quote.
The frustrating thing about this situation we're all in is that after the inevitable crash happens, and we face pain that makes 2008 seem like a mild annoyance, mountains of think pieces, newscasts, movies, and other media will come out with the main thesis of "HOW COULD WE NOT SEE THIS COMING???".
It was as plain as day two years ago that basing the entire economy on a single industry that is totally untested and not actually profitable anywhere was a bad idea. Doubly so when it relies on goods manufactured by literally a single small island nation that has thousands of Chinese nukes pointed at it.
People have been warning about this until they're blue in the face, and people in power have done literally NOTHING about it. They've done worse than nothing. They actively encourage more and more piling onto the AI wagon, and the overall risk mitigation strategy is basically just cross your fingers and hope for the best.
Normal people should be outraged and frustrated that we've let the world come to the precipice in this way just to make a couple of hundred people richer than God.
You know what's even more frustrating?
The current admin is actively making this worse in every way it can. Not just by starting wars, increasing debt, and taking open bribes from corporations, but by creating protectionism for the AI and crypto industries through undermining states' ability to regulate AI.
It seems like every action made by Trump is designed to destroy the country as fast as possible.
The decisions are based on what should be good for Trump by next Tuesday.
I'm skeptical that the AI bubble collapsing would create a crisis worse than 2008, especially when it is itself creating economic havoc. At a certain point, the catastrophizing is a narrative to sell you on the idea of bailing out people's faulty investments.
It would be very nice if all of this ended with open source models running on our laptops that negate any need to ever call out to OpenAI. Trillions of dollars teased out of the ownership class, evaporated into a FOSS package.
I swear I just read like weeks ago some company was tryna do laptops for rent or something like that. I don’t think we’re moving towards more consumer ownership lol
It's already looking like, with Google's acceleration of memory compression for AI systems, things aren't as bleak as they seem. I feel you, I've been needing to get another 32GB of DDR5 memory for my PC (for virtualization/etc locally), but haven't been able to stomach the recent surge in pricing. However, I've worked directly on cloud infrastructure that uses similar technology for analytics and machine learning systems, and the inclusion of dedicated NPUs in relatively baseline (if higher-end, but still consumer-grade) models of AMD's AI CPU lines suggests we'll be able to do this mostly locally surprisingly soon. More responsible companies like AMD will likely still be carrying this torch after the inevitable bubble burst, too.
To a point, all it takes for a relatively basic computer to have this capability is a separate processing unit, so you don't blow your GPU on a 120b model (did that once) or suffer through CPU-only AI computation. The chips that enable this are currently cutting edge, but things move quickly, and they'll be relatively normal in a couple of years.
Beyond this, I'd say initially I thought that research was their "moat." I thought AI wrapper companies would be doomed (and I think many still are, I saw one that was just a UI for making workflows). But "AI use-case" companies will, I think, be ultimately fine even if models stop improving / if they are using local models on scaled-down machines. You can always swap MCP for buttons and use LLMs for text or document-specific tasks, at the end of the day.
But for big tech companies whose approach was paying out the ass on research, with heavy dependence on everything listed in the article, they seem oddly susceptible in comparison?
Oh yeah, my partner was talking about how they're developing hardware to support these models specifically, similar to how bitcoin rigs 10 years ago had GPUs designed specifically to mine bitcoin.
I was really confused as to how they could do that. I work with these models and it seems like they're changing all the time; we can't run weekly query benchmarks because there's some natural drift that happens on the models specifically, so we've had to move to running a baseline right before we make changes instead.
Because if you dig deep enough, it's all just doing simple math on really big arrays of numbers. All of the advancements we've had over the entire lifetime of the technology are just variations on "what if we arranged the arrays a bit differently?" or "what if we had a wrapper program that did something relatively computationally easy with the numbers going in or out?"
(Which is why GPUs: 3D graphics is another thing that mostly involves doing lots of math on arrays of numbers, so that's what GPUs are optimized for on the inside.)
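To make that concrete, here's a toy sketch in Python/NumPy. The shapes and names are illustrative, not from any real model: the point is just that a neural network "layer" boils down to one matrix multiply over a big array of numbers, plus something computationally cheap on the way out.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "model" is just big arrays of numbers (the weights)...
weights = rng.standard_normal((4096, 4096))
inputs = rng.standard_normal((1, 4096))  # one token's worth of activations

def layer(x, w):
    # ...and a "layer" is a matrix multiply plus a cheap elementwise
    # function (here ReLU: clamp negatives to zero).
    return np.maximum(x @ w, 0.0)

out = layer(inputs, weights)
print(out.shape)  # (1, 4096)
```

Swapping architectures mostly changes how many of these arrays there are and how they're wired together, which is why hardware optimized for the matmul itself stays useful across model generations.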
OH so it's not optimized for THAT SPECIFIC MODEL, it's optimized for doing math on massive arrays
Correct!
As I understand it, bitcoin ASICs were the same sort of thing: not actually specific to bitcoin, just optimized for doing the one computationally expensive part bitcoin happens to rely heavily on (hash algorithms) over everything else.
Ah that's so cool, thank you.
I really wanna get into embedded software. I don’t even know where to start besides my little Arduino hobby.
I just wish I could know for sure it's gonna be better than my current AI slop job before I go and study up and do all that social work to try and break in.
They also trained it on mostly FOSS code in the first place.
Interesting - I wonder if the Chinese AI companies will be impacted in the same way? If this crash does come to pass, it will significantly disadvantage the U.S. in that capability race.
They will be impacted but not as heavily since their investments are largely state driven. But lack of chips from Taiwan will still hurt, as will a crash in prices and demand. So maybe they come out ahead but still limping.
Mind you, I think Taiwan has agreements with the US such that they can't export chips to China. So China has to make their own, and they're currently a few generations behind current AI chips. That's the main reason DeepSeek was more efficient with processing power: it had to be to make use of lesser domestic chip capacity.
Yes, with the caveat that this ban only applies to the top end, most modern AI chips. China still relies heavily on Taiwan for all manner of other chipsets.
My problem with this is that we've always been bad at forecasting the next economic crisis. That so many people are converging on this makes me think that the opposite is going to happen.