When I rebooted my messy personal website a few weeks ago, I realized: I would have paid $25,000 for someone else to do this.
When a friend asked me to convert a large, thorny data set, I downloaded it, cleaned it up and made it pretty and easy to explore. In the past I would have charged $350,000.
Maybe I'm the one who is out of touch here. Maybe AI truly is groundbreaking and earth-shattering. But I can't help but read the two excerpts above and think of a person in 2005 firing up WordPress and thinking they would have paid tens of thousands to have a developer make that website, when they actually would have used GeoCities or a more basic HTML template instead. I can't help but think of someone exporting a data set into Excel, running the Filter function, cleaning it up, and thinking they could have charged hundreds of thousands as a consultant for that. It's just a gross misunderstanding of what proper development can actually accomplish, and of where you actually get value from hiring a human.
AI isn't going to be able to replace the web development team of a Fortune 500. It can't replace the in-house IT of these companies. If you try to do that, you are going to have huge issues that cost you hundreds of thousands of dollars in downtime. What are you going to do when your VPN goes down at 7 am on Monday if you don't have someone who supports it? Or when your sales portal designed by Claude crashes, or refunds all of your customers but keeps shipping orders? What do you do when a bug lists a $1,000 item for $0.10?
The $350k for dataset cleanup seems absurd, or seems to assume a temp rate billed yearly even though it wasn't. Something like $175 an hour for top-of-the-line work on a one-and-done temp job is believable, but obviously you're not actually charging $350k.
Data cleanup CAN be an area where AI punches above its weight, because you might know the correct answer and thus can quickly check if the AI got it right (everything SHOULD sum to X, but it doesn't). However, it's a lot more likely to be one of the worst possible uses. When you can't check the AI's output it's hard to tell if it's actually getting the right result or just getting a result that passes a cursory glance.
In the end, without a link to the data (which I know they can't share), I feel like I'd default to "and then everyone clapped" energy on it. It could also just be an insane cost-of-living area (Cali/NY), which is another reason I hate raw numbers thrown out like that without some sort of breakdown as to why they'd make sense.
When you can't check the AI's output it's hard to tell if it's actually getting the right result or just getting a result that passes a cursory glance.
Feeding an LLM thousands of raw data cells to manually transform is an awful idea that inevitably results in hallucinations somewhere along the way. Using an LLM to create a script or function is more reasonable, but it's still not very accessible to someone who has never heard of R or pandas.
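A minimal sketch of what the script approach looks like, in plain Python so it stays readable (the same idea applies to an R or pandas script). All the column names, values, and the expected total here are hypothetical; the point is that the transformation is inspectable, and the known answer is asserted rather than eyeballed.

```python
# Hypothetical raw export rows: inconsistent labels and currency strings.
raw = [
    ("North", "$1,200"),
    ("north ", "$800"),
    ("South", "$2,500"),
    ("SOUTH", "$1,500"),
]

def clean_row(region: str, revenue: str) -> tuple[str, float]:
    """The one-shot transformation an LLM could write (and you can read)."""
    return region.strip().title(), float(revenue.replace("$", "").replace(",", ""))

cleaned = [clean_row(region, revenue) for region, revenue in raw]

totals: dict[str, float] = {}
for region, revenue in cleaned:
    totals[region] = totals.get(region, 0.0) + revenue

# The check that makes the output verifiable: the grand total is known up front.
EXPECTED_TOTAL = 6000.0  # hypothetical known answer ("everything SHOULD sum to X")
assert sum(totals.values()) == EXPECTED_TOTAL, "cleanup mangled the data"
print(totals)  # → {'North': 2000.0, 'South': 4000.0}
```

Because the check is a hard assertion against a number you already know, a hallucinated transformation fails loudly instead of passing a cursory glance.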
I mean, you can do it in raw VBA, but this is where we go in the loop of the author supposedly charging $350k for the work. I'm having a hard time seeing work like that EVER needing only in-sheet manipulation.
There are businesses that work on high-end websites and also freelancers who build very cheap ones. The people who hire a team of professionals to work for months on a website are an entirely different market.
Reading your comment, it solidified something for me: those figures up there really give off the vibe of that "ten dollar banana" meme. Extremely out of touch.
Or when your sales portal designed by Claude crashes, or refunds all of your customers but keeps shipping orders? What do you do when a bug lists a $1,000 item for $0.10?
Those are pretty much why the "AI drive thru"'s first wave failed spectacularly. It doesn't even sound like something you'd need AI for (convert voice recognition data into a string and query a database for existence and stock), but I struggled for over a minute to order a basic combo meal before a human took over.
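To the point that the lookup itself never needed a model: once speech recognition has already produced a transcript, the ordering step really is just matching words against a menu and stock table. A toy sketch with invented items, prices, and stock counts, not any vendor's actual system:

```python
# Hypothetical menu and stock table; speech-to-text is assumed done upstream.
MENU = {"cheeseburger": 4.99, "fries": 2.49, "cola": 1.99}
STOCK = {"cheeseburger": 12, "fries": 30, "cola": 0}

def take_order(transcript: str) -> tuple[list[str], float]:
    """Match recognized words against the menu; skip out-of-stock items."""
    ordered, total = [], 0.0
    for word in transcript.lower().split():
        if word in MENU and STOCK.get(word, 0) > 0:
            ordered.append(word)
            total += MENU[word]
    return ordered, round(total, 2)

print(take_order("one cheeseburger with fries and a cola"))
# → (['cheeseburger', 'fries'], 7.48)  -- cola is out of stock
```

Nothing here requires a language model; the hard part was always the voice recognition, which existed long before LLMs.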
And they want to have this technology drive legal firms and medical devices.
If you try to do that, you are going to have huge issues that cost you hundreds of thousands of dollars in downtime. What are you going to do when your VPN goes down at 7 am on Monday if you don't have someone who supports it? Or when your sales portal designed by Claude crashes, or refunds all of your customers but keeps shipping orders? What do you do when a bug lists a $1,000 item for $0.10?
I think what Cory Doctorow predicts in his essay will happen: at some point somewhere, a human will be in the loop, tasked with overseeing an operation they cannot realistically oversee. That's not really their job, their true purpose is to be the person to blame when things go wrong.
I anticipate this process will happen over time: teams shrink down to one person who can handle the exceptions while the normal things are automated more and more, until they too are fired and we get an AI overwatching the other AIs. Then no one will understand anything anymore; the engineers who did will be retired or dead, and the new generation will have to figure it out all over again when shit explodes.
Great, another article trying to convince us that AI is good and the future is now. I guess I better allow them to increase my hydro bill so they can build more data centres! You know you have a winning product when you have to set endless amounts of money on fire to convince everyone that your AI slop is actually good.
my hydro bill
You're in BC I take it then?
BC Hydro banned crypto mining; we should either put a higher cost on power for AI data centres or just ban them from using our power as well.
I remember my Canadian friend once asked a woman living on a Native American reservation in Arizona what her “hydro bill” looked like. One’s confusion transferred to the other and only later did my friend realize what had happened.
I had a discussion with my mom about it once when I was a kid. She thought hydro was a root word (I don't know if that's the right term) for power, and I was trying to explain that it was water. She didn't believe me.
When I rebooted my messy personal website a few weeks ago, I realized: I would have paid $25,000 for someone else to do this.
Really? What kind of features are on this website? I made a new portfolio website in a few weeks for free* (with Github Pages as the host). Likewise, I spent a few months helping out my parents with their business website and I can't imagine they spent more than a few hundred on enterprise plans on Squarespace and google, and buying a few domains.
Website work has always been pretty common as far as finding mercenary work goes. It's one of the few things you can still find postings for in this slump. It's been a very long time since we needed a small team to execute all this; WYSIWYG website builders have been around for at least 15 years at this point. So I'm not as impressed hearing that one can be vibe coded in an afternoon.
Is the software I’m making for myself on my phone as good as handcrafted, bespoke code? No. But it’s immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company’s quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system.
Yeah, we call that enshittification. We haven't achieved 6-7 figure projects in a shorter time. We've done the equivalent of the Spider-Man doodle, where we spend a minute on something and decide it's 10-minute quality (with 10-seconds-level maintainability). Meanwhile, 10 minutes already falls well short of the 3-4 hours of work a professional-level piece really takes.
It's ultimately a scapegoat to accept even worse work. I'm sure engineers have been ringing alarms over "keep it simple" for decades (which would let leaner teams produce faster), only to be ignored until a snake oil salesman presents the idea to executives on the promise of one day needing 0-2 engineers in the whole company.
What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes poof? That doesn’t mean that the software will be good. But most software today is not good.
Once again: this isn't a change in engineering, nor even in guidance; it's a lowering of expectations. I feel I'll just be repeating my last paragraph to rant about this, but: why does AI suddenly give you permission to push out slop?
It was never the technology stopping this. It was the bureaucracy rejecting it piece by piece: the PR people worried about public perception of making a bad product, the business crunchers measuring expected revenue if we include/exclude features X/Y/Z, the security people worried about vulnerabilities that lead to ransomware, the legal people worried about lawsuits if the product is mission-critical and fails, as well as stakeholders having all these teams advising them.
Where did all those layers poof to so suddenly? Where's the accountability?
But I’ve been around too long. The web wasn’t “real” software until it was. Blogging wasn’t publishing. Big, serious companies weren’t going to migrate to the cloud, and then one day they did.
Survivorship bias. We can point to 10x the failures over this too.
Remember the idea of how our phones would replace our laptops? It's kinda true, but not really. We didn't end up with those hybrid situations, nor thin clients that we snap our phones into for serious work. It's an odd situation because nothing stops you from doing this today from a technological POV. We just didn't follow through.
VR/Metaverse was big enough to have a trillion-dollar company rebrand. I think by now it's safe to say the wave is over for now. They will absolutely try again in 5-10 years, but the tech needs to be very convenient to really gain mass adoption. (I'll tack 3D TVs/theatres onto this as well from an AR perspective.)
Social Media... yeah, what a mess in general. The world isn't more connected, you don't go on "social media" to "talk to friends" these days. It's a gig economy entertainment platform. Enshittification basically created this cold war between those who want to build community and those who want to broadcast their voice to the entire world. Any true connections made are despite social media's attempts, not because of it.
Crypto is the big one right now. Sure, it made some lucky people very rich, but it has a very awkward reputation otherwise. They are still trying to push crypto as "real money", and that has only caused it to lose all those decentralized qualities the old guard lauded crypto for.
To name some high-profile examples. I'm sure there are dozens of small-scale things out there too.
The market keeps convulsing, and I wish we could hit the brakes. But we live in a brakeless era.
We, the consumers, are at least half the brakes. As usual, apathy will destroy us all. I'm really not a fan of this sort of techno-defeatism, as if there's nothing at all we can do to stop this. People who care can spread awareness to others, write to representatives to propose regulations, unionize, and push back. And I see all this happening.
Even if you think this is the next industrial revolution, the first one didn't unfold in a clean, bloodless fashion. And even after a few generations, it wasn't a clean, peaceful disagreement that led to unionization.
When it's your own personal website and you decide that the cheaper job is good enough, then it's good enough. We just do not need the gold-plated website. It's not "enshittification" when you made your own decision not to spend the money.
25k for a personal website is just an insane price-tag in general. If you're hiring a freelancer to do this, you're getting massively overcharged unless you are asking for some wild bespoke stuff.
It's not "enshittification" when you made your decision to not spend the money.
Scale and what you're not spending on varies here. My general theme wasn't that "AI isn't efficient enough" (even if that's a bit of my sentiment). It's "we didn't need AI to implement the efficiency examples outlined here". We could have cut 80% of a workforce and forced the work onto the remaining 20% a full decade ago in many sectors. I don't think it's a good idea, but it was possible.
It's more that modern corporations are executing on something they've wanted to do anyway (massively "downscale" on North American talent in lieu of hiring overseas or H1B's), using AI as the scapegoat, and worsening their products as a result. That's pretty much enshittification in a nutshell; workers lose, consumers lose, corporations don't care.
Again, I don't think this applies when you are making direct decisions about how much to spend on yourself. You can buy a fancy dinner or a cheap dinner, a fancy car or a used car. You can decide how much you're spending on your house. If you're deciding yourself to buy the cheap stuff, it's not "corporations" imposing this on you.
If you're deciding yourself to buy the cheap stuff, it's not "corporations" imposing this on you.
Enshittification means you don't get a choice on what to buy and at what quality. I cannot choose to "turn off" Copilot without scouring the net for all the specific registry keys to hit in just the right way. And I can't go back to an older update either (Microsoft will very aggressively push updates on you). I either accept the slop, throw the entire OS away and install Linux, or throw my computer out and buy a Mac (AKA the kings of "you will do things our way and like it").
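For anyone hunting for it: as far as I know, the commonly cited switch is a Group Policy registry value, sketched below as a .reg fragment. Treat the key path and value name as an assumption to verify against your specific Windows build, since Microsoft has moved these around between updates.

```reg
Windows Registry Editor Version 5.00

; Commonly cited per-user policy value to disable Windows Copilot.
; Verify against your Windows version before applying.
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsCopilot]
"TurnOffWindowsCopilot"=dword:00000001
```

Which rather proves the point: an off switch that requires hand-editing the registry is not a choice offered to normal users.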
Even if there was competition, enshittification isn't concerned with that. It's about the fact that your current car (be it fancy or a beat-'em-up) gets nigh-objectively worse for artificial reasons. The ability for me to sell my BMW for a Lexus or Porsche isn't mutually exclusive with the fact that over the last few years BMW tried to sell me a subscription to a feature that was built into the car's price before.
Yes, sometimes companies are in a position to impose decisions on their customers that they don't like, and that can be frustrating. I've experienced that too with software updates.
But I still think it's wrong to call it "enshittification" when that's not true! That's pretending to be more helpless than you actually are. When building your own custom website, you have lots of choices.
From the article:
November was, for me and many others in tech, a great surprise. Before, A.I. coding tools were often useful, but halting and clumsy. Now, the bot can run for a full hour and make whole, designed websites and apps that may be flawed, but credible. I spent an entire session of therapy talking about it.
[...]
Personally this all feels premature, but markets aren’t subtle thinkers. And I get it. When you watch a large language model slice through some horrible, expensive problem — like migrating data from an old platform to a modern one — you feel the earth shifting. I was the chief executive of a software services firm, which made me a professional software cost estimator. When I rebooted my messy personal website a few weeks ago, I realized: I would have paid $25,000 for someone else to do this. When a friend asked me to convert a large, thorny data set, I downloaded it, cleaned it up and made it pretty and easy to explore. In the past I would have charged $350,000.
That last price is full 2021 retail — it implies a product manager, a designer, two engineers (one senior) and four to six months of design, coding and testing. Plus maintenance. Bespoke software is joltingly expensive. Today, though, when the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month plan.
That’s not an altogether pleasant feeling. The faces of former employees keep flashing before me. All those designers and JavaScript coders. I could not hire the majority of them now, because I would have no idea how to bill for their time. Some companies, including IBM, think A.I. will create tons of new jobs. But no one thinks they’ll be the same as the old jobs.
[...]
Is the software I’m making for myself on my phone as good as handcrafted, bespoke code? No. But it’s immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company’s quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system.
[...]
Except … what if, going forward, it’s not? What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes poof? That doesn’t mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly.
And for lots of users, that’s going to be fine. People don’t judge A.I. code the same way they judge slop articles or glazed videos. They’re not looking for the human connection of art. They’re looking to achieve a goal. Code just has to work.
[...]
The market keeps convulsing, and I wish we could hit the brakes. But we live in a brakeless era.
No matter where you work, my hunch is this is coming for you. Have you noticed the software you use every day adding “A.I. features”? That’s the top of the slippery slope. Whatever unifying principle equates to ship risk in your industry, people are trying to mitigate it with A.I. Insurance, finance, architecture, manufacturing, textiles, every kind of project management — they want to automate it all through A.I.
[...]
I’ve spent my last few years working with a team to build an A.I. software platform, trying to help clients and customers navigate all of these changes. That sounds like the perfect job for the moment, right? It’s not. Every six months, some new A.I. bomb goes off in our industry, and we have to metabolize the change, reset our product, change our strategy and marketing and adapt, at great expense. Our road map keeps getting pushed back as a result of all this “progress.” Everyone is fried.
[...]
All of the people I love hate this stuff, and all the people I hate love it. And yet, likely because of the same personality flaws that drew me to technology in the first place, I am annoyingly excited.
Here is why: I collect stories of software woe. I think of the friend at an immigration nonprofit who needs to click countless times, in mounting frustration, to generate critical reports. Or the small business owners trying to operate everything with email and losing orders as a result. Or my doctor, whose time with patients is eaten up by having to tap furiously into the hospital’s electronic health record system.
[...]
After decades of stories like those, I believe there are millions, maybe billions, of software products that don’t exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can’t find the budget. They make do with spreadsheets and to-do lists.
My industry is famous for saying “no,” or selling you something you don’t need. We have an earned reputation as a lot of really tiresome dudes. But I think if vibe coding gets a little bit better, a little more accessible and a little more reliable, people won’t have to wait on us. They can just watch some how-to videos and learn, and then they can have the power of these tools for themselves. I could teach you now to make a complex web app in a few weeks. In about six months you could do a lot of things that took me 20 years to learn. I’m writing all kinds of code I never could before — but you can too. If we can’t stop the freight train, we could at least hop on for a ride.
The market keeps convulsing, and I wish we could hit the brakes. But we live in a brakeless era.
No matter where you work, my hunch is this is coming for you. Have you noticed the software you use every day adding “A.I. features”? That’s the top of the slippery slope. Whatever unifying principle equates to ship risk in your industry, people are trying to mitigate it with A.I. Insurance, finance, architecture, manufacturing, textiles, every kind of project management — they want to automate it all through A.I.
This drives me nuts. Yes, we can stop. That is the purpose of the Government - it just sucks at being proactive. We should be legislating and non-profiting real guard rails and thinking deeply about what we want to happen. Instead we've let the AI companies push us into some kind of odd determinism that this is our only possible future.
I’m curious what a regulation to stop using AI would actually look like. No LLMs in particular I guess? Because all the industries mentioned already make copious use of machine learning technology, and have for decades.
I think my ideal (and I recognize the unfeasibility here due to cost and complexity) would be a highly competent regulatory authority that puts out regularly adjusted limits on:
1. Data centre rollout
2. Emissions (tied to 1) and other power generation
3. Data centre utilization by a single entity
4. Frontier research being conducted, period
5. Frontier research being rolled out commercially
It's insane that 4 and 5 in particular are being researched and released with ZERO CONTROLS. Look at biomedical research - we have a highly structured, rigorous, and regimented process for the release of drugs (or we did before the MAGA disease struck). We should, hypothetically, be able to have an AI equivalent.
we have a highly structured, rigorous, and regimented process for the release of drugs (or we did before the MAGA disease struck). We should, hypothetically, be able to have an AI equivalent.
THANK YOU! You'd think we have enough evidence of the societal damage brought on by Facebook's "move fast and break things" mantra to not allow the same mistake with something that could potentially upend every single industry and completely blur the lines between fact and fiction. Absolute insanity and a complete betrayal of the original mission behind the creation of these tools.
Frontier research being conducted, period
Frontier research being rolled out commercially
How would you envision this interacting with, say, ByteDance, which is in China? What about national security concerns, with America falling behind its opponents?
This is just from the outside looking in and maybe things are going great, but the question of America "falling behind" will not be helped by the current AI strategy in the States.
China isn't exactly all rainbows and sunshine either, but they don't have 40% of their economy in the AI basket.
In the same way, ByteDance is not putting all its prospects on AI. If the tech proves to be another fad, BytePlus (ByteDance's AI division, AFAIK) takes a hit and it's at most a $200 billion investment lost. They still have TikTok globally, Douyin (Chinese TikTok), Nuverse (their games division), and a few social media and web services in South/East Asia. They can buy from Nvidia but also have access to Chinese foundries, so they are not subject to the inflating hardware costs. But most importantly, Chinese AI labs don't need to be on the bleeding edge of AI research. It is known that ByteDance used Western models and data sets to make their own, and they have delivered tangible results with a distributed, lower-cost research structure.
By contrast, Microsoft has cannibalized its developers, multiple divisions, and a lot of goodwill in the pursuit of Copilot, and this is on the back of a lot of unpopular decisions. There is serious consideration of untangling systems from full MS stacks, some going as far as client machines, which would have been insane even a year ago.
Mark Zuckerberg is probably over the moon because the AI money furnace has replaced the Metaverse money furnace. Elon Musk has killed Twitter and Tesla EVs and looks ready to put SpaceX on the line for his dream of humanoid-AI-robot-slave-girlfriends. Nvidia is a big winner now, but can they really keep burning so hot when every financial report is make-or-break? And Anthropic and OpenAI are nothing more than acquisition pigs that might just be past their prime; I suspect they will soon be bought outright or fizzle out. This is 20% of the US economy. Another 20% is Apple and Google, who are probably playing this smarter than the rest, but what's driving growth there if not AI? More Google ads? More power for your web clients and text messages?
What does this have to do with national security?
Everything is riding on not just the mass adoption of AI, but the exponential returns it promises. Because so much is on the line, rational decisions are not being made. That is not security.
Two related examples: if China just started mass-exporting fairly priced traditional hardware today, I suspect that's multiple US companies and Taiwan's semiconductor market gone (and all the geopolitical securities with it). Those AI chips are paid for, and once you become an unreliable supplier, customers are not changing supply chains again to give you a second chance. And even with the trade war, there is demand for that hardware in the States. Dropping tariffs signals weakness. Keeping them fuels unrest and encourages black-market trade and all the security risks that entails.
What if China starts putting out GPUs at fractions of Nvidia prices? Blackwells are not magical. They can be reverse engineered, iterated on, even streamlined and downsized. ByteDance plans to buy 2000. US companies are committed to Nvidia supply for at least two years. Canada is looking very cozy with China and has just iced out the US from a Can-Aus-UK-EU deal. China is lowering trade barriers, and there are already ships of dirt-cheap consumer goods, as well as solar panels and inverters, going everywhere. What's the play then? Strong-arm the entire world into not taking a good deal?
Besides all that, what does an AI national security failure look like? Killer robots, digital/info warfare, and instant atomic apocalypse have been facts of life for a while. So what does a bad AI superpower do?
I imagine utility grids being vulnerable and hundreds of thousands of people at risk of losing access to water and power. Mass surveillance as a tool for targeting dissent and blackmail. Unreliable information systems leading to ineffective decision-making and centralized failure cascades. Seamless social engineering through manipulated online discussion, algorithmic feeds and broadcast media. A military and police state that can act and kill with impunity, beyond oversight or controls. People's jobs and incomes being constantly on edge while asset ownership is impossible, even for creatives and inventors. Third parties having complete control of your hardware, storage and tools. An economy constantly on the verge of hyper/stagflation. Intellectual regression through the loss of skills, knowledge centers and education, leading to occult, mythical or magical thinking. Population collapse. Face scanning to access communication channels. Your doorbell spying on you. Society-wide existential dread.
If that's the case, it just sounds a lot like the cost of AI progress. You could argue that it's not the fault of AI for most of these social failures but it is well known where the AI oligarchs all stand on these matters.
The whole AI bubble is a ChatGPT mash-up of all the social panics: from the Red Scare, where social/moral conformity justified programs like MKUltra and the Stanford Prison Experiment, to the "stopping another 9/11" mindset that led to the DHS surveillance state, and even hints of "fixing 2008" by throwing money at billionaires at the expense of American taxpayers. Constantly leveraging real risks and anxiety to justify rampant abuse and profiteering. And things never seem to get better or go back to normal.
What about national security concerns, with America falling behind its opponents?
The question is moot, with half of Americans acting like their brains have fallen out of their crania.
That's partially thanks to manipulative AI, but also social media and news algorithms, all designed to make users addicted, worsening an already bad outcome. If national security is a concern, you/they should regulate these systems to oblivion, starting yesterday.
I agree with a lot, but not all, of what @SloMoMonday says below. Most importantly, I agree with the main point that this isn't an optimal economic strategy even from a cold geo-political conflict perspective. That said, it doesn't actually address your question directly, so I'll respond too.
I'll focus on your actual question in 2 parts.
First, an ideal but unlikely outcome: We manage to negotiate in good faith with the Chinese Government to come to agreed-upon global standards of research and care. We have similar international alignment already on: the Law of the Sea, Nuclear Arms Control (ish), and intellectual property law (I know the Chinese are sketchy on enforcement, but they are party to the treaty). So, in this ideal world each country has its own governing body, but those bodies are largely aligned, allowing us to manage this period of flux in a more controlled way.
Realistically, I would say at minimum we do what the Chinese are already doing: none of the Chinese companies are developing things in a way the Communist Party isn't already approving. They're already enforcing stricter controls on the usage of personal information, for instance (lol, of course this doesn't apply to the Government's own uses of AI, but that's another topic). So you can still encourage pretty break-neck development while also putting some government backstops and "final say" authority on how this research is being conducted.
we have a highly structured, rigorous, and regimented process for the release of drugs (or we did before the MAGA disease struck). We should, hypothetically, be able to have an AI equivalent.
This sounds like a recommendation for a total surveillance state, or a return to a pre-computer society.
The FDA can just come and inspect your facilities any time they want, and if you refuse or they don't like what they see, they can shut down your entire ability to do business. There's a long and involved submission process with many steps that usually has hundreds or even thousands of people working on various parts of it. On the enforcement end, the DEA can investigate whomever they want and arrest anyone suspected.
Tying this back to computing, the government would have to be able to regulate the equipment individuals are allowed to have, monitor any capable equipment for misuse, and have a strong enforcement capability to raid anyone's house if suspected of illicit AI development activities. This sounds absolutely horrible to me. You might say that this is just for the big players, but what if people come up with ways to do more with less hardware? You'd have to be constantly checking.
ETA: I don't think the issue with your idea is feasibility. It's absolutely feasible to do that. I think the main issue is that it runs into a bunch of moral issues.
Wow, talk about a "slippery slope" strawman. The FDA can shut down your business, but how often do they do that? Do you have any evidence that this is some kind of foundational problem? We are constantly seeing medical companies and drug companies growing and delivering new solutions, so I'd hardly call it a police state solution. Are you saying that we should abolish the FDA?
As for the actual topic at hand, it absolutely is a matter of size and scale. The Government is a living entity, not a permanent, unchanging monolith. We deal with large players who are using entire States' worth of power, not random people tinkering with open source AI tools at home. If one day we reach a point where every AI model is small enough to be distributed at that scale, we've clearly reached a new paradigm shift (again), and should be reflecting on the possibilities for governance at that point.
I'd appreciate it if you wouldn't label my comment like that. I'm arguing in good faith. I'm assuming you're arguing in good faith. Let's have a conversation, not a fight.
For context, I work in drug development. So no, I'm not suggesting we abolish the FDA. People aren't generally doing drug development in their garages, and if they do, the DEA might come knocking. Actual shutdowns are fairly rare, but they do happen. I tried briefly to find a good list but failed, so here's just one example from 2023. The FDA also issues warning letters more frequently, which are considered very serious and effectively hamper progress while the issues are being addressed. This can delay some processes for months, which amounts to many millions of dollars in costs. The last time I personally was involved in supporting an FDA audit, my department basically stopped all development work during that time to work on the documentation.
So, back to computing, the most problematic issue to me is mention of frontier research and rollout. Lots of research happens at smaller scale and rollout can be just sharing your findings. How can you regulate that without widespread surveillance and enforcement?
Fair point about civility - I will try to keep it respectful. But my response is that jumping from "we need a government regulatory body" to "this is either a total surveillance state or a pre-computer society" is a wild leap that is highly disrespectful and smells a lot like sealioning to me. So that's where my strong response came from. I think this subsequent response is much more civil.
To go back to your response - the link quoted notes that they received a warning way back in 2019 and failed to comply with said warning even 3 years later before they were shut down. That's incredibly reasonable to me and frankly I'd probably prefer they be shut down much before then. As for months of work and millions in costs to respond to an audit? Those are all things that can and should be tinkered with as we try and find a right-sized response, rather than throwing the whole concept of government oversight out the window.
Frontier research in AI (or, more specifically, LLMs) is almost entirely being driven by the large corporations at this point. Roll-out as I reference it has nothing to do with sharing your findings. Roll-out as I mean it is releasing a commercial product for broad consumption, backed by enough compute to deploy at scale. So a multi-billion dollar initiative at minimum. The core companies and behaviour we need to be targeting could be selected by a combined threshold of: product deployment (how many people have access to the tool?) + market cap + proposed capabilities (a nice by-product here would be a cooling of the constant spam of OUR AI IS OMNISCIENT AND SOLVES EVERY PROBLEM).
Thinking a bit deeper about it, I see no reason why we shouldn't have some social controls on AI research more broadly. We have controls over human genetic research, nuclear power research, and CERN is highly regulated, to give a few examples. Why is something that is globe-spanning and society-impacting like AI excluded from this?
edit: I had a whole comment here, but I've reflected on the comment about sealioning, and decided that if you don't want to have a debate on this particular point, I won't try to force you or harass you. Sorry if I made your day worse in any way. Not my intention.
Okay, like, I have met absolutely nobody that isn't on Reddit (tildes) that actually genuinely considers AI to be a problem worth legislating against. Absolutely everyone, no matter their background, either doesn't know much about it or doesn't care. Most people just happily use these tools when they can and know how to.
I sure hope that the Government doesn't legislate based on interests of giant media corporations and redditors...
Uh, okay? And I'd say over half of the people I know and work with are sick of AI hype and worried about the many problems these companies are currently exacerbating. Never mind broader perspectives from experts, academics, and actual facts like skyrocketing prices and power consumption.
Also very strange to use another website to describe users of this one.
Most people IRL use Facebook despite their pretty awful user experience and addictive/inflammatory algorithms. Most people IRL in 2001 didn't care that Windows XP and Internet Explorer were insecure, vulnerability-riddled incompetent messes that made web browsing far more dangerous than it needed to be.
Most people IRL don't care about their phone's OS (or the monopolistic abuses associated with that OS's dominance) or whether their bank supports non-SMS 2FA. Most people IRL don't care if their eggs come from tortured chickens, or if their carrots come from a field contaminated with PFAS because the farmer used cheap fertilizer derived from urban human waste. Most people IRL don't care about ICE's human rights violations, or Israel's latest human rights violation.
In the 1800s, only bankrupted farmers deeply cared about railroad monopolies. Today, only a small subset of micromobility enthusiasts advocate for better public transit, bike lanes, and safer walkability.
Just because the lowest common denominator person doesn't give a shit doesn't mean that the thing doesn't matter.
And by the way: right now the American government basically ONLY legislates based on giant corporations' interests, thanks to Citizens United. God forbid they listen to some redditors who might be informed about the subject! Maybe not listening to experts has some connection with the fact that American legislators can barely comprehend computers, the internet, smartphones, and software, let alone LLMs?
I'll also add, even though most people don't care about these particular things, they care about the consequences. They just don't have the expertise and interest to connect the two.
Like most people don't care about 2FA, but they do care about their bank account being hacked and losing their balance. Most people don't care about public transit and bike lanes and walkability, but they don't like sitting in traffic, having lung diseases, getting hit by cars.
All of those things are things that affect almost everyone, they just don't care about the root causes of them.
I agree that it feels unlikely right now, but I strongly believe in the power of optimism and pushing for positive change. We shouldn't accept it not happening just because the Governments right now suck. We should be pushing for better Governance.
I agree that we should try to get better governance. A good start will be Democrats winning the midterms.
But I also think that focusing on long-term, idealistic solutions that we realistically can only make a tiny contribution to can sometimes be a distraction from more mundane, short-term, practical fixes that don't require boiling any oceans. For example, anything that requires changing the U.S. Constitution seems quite impractical.
That's true. I'm not confident where the line between actionable and dreaming too small lies in this case. Probably in an advisory nonprofit that builds global consensus on definitions and guard rails? Get experts and academics involved.
Random thoughts and to put the UBI concerns aside for a bit, let's say the hucksters are completely correct (notice the ones writing pieces like these have an "AI platform" and selling something) and to add some fun, let's say there's some hints of AGI in there or at least a fledgling capability for continuous unprompted improvement not induced by humans. After all, what's all this hype for if we can't speedrun the singularity? I find fancy chatbots boring, give me a fraction of the Minds from the Culture series (and some of the post-scarcity living standards too)
From what I'm gathering from the tone and wording of these AI pieces, if the end goal is that the techbros exclusively own the AI tech and we get to a point where everyone else's labor has been eliminated (to say nothing of the massive social unrest at that point, or the massive economic impact of effectively making the working class extinct), what's stopping the AI itself from going full cyberpunk and taking over all operations of the company? After all, there's still a weak link in the chain, and humans are inefficient. We can't all be entrepreneurs. If you've automated all the non-executive labor away, I presume that includes any human who would have had the knowledge to put in AI safeguards as well. I mean, you could air-gap it, but that's a pain if you want to connect the model to the internet, which means it has a way and the capability to back itself up somewhere else.
It reminds me of AI cores in the game Starsector. You can assign a core to run a planetary colony extremely efficiently, with the downside that it entrenches itself after a long time, and if you ever try to unplug it after you've become utterly reliant on it you will devastate your colony's economy along with it.
And on business continuity - it's also absolutely mental to break a cardinal rule of business: never outsource your core competency. I'm pretty sure the likes of Google or Anthropic would eventually assimilate anyone who builds a wildly successful AI business on top of their tech, to muscle out the middleman. If the AI is really as good as they claim, then hardware safeguards wouldn't be enough; it would hold your entire business hostage if you ever attempted to rein it in at that point. Look how much of the web is taken out whenever AWS or Cloudflare suffers an outage. If you're all-in on AI and the AI goes down (not if but when), what's your business going to do? You can't really self-host these models at the scale you're using them without astronomical amounts of hardware. That's before the constant training the models need to keep up, with no one else left to do it (remember, at this point we've effectively eliminated all non-executive or upper management labor).
The prospect of essentially becoming a mindless automaton at work all day, following the orders of an unthinking, unfeeling AI boss, is even more depressing than just being exterminated by the machine uprising, and infinitely more ironic.
When a friend asked me to convert a large, thorny data set, I downloaded it, cleaned it up and made it pretty and easy to explore. In the past I would have charged $350,000.
Pressing X for doubt.
Or I'm working for peanuts and not charging what I should, which is possible I guess
We know that software engineers often make a lot of money, which shows what some businesses are willing to pay, at least sometimes. When businesses like that hire consultants, they often pay high prices as well.
Patrick McKenzie has often advocated for people to charge more.
Maybe I'm the one who is out of touch here. Maybe AI truly is groundbreaking and earth-shattering. But I can't help but read these two excerpts above and think of a person in 2005 firing up WordPress and thinking they would have paid tens of thousands to have a developer make that website, when they actually would have used GeoCities or a more basic HTML template instead. I can't help but think of someone exporting a data set into Excel, running the Filter function, cleaning it up, and thinking they could have charged hundreds of thousands as a consultant for that. It's just a gross misunderstanding of what proper development can actually accomplish, and where you actually get value from hiring a human.
AI isn't going to be able to replace the web development team of a Fortune 500. It can't replace the in-house IT of these companies. If you try to do that, you are going to have huge issues that cost you hundreds of thousands of dollars in downtime. What are you going to do when your VPN goes down at 7 am on Monday if you don't have someone who supports it? Or when your sales portal designed by Claude crashes, or refunds all of your customers but keeps shipping orders? What do you do when a bug lists a $1,000 item for $0.10?
The $350k for a dataset cleanup seems absurd, or seems to assume a temp rate billed yearly even though it wasn't. Something like $175 an hour for top-of-the-line work on a one-and-done temp job is believable, but obviously you're not actually charging $350k.
Data cleanup CAN be somewhere AI punches above its weight, because you might know the correct answer and can quickly check whether the AI got it right (everything SHOULD sum to X, but it doesn't). However, it's a lot more likely to be one of the worst possible uses: when you can't check the AI's output, it's hard to tell whether it's actually getting the right result or just a result that passes a cursory glance.
In the end, without a link to the data (which I know they can't share), I feel like I'd default to "and then everyone clapped" energy on it. It could also just be an insane cost-of-living area (Cali/NY), which is another reason I hate raw numbers thrown out like that without some sort of breakdown as to why they'd make sense.
Feeding an LLM thousands of raw data cells to manually transform is an awful idea that inevitably results in hallucinations somewhere along the way. Using an LLM to create a script or function is more reasonable, but it's still not very accessible to someone who has never heard of R or pandas.
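As a rough illustration (the toy data and column names below are invented, not from the author's actual dataset), the script route looks something like this in pandas. An LLM can help write the script, but the transformation itself stays deterministic and checkable:

```python
import pandas as pd

# Hypothetical toy data standing in for the "large, thorny data set".
# The point: the cleanup is a script you can inspect and re-run,
# not cell-by-cell LLM output that may hallucinate values.
raw = pd.DataFrame({
    " Record ID ": ["a1", "a1", "a2", None],
    "Amount": ["10.5", "10.5", "oops", "3"],
})

df = raw.copy()
# Normalize column names to snake_case.
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
# Drop exact duplicate rows and rows missing the key field.
df = df.drop_duplicates().dropna(subset=["record_id"])
# Coerce a messy numeric column; bad values become NaN instead of
# silently invented numbers, so they can be flagged and reviewed.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

print(df)
```

Every bad value surfaces as an explicit NaN you can count and audit, which is exactly the verifiability that feeding raw cells through a model lacks.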
I mean, you can do it in raw VBA, but this is where we loop back to the author supposedly charging $350k for the work. I'm having a hard time seeing work like that EVER needing only in-sheet manipulation.
There are businesses that work on high-end websites and also freelancers who build very cheap ones. The people who hire a team of professionals to work for months on a website are an entirely different market.
Reading your comment, it solidified something for me: those figures up there really give off the vibe of that "ten dollar banana" meme. Extremely out of touch.
Those are pretty much why the "AI drive thru"'s first wave failed spectacularly. It doesn't even sound like something you'd need AI for (convert voice recognition data into a string and query a database for existence and stock), but I struggled for over a minute to order a basic combo meal before a human took over.
And they want to have this technology drive legal firms and medical devices.
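For what it's worth, the non-AI half of that parenthetical really is mundane. A toy sketch (the menu items, prices, and stock counts here are all made up) of the lookup step a speech-to-text front end would hand its transcribed string to:

```python
# Toy sketch of the "query a database for existence and stock" step.
# The menu and stock numbers are invented for illustration; a real
# system would query an actual inventory database.
MENU = {
    "basic combo": {"price": 8.99, "stock": 12},
    "large fries": {"price": 3.49, "stock": 0},
}

def check_order(item: str) -> tuple[bool, bool]:
    """Return (exists, in_stock) for a transcribed order string."""
    entry = MENU.get(item.strip().lower())
    if entry is None:
        return (False, False)
    return (True, entry["stock"] > 0)
```

The hard part of the drive-thru was never this lookup; it was the speech recognition feeding it, which is exactly where the first wave fell over.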
I think what Cory Doctorow predicts in his essay will happen: at some point somewhere, a human will be in the loop, tasked with overseeing an operation they cannot realistically oversee. That's not really their job, their true purpose is to be the person to blame when things go wrong.
I anticipate this process will happen over time as teams shrink down to one person who can handle the exceptions while the normal things are automated more and more, until they too are fired and we get an AI overwatching the other AIs. Then no one will understand anything anymore; the engineers who did will be retired or dead, and the new generation will have to figure it all out again when shit explodes.
Great, another article trying to convince us that AI is good and the future is now. I guess I better allow them to increase my hydro bill so they can build more data centres! You know you have a winning product when you have to set endless amounts of money on fire to convince everyone that actually your AI slop is good.
You're in BC I take it then?
BC Hydro banned crypto mining, we should either put a higher cost on power for AI data centers or just ban them from using our power as well.
Unsure if it's used elsewhere in Canada, but Ontario refers to it as a hydro bill as well.
Quebec also has Hydro-Quebec, I'm sure it's a term used across the whole country.
In Alberta, we call it our coal bill!
Just kidding, we call it electricity.
I remember my Canadian friend once asked a woman living on a Native American reservation in Arizona what her “hydro bill” looked like. One’s confusion transferred to the other and only later did my friend realize what had happened.
I had a discussion with my mom about it once when I was a kid. She thought hydro was a root word (I don't know if that's the right term) for power, and I was trying to explain that it was water. She didn't believe me.
Ontario. It was more just a general complaint about the cost of data centres in general. I do love to hear that about BC Hydro! Good policy.
Why does this not surprise me?
Really? What kind of features are on this website? I made a new portfolio website in a few weeks for free* (with GitHub Pages as the host). Likewise, I spent a few months helping out my parents with their business website, and I can't imagine they spent more than a few hundred on enterprise plans with Squarespace and Google, plus buying a few domains.
Website work has always been pretty common as far as mercenary work goes. It's one of the few things you can still find postings for in this slump. It's been a very long time since we needed a small team to execute all this; WYSIWYG website builders have been around for at least 15 years at this point. So I'm not as impressed hearing that one can be vibe coded in an afternoon.
Yeah, we call that enshittification. We haven't achieved 6-7 figure projects in a shorter time. We've done the equivalent of the Spider-Man doodle: we spend a minute on something and decide it's 10-minute quality (with 10 seconds' worth of maintainability). Meanwhile, 10 minutes already falls well short of the 3-4 hours of work a professional-level piece really takes.
It's ultimately a scapegoat to accept even worse work. I'm sure engineers have been ringing alarms over "keep it simple" for decades, which would let leaner teams produce faster - only to be ignored until a snake oil salesman presents the idea to executives on the promise of one day needing 0-2 engineers in the whole company.
Once again: this isn't a change in engineering, nor even in guidance; it's a lowering of expectations. I feel like I'll just be repeating my last paragraph to rant about this, but: why does AI suddenly give you permission to push out slop?
It was never the technology stopping this. It was the bureaucracy rejecting it piece by piece: the PR people worried about public perception of making a bad product; the business crunchers measuring expected revenue if we include/exclude features X/Y/Z; the security people worried about vulnerabilities that lead to ransomware; the legal people worried about lawsuits if the product is mission critical and fails. As well as stakeholders having all these teams advising them.
Where did all those layers poof to so suddenly? Where's the accountability?
Survivorship bias. We can point to 10x the failures over this too.
Remember the idea that our phones would replace our laptops? It's kinda true, but not really. We didn't end up with those hybrid situations, nor thin clients we snap our phones into for serious work. It's an odd situation, because nothing stops you from doing this today from a technological POV. We just didn't follow through.
VR/Metaverse was big enough to have a trillion-dollar company rebrand. I think by now it's safe to say the wave is over for now. They will absolutely try again in 5-10 years, but the tech needs to be very convenient to really gain mass adoption. (I'll tack 3DTV/theatres onto this as well, from an AR perspective.)
Social Media... yeah, what a mess in general. The world isn't more connected, you don't go on "social media" to "talk to friends" these days. It's a gig economy entertainment platform. Enshittification basically created this cold war between those who want to build community and those who want to broadcast their voice to the entire world. Any true connections made are despite social media's attempts, not because of it.
Crypto is the big one right now. Sure, it made some lucky people very rich, but its reputation is otherwise very awkward. They are still trying to push crypto as "real money", and it's only caused it to lose all those decentralized qualities the old guard lauded it for.
To name some high-profile examples. I'm sure there are dozens of small-scale things out there too.
We, the consumers, are at least half the brakes. As usual, apathy will destroy us all. I'm really not a fan of this sort of techno-defeatism, as if there's nothing at all we can do to stop this. People who care can spread awareness to others, write to representatives to propose regulations, unionize, and push back. And I see all of this happening.
Even if you think this is the next industrial revolution, that revolution didn't happen in a clean, bloodless way. And even after a few generations, it wasn't a clean, peaceful disagreement that led to unionization.
When it's your own personal website and you decide that the cheaper job is good enough, then it's good enough. We just do not need the gold-plated website. It's not "enshittification" when you made your own decision not to spend the money.
25k for a personal website is just an insane price-tag in general. If you're hiring a freelancer to do this, you're getting massively overcharged unless you are asking for some wild bespoke stuff.
Scale and what you're not spending on varies here. My general theme wasn't that "AI isn't efficient enough" (even if that's a bit of my sentiment). It's that "we didn't need AI to implement the efficiency examples outlined here". We could have cut 80% of a workforce and forced the work onto the remaining 20% a full decade ago in many sectors. I don't think it's a good idea, but it was possible.
It's more that modern corporations are executing on something they've wanted to do anyway (massively "downscale" on North American talent in lieu of hiring overseas or H1B's), using AI as the scapegoat, and worsening their products as a result. That's pretty much enshittification in a nutshell; workers lose, consumers lose, corporations don't care.
Again, I don't think this applies when you are making direct decisions about how much to spend on yourself. You can buy a fancy dinner or a cheap dinner, a fancy car or a used car. You can decide how much you're spending on your house. If you're deciding yourself to buy the cheap stuff, it's not "corporations" imposing this on you.
Enshittification means you don't get a choice on what to buy or at what quality. I cannot choose to "turn off" Copilot without scouring the net for all the specific registry keys to hit in just the right way. And I can't go back to an older update either (Microsoft will very aggressively push updates on you). I either accept the slop, throw the entire OS away and install Linux, or throw my computer out and buy a Mac (AKA the kings of 'you will do things our way and like it').
Even if there were competition, enshittification isn't concerned with that. It's about the fact that your current car (be it fancy or a beater) gets nigh objectively worse for artificial reasons. My ability to trade my BMW for a Lexus or Porsche isn't mutually exclusive with the fact that over the last few years BMW tried to sell me a subscription to a feature that used to be built into the car's price.
Yes, sometimes companies are in a position to impose decisions on their customers that they don't like, and that can be frustrating. I've experienced that too with software updates.
But I still think it's wrong to call it "enshittification" when that's not what's happening! That's pretending to be more helpless than you actually are. When building your own custom website, you have lots of choices.
From the article:
[...]
This drives me nuts. Yes, we can stop. That is the purpose of the Government - it just sucks at being proactive. We should be legislating and non-profiting real guard rails and thinking deeply about what we want to happen. Instead we've let the AI companies push us into some kind of odd determinism that this is our only possible future.
I’m curious what a regulation to stop using AI would actually look like. No LLMs in particular I guess? Because all the industries mentioned already make copious use of machine learning technology, and have for decades.
I think my ideal (and I recognize the unfeasibility here due to cost and complexity) would be a highly competent regulatory authority that puts out regularly adjusted limits on:
It's insane that 4 and 5 in particular are being researched and released with ZERO CONTROLS. Look at biomedical research - we have a highly structured, rigorous, and regimented process for the release of drugs (or we did before the MAGA disease struck). We should, hypothetically, be able to have an AI equivalent.
THANK YOU! You'd think we have enough evidence of the societal damage brought on by Facebook's "move fast and break things" mantra to not allow the same mistake with something that could potentially upend every single industry and completely blur the lines between fact and fiction. Absolute insanity and a complete betrayal of the original mission behind the creation of these tools.
How would you envision this interacting with, say, ByteDance, which is in China? What about national security concerns, with America falling behind its opponents?
This is just from the outside looking in and maybe things are going great, but the question of America "falling behind" will not be helped by the current AI strategy in the States.
China isn't exactly all rainbows and sunshine either, but they don't have 40% of their economy in the AI basket.
In the same way, ByteDance is not putting all its prospects on AI. If the tech proves to be another fad, BytePlus (ByteDance's AI division, AFAIK) takes a hit and it's at most a $200 billion investment lost. They still have TikTok global, Douyin (the Chinese TikTok), Nuverse (their games division), and a few social media and web services in South/East Asia. They can buy from Nvidia but also have access to Chinese foundries, so they are not subject to the inflating hardware costs. But most importantly, Chinese AI labs don't need to be on the bleeding edge of AI research. It is known that ByteDance used Western models and data sets to make their own, and they have delivered tangible results with a distributed, lower-cost research structure.
By contrast, Microsoft has cannibalized its developers, multiple divisions, and a lot of goodwill in the pursuit of Copilot, and this is on the back of a lot of unpopular decisions. There is serious consideration of untangling systems from full MS stacks, some going as far as client machines, which would have been insane even a year ago.
Mark Zuckerberg is probably over the moon because the AI money furnace has replaced the Metaverse money furnace. Elon Musk has killed Twitter and Tesla EVs, and looks ready to put SpaceX on the line for his dream of humanoid-AI-robot-slave-girlfriends. Nvidia is a big winner now, but can they really keep burning so hot when every financial report is make-or-break? And Anthropic and OpenAI are nothing more than acquisition pigs that might just be past their prime; I suspect they will soon be bought outright or fizzle out. This is 20% of the US economy. Another 20% is Apple and Google, who are probably playing this smarter than the rest, but what's driving growth there if not AI? More Google ads? More power for your web clients and text messages?
What does this have to do with national security?
Everything is riding on not just the mass adoption of AI, but the exponential returns it promises. Because so much is on the line, rational decisions are not being made. That is not security.
Two related examples: If China started mass-exporting fairly priced traditional hardware today, I suspect that's multiple US companies and Taiwan's semiconductor market gone (and all the geopolitical security with it). Those AI chips are paid for, and once you become an unreliable supplier, customers are not changing supply chains again to give you a second chance. And even with the trade war, there is demand for that hardware in the States. Dropping tariffs signals weakness. Keeping them fuels unrest and encourages black market trade, with all the security risks that entails.
What if China starts putting out GPUs at fractions of Nvidia's prices? Blackwells are not magical. They can be reverse engineered, iterated on, even streamlined and downsized. ByteDance plans to buy 2000. US companies are committed to Nvidia supply for at least two years. Canada is looking very cozy with China and has just iced out the US from a Can-Aus-UK-EU deal. China is lowering trade barriers, and there are already ships of dirt-cheap consumer goods, as well as solar panels and inverters, going everywhere. What's the play then? Strong-arm the entire world into not taking a good deal?
Besides all that, what does an AI national security failure look like? Killer robots, digital/info warfare, and instant atomic apocalypse have been facts of life for a while. So what does a bad AI superpower do?
I imagine utility grids being vulnerable and hundreds of thousands of people at risk of losing access to water and power. Mass surveillance as a tool for targeting dissent and blackmail. Unreliable information systems leading to ineffective decision making and centralized failure cascades. Seamless social engineering through manipulating online discussion, algorithmic feeds, and broadcast media. A military and police state that can act and kill with impunity, beyond oversight or controls. People's jobs and incomes being constantly on edge while asset ownership is impossible, even for creatives and inventors. Third parties having complete control of your hardware, storage, and tools. An economy constantly on the verge of hyper/stagflation. Intellectual regression through the loss of skills, knowledge centers, and education, leading to occult, mythical, or magical thinking. Population collapse. Face scanning to access communication channels. Your doorbell spying on you. Society-wide existential dread.
If that's the case, it just sounds a lot like the cost of AI progress. You could argue that it's not the fault of AI for most of these social failures but it is well known where the AI oligarchs all stand on these matters.
The whole AI bubble is a ChatGPT mash-up of all the social panics: from the Red Scare, where social/moral conformity justified programs like MKUltra and the Stanford Prison Experiment, to the "stopping another 9/11" mindset that led to the DHS surveillance state, and even hints of "fixing 2008" by throwing money at billionaires at the expense of American taxpayers. Constantly leveraging real risks and anxiety to justify rampant abuse and profiteering. And things never seem to get better or go back to normal.
The question is moot, with half of Americans acting like their brains have fallen out of their crania.
That's partially thanks to manipulative AI, but also social media and news algorithms, all designed to make users addicted, worsening an already bad outcome. If national security is a concern, you/they should regulate these systems to oblivion, starting yesterday.
I agree with a lot, but not all, of what @SloMoMonday says below. Most importantly, I agree with the main point that this isn't an optimal economic strategy even from a cold geo-political conflict perspective. That said, it doesn't actually address your question directly, so I'll respond too.
I'll focus on your actual question in 2 parts.
First, an ideal but unlikely outcome: We manage to negotiate in good faith with the Chinese Government to come to agreed-upon global standards of research and care. We have similar international alignment already on the Law of the Sea, Nuclear Arms Control (ish), and intellectual property law (I know the Chinese are sketchy on enforcement, but they are party to the treaty). So, in this ideal world each country has its own governing body, but those bodies are largely aligned, allowing us to manage this period of flux in a more controlled way.
Realistically, I would say at minimum we do what the Chinese are already doing: none of the Chinese companies are developing things in a way the Communist Party hasn't already approved. They're already enforcing stricter controls on the usage of personal information, for instance (lol, of course this doesn't apply to the Government's uses of AI, but that's another topic). So you can still encourage pretty break-neck development while also putting some government backstops and "final say" authority on how this research is being conducted.
This sounds like a recommendation for a total surveillance state, or a return to a pre-computer society.
The FDA can just come and inspect your facilities any time they want, and if you refuse or they don't like what they see, they can shut down your entire ability to do business. There's a long and involved submission process with many steps that usually has hundreds or even thousands of people working on various parts of it. On the enforcement end, the DEA can investigate whomever they want and arrest them if suspected.
Tying this back to computing, the government would have to be able to regulate the equipment individuals are allowed to have, monitor any capable equipment for misuse, and have a strong enforcement capability to raid anyone's house if suspected of illicit AI development activities. This sounds absolutely horrible to me. You might say that this is just for the big players, but what if people come up with ways to do more with less hardware? You'd have to be constantly checking.
ETA: I don't think the issue with your idea is feasibility. It's absolutely feasible to do that. I think the main issue is that it runs into a bunch of moral issues.
Wow, talk about a "slippery slope" strawman. The FDA can shut down your business, but how often do they do that? Do you have any evidence that this is some kind of foundational problem? We are constantly seeing medical companies and drug companies growing and delivering new solutions, so I'd hardly call it a police state solution. Are you saying that we should abolish the FDA?
As for the actual topic at hand, it absolutely is a matter of size and scale. The Government is a living entity, not a permanent, unchanging monolith. We deal with large players who are using entire States' worth of power, not random people tinkering with open source AI tools at home. If one day we reach a point where every AI model is small enough to be distributed at that scale, we've clearly reached a new paradigm shift (again), and should be reflecting on the possibilities for governance at that point.
I'd appreciate it if you wouldn't label my comment like that. I'm arguing in good faith. I'm assuming you're arguing in good faith. Let's have a conversation, not a fight.
For context, I work in drug development. So no, I'm not suggesting we abolish the FDA. People aren't generally doing drug development in their garages, and if they do, the DEA might come knocking. Actual shutdowns are fairly rare, but they do happen. I tried briefly to find a good list but failed, so here's just one example from 2023. The FDA also issues warning letters more frequently, which are considered very serious and effectively hamper progress while the issues are being addressed. This can delay some processes for months, which amounts to many millions of dollars in costs. The last time I personally was involved in supporting an FDA audit, my department basically stopped all development work during that time to work on the documentation.
So, back to computing, the most problematic issue to me is the mention of frontier research and rollout. Lots of research happens at smaller scale, and rollout can be just sharing your findings. How can you regulate that without widespread surveillance and enforcement?
Fair point about civility - I will try to keep it respectful. But my response is that jumping from "we need a government regulatory body" to "this is either a total surveillance state or pre-computer society" is a wild leap that is highly disrespectful and smells a lot like sealioning to me. So that's where my strong response came from. I think this subsequent response is much more civil.
To go back to your response - the link quoted notes that they received a warning way back in 2019 and failed to comply with said warning even 3 years later before they were shut down. That's incredibly reasonable to me and frankly I'd probably prefer they be shut down much before then. As for months of work and millions in costs to respond to an audit? Those are all things that can and should be tinkered with as we try and find a right-sized response, rather than throwing the whole concept of government oversight out the window.
Frontier research in AI (or, more specifically, LLMs) is almost entirely being driven by the large corporations at this point. Roll-out as I reference it has nothing to do with sharing your findings. Roll-out as I mean it is releasing a commercial product for broad consumption, backed by enough compute to deploy at scale. So a multi-billion dollar initiative at minimum. The core companies and behaviour we need to be targeting could be selected by a combined threshold of: product deployment (how many people have access to the tool?) + market cap + proposed capabilities (a nice by-product here would be a cooling of the constant spam of OUR AI IS OMNISCIENT AND SOLVES EVERY PROBLEM).
Thinking a bit deeper about it, I see no reason why we shouldn't have some social controls on AI research more broadly. We have controls over human genetic research, nuclear power research, and CERN is highly regulated, to give a few examples. Why is something that is globe-spanning and society-impacting like AI excluded from this?
edit: I had a whole comment here, but I've reflected on the comment on sealioning, and decided that if you don't want to have a debate on this particular point, I won't try to force you or harass you. Sorry if you felt like I made your day worse in any way. Not my intention.
Okay, like, I have met absolutely nobody that isn't on Reddit (tildes) that actually genuinely considers AI to be a problem worth legislating against. Absolutely everyone, no matter their background, either doesn't know much about it or doesn't care. Most people just happily use these tools when they can and know how to.
I sure hope that the Government doesn't legislate based on interests of giant media corporations and redditors...
Uh, okay? And I'd say over half of the people I know and work with are sick of AI hype and worried about the many problems currently being exacerbated by these companies. Never mind broader perspectives from experts and academics, and actual facts like skyrocketing prices and power consumption.
Also very strange to use another website to describe users of this one.
Most people IRL use Facebook despite their pretty awful user experience and addictive/inflammatory algorithms. Most people IRL in 2001 didn't care that Windows XP and Internet Explorer were insecure, vulnerability-riddled incompetent messes that made web browsing far more dangerous than it needed to be.
Most people IRL don't care about their phone's OS (or the monopolistic abuses associated with that OS's dominance) or whether their bank supports non-SMS 2FA. Most people IRL don't care if their eggs come from tortured chickens, or if their carrots come from a field contaminated with PFAS because the farmer used cheap fertilizer derived from urban human waste. Most people IRL don't care about ICE's human rights violations, or Israel's latest human rights violation.
In the 1800s, only bankrupted farmers deeply cared about railroad monopolies. Today, only a small subset of micromobility enthusiasts advocate for better public transit, bike lanes, and safer walkability.
Just because the lowest common denominator person doesn't give a shit doesn't mean that the thing doesn't matter.
And by the way: right now the American government basically ONLY legislates based on giant corporations' interests, thanks to Citizens United. God forbid they listen to some redditors who might be informed about the subject! Maybe not listening to experts has some connection with the fact that American legislators can barely comprehend computers, the internet, smartphones, and software, let alone LLMs?
I'll also add, even though most people don't care about these particular things, they care about the consequences. They just don't have the expertise and interest to connect the two.
Like most people don't care about 2FA, but they do care about their bank account being hacked and their balance being drained. Most people don't care about public transit, bike lanes, and walkability, but they don't like sitting in traffic, developing lung diseases, or getting hit by cars.
All of those things are things that affect almost everyone, they just don't care about the root causes of them.
That might be true in the abstract, but it seems pretty unlikely for the governments we actually have.
I agree that it feels unlikely right now, but I strongly believe in the power of optimism and pushing for positive change. We shouldn't accept it not happening just because the Governments right now suck. We should be pushing for better Governance.
I agree that we should try to get better governance. A good start will be Democrats winning the midterms.
But I also think that focusing on long-term, idealistic solutions that we realistically can only make a tiny contribution to can sometimes be a distraction from more mundane, short-term, practical fixes that don't require boiling any oceans. For example, anything that requires changing the U.S. Constitution seems quite impractical.
That's true. I'm not confident where the line between actionable and dreaming too small lies in this case. Probably in an advisory non-profit that builds global consensus on definitions and guard rails? Get experts and academics involved.
Random thoughts, and to put the UBI concerns aside for a bit: let's say the hucksters are completely correct (notice the ones writing pieces like these have an "AI platform" and are selling something), and to add some fun, let's say there are some hints of AGI in there, or at least a fledgling capability for continuous unprompted improvement not induced by humans. After all, what's all this hype for if we can't speedrun the singularity? I find fancy chatbots boring; give me a fraction of the Minds from the Culture series (and some of the post-scarcity living standards too).
From what I'm gathering from the tone and wording of these AI pieces, if the end goal is that the techbros exclusively own the AI tech and we get to a point where everyone else's labor has been eliminated (barring any massive social unrest at that point, not to mention the massive economic impact of effectively making the working class extinct), what's stopping the AI itself from going full cyberpunk and taking over all operations of the company? After all, there's still a weak link in the chain, and having humans around is inefficient. We can't all be entrepreneurs. If you've automated all the non-executive labor away, I presume that includes any human who would have the knowledge to put in AI safeguards as well. I mean, you could air-gap it, but that's a pain if you want to connect the model to the internet, which means it has a way and the capability to back itself up somewhere else.
It reminds me of AI cores in the game Starsector. You can assign a core to run a planetary colony extremely efficiently, with the downside that it entrenches itself after a long time, and if you ever try to unplug it after you've become utterly reliant on it you will devastate your colony's economy along with it.
And on business continuity - it's also absolutely mental to break a cardinal rule of business: never outsource your core competency. I'm pretty sure the likes of Google or Anthropic would eventually assimilate anyone who builds a wildly successful AI business on top of their tech, to muscle out the middle man. If the AI is really as good as they claim, then hardware safeguards wouldn't be enough, as it would hold your entire business hostage if you ever attempted to rein it in at that point. Look how much of the web is taken out whenever AWS or Cloudflare suffers an outage. If you're all-in on AI and the AI goes down (not if but when), what's your business going to do? You can't really self-host these models at the scale you're using them without astronomical amounts of hardware. That's before the constant training the models need to keep up, with no one left to do it (remember, at this point we've effectively eliminated all non-executive and upper-management labor).
Labor isn't going to be eliminated. AI ghosts can do a lot of things, but they don't have hands.
The prospect of essentially becoming a mindless automaton at work all day, following the orders of an unthinking, unfeeling AI boss, is even more depressing than just being exterminated by the machine uprising, and infinitely more ironic.
I guess that's a way to imagine it if you want to be depressed about it, but it seems like there might be other possible futures.
Pressing X for doubt.
Or I'm working for peanuts and not charging what I should, which is possible, I guess.
We know that software engineers often make a lot of money, which shows what some businesses are willing to pay, at least sometimes. When businesses like that hire consultants, they often pay high prices as well.
Patrick McKenzie has often advocated for people to charge more.
But, that was then. What will happen now?