Experienced this firsthand the other day. ResetEra, a large progressive forum I've been on since it was originally NeoGAF in 2007, has been trending strongly in this populist anti-AI direction -- thanks in part to moderators distorting their policies to shut down more informed or educational discussion while turning a blind eye to ragebait and witchhunts, which drives away experts while emboldening absolutists. It reached a new low when threads on the attacks on Altman and that Indiana politician filled up with comments glorifying the violence, calling for more, and attacking anyone who disagreed. I reported the thread the day it went up, but no action was taken for days as the murder-fantasies went on page after page. When I posted in the site's meta-discussion thread calling out the mods for tolerating violent rhetoric, they permabanned me instead.
It's such a frustrating dynamic, because the core problem with AI is not the technology itself but capitalism writ large. Generative AI as a technology is both conceptually fascinating and value-neutral. If people had no fear of becoming destitute or perverse incentives drowning out creative works, it would just be another creative tool on par with the synthesizer, allowing people to extend their labor and explore their creativity more freely (that was the vibe in the early days of AI Dungeon and DALL-E 2, before the ChatGPT-driven rush to commercialization). Movements like this at best throw the baby out with the bathwater, and at worst discredit the legitimate grievances with misinformation and inchoate violence, handing control of this technology firmly to megacorporate techno-fascists. That's why critics should be the most engaged with the space: so they understand what they're criticizing and can better recognize both how to regulate it effectively and how to turn aspects of it to the advantage of regular people (open source being the biggest opportunity here). But instead, too many people in left-leaning spaces treat anyone who says anything less than "fuck this devilry and fuck anyone who uses it" as the equivalent of a Silicon Valley techbro chud. Just one more divisive kneejerk culture war.
If people had no fear of becoming destitute or perverse incentives drowning out creative works, it would just be another creative tool on par with the synthesizer, allowing people to extend their labor and explore their creativity more freely
This is certainly not the only issue people have with generative AI. It fundamentally disgusts and puts off many people, including myself. It's the opposite of creativity and the opposite of humanity, which is an essential component of art. There are core issues with the technology itself; capitalism just greatly exacerbates them. Even absent capitalism, AI "art" would still be mindless regurgitation of training data, and LLMs would still make things up.
With regard to the violence being celebrated: this is just the inevitable outcome of these billionaires systematically working to destroy the lives of millions while supporting (both directly and indirectly) politicians and policies that are destabilizing society. I'm far more concerned about the millions who could and will die as a result of their actions -- look at the cuts to USAID alone. Or the latest war in Iran. Or Gaza. The list goes on.
These billionaires play a direct role in making that happen, and I'm far more concerned about that violence and destruction of lives than I am about one person retaliating against someone responsible for it. When you push people too far, this result is inevitable. Historically, it's also been one of the only ways to have an effect on the wealthy and powerful, so this is nothing new. Our country was founded on it, after all. Pretty much every successful progressive movement (among others) has relied on violence as a tool to achieve its goals, because there's usually no other choice. Is it good that things have gone this far? No, but it's not shocking to me at all. Nor is it shocking that people hate those who use AI, since they're directly and indirectly funding said billionaires -- collaborators, perhaps.
To be frank, this is not at all surprising; in my experience it's the direction practically every progressive space online has taken, under the guise of "well what's next, you want literal nazis colonizing our discussion?" Whatever the current progressive stance is must be followed, or you get shunned -- or at minimum it causes a huge drama and rift in the community. AI just happens to be the topic where you disagree with that stance. Frankly, it's made some unrelated hobby websites incredibly insufferable (thankfully most can still be enjoyed, just without participating in the forums).
I have a hard time generating empathy for Sam Altman; perhaps he should pour a billion dollars into research on replacing me with a computer that could be more effective at the job.
I do think this hatred has been misdirected at people who could realistically be harmed by it, though. I randomly ran into this artist (xcancel mirror) who lost access to a freelancing site because people were incorrectly claiming that the artist had used AI to generate their commissions. Directing attention and anger at CEOs, the rise of right-wing nationalism, the economic systems trapping people in poverty, etc. is great; aiming it at people who stand to lose everything is not.
Loved this quote:
The telescope, whose invention allowed astronomers to gaze at the moons of Jupiter, did not displace laborers in large numbers—instead, it enabled us to perform new and previously unimaginable tasks. This contrasts with the arrival of the power loom, which replaced hand-loom weavers performing existing tasks and therefore prompted opposition as weavers found their incomes threatened. Thus, it stands to reason that when technologies take the form of capital that replaces workers, they are more likely to be resisted.
I'm not sure why anyone needs to spend so much time and effort building an "AI policy" when the answer is simple: give working people a way forward. People think cryptocurrency is dumb, for instance, but it didn't garner significant political opposition until it started to spike GPU and electricity prices. LLMs are doing that on a whole new order of magnitude. Of course people will oppose a policy that will take their job, make remaining jobs more miserable, and drive up the cost of living. Until AI companies meaningfully address that concern, they're going to grow more and more unpopular.
Politicians talk all the time about "creating jobs" and sometimes this happens, but at scale, creating new jobs is apparently easier said than done and people continue to worry.
In 2026, the politics of AI has a new meta: “caring a lot about AI” is no longer correlated with “knowing a lot about AI.” AI is rising in salience faster than any other issue among US voters. Politicians gearing up for the 2026 midterms and 2028 primaries won’t lag far behind. That means AI policy is no longer the remit of a few wonky technocrats. From now until forever, most people regulating, protesting, and talking about AI will not be interested in AI per se, but rather how it impacts their preexisting belief systems and political agendas. These forces are stronger, more diffuse, and more volatile than we have seen in AI policy before. And the curve is just about to shoot straight up.
I define AI populism as a worldview in which AI is viewed not only as a normal technology but as an elite political project to be resisted. It regards AI as a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public to achieve sinister aims like “capitalist efficiency” (layoffs) and “population management” (surveillance). AI populists don’t really care whether ChatGPT is personally useful, or if Waymos eke out some safety gains: AI’s utility as a tool is immaterial relative to the unwelcome societal change it represents.
Among the public, AI populism shows up as individual attempts to block AI encroachment; for example, data center NIMBYism, AI witchhunts among creatives, and in the extreme, assassination attempts like what happened to Altman this week.
[...]
What seems likely is that the anti-elite and nihilistic attitudes that have dominated US political culture in the last few years are transmuting into anger at AI billionaires. Young people are particularly incensed. Gen Z already grew up in a world that they felt was shrinking, where grift and shitcoins and sports gambling looked like the only paths up. Now, they’re being told AI is the reason they can’t get a job—and potentially never will. Just as the United Healthcare CEO seemed like a justified target to many disillusioned and radicalized young people, so will AI executives be to many more.