I have a hard time generating empathy for Sam Altman; perhaps he should pour a billion dollars into research on replacing me with a computer that could be more effective at the job.
I do think this hatred has been misdirected at people who could realistically be harmed by it, though. I randomly ran into this artist (xcancel mirror) who lost access to a freelancing site because people were incorrectly claiming that the artist had used AI to generate their commissions. Directing attention and hatred toward CEOs, the rise of right-wing nationalism, the economic systems trapping people in poverty, etc. is great; aiming it at people who stand to lose everything is not.
Experienced this firsthand the other day. ResetEra, a large progressive forum I've been on since it was originally NeoGAF in 2007, has been trending strongly in this populist anti-AI direction -- thanks in part to moderators distorting their policies to shut down more informed or educational discussion while turning a blind eye to ragebait and witchhunts, which drives away experts while emboldening absolutists. It reached a new low the other day when posts on the attacks on Altman and that Indiana politician became filled with comments glorifying the violence, calling for more, and attacking anyone who disagreed. I reported the post the day it went up, but no action was taken for days as the murder-fantasies went on for page after page. When I posted in the site's meta-discussion thread calling out the mods for tolerating violent rhetoric, they permabanned me instead.
It's such a frustrating dynamic, because the core problem with AI is not the technology itself, but capitalism writ large. Generative AI as a technology is both conceptually fascinating and value-neutral. If people had no fear of becoming destitute or perverse incentives drowning out creative works, it would just be another creative tool on par with the synthesizer, allowing people to extend their labor and explore their creativity more freely (that was the vibe in the early days of AI Dungeon and DALL-E 2, before the ChatGPT-driven rush to commercialization). Movements like this at best throw the baby out with the bathwater, and at worst discredit the legitimate grievances with misinformation and inchoate violence, turning control of this technology firmly over to megacorporate techno-fascists. That's why critics should be the most engaged with the space: so they understand what they're criticizing and can better recognize both how to effectively regulate it and how to turn aspects of it to the advantage of regular people (open source being the biggest opportunity here). But instead, too many people in left-leaning spaces treat anyone who says anything less than "fuck this devilry and fuck anyone who uses it" as a Silicon Valley techbro chud. Just one more divisive kneejerk culture war.
If people had no fear of becoming destitute or perverse incentives drowning out creative works, it would just be another creative tool on par with the synthesizer, allowing people to extend their labor and explore their creativity more freely
This is certainly not the only issue people have with generative AI. It fundamentally disgusts and puts off many people, including myself. It's the opposite of creativity and the opposite of humanity, which is an essential component of art. There are core issues with the technology itself; these issues are just greatly exacerbated by capitalism. AI "art" would still be mindless regurgitation of training data. LLMs would still make things up.
With regards to the violence being celebrated, this is just the inevitable outcome of these billionaires systematically working to destroy the lives of millions and supporting (both directly and indirectly) politicians and policies that are destabilizing society. I'm far more concerned about the millions that could/will die as a result of their actions - look at the cuts to USAID alone. Or the latest war in Iran. Or Gaza. The list goes on.
These billionaires play a direct role in making that happen, and I'm far more concerned about that violence and destruction of lives than I am about one person retaliating against someone responsible for it. When you push people too far, this result is inevitable. Historically, it's also been one of the only ways to have an effect on the wealthy and powerful, so this is nothing new. Our country was founded on it, after all. Pretty much every successful progressive movement (amongst others) has relied on violence as a tool to achieve their goals, because there's usually no other choice. Is it good that things have gone this far? No, but it's not shocking to me at all. Nor is it shocking to me that people hate those who use AI, since they're directly and indirectly funding said billionaires - collaborators, perhaps.
Pretty much every successful progressive movement (amongst others) has relied on violence as a tool to achieve their goals, because there's usually no other choice.
I disagree with much of what you said, but with this especially. The actual data suggests that nonviolent paths are more than twice as successful. And when we're talking about cultural change and not a complete overthrow of government, the rate of nonviolent success is even higher!
It's rather disingenuous to phrase it as "actual data" as though there's some universally recognized or agreed upon data points that cleanly define and disentangle violent and non-violent activities.
Here's a bit of that 'actual data', since, conveniently, the person who came up with the framework for that data sells it in a book - one I didn't pay for, and one I suspect most others here wouldn't want to pay for just to verify the data.
The following quotes are from the book, not from Wikipedia, but I wanted to link the Wikipedia page as a baseline for people to read about one of the data points without having to pay for it.
Fretilin’s armed wing, the Forças Armadas de Libertação Nacional de Timor-Leste (Falintil), led the early resistance to Indonesian occupation forces in the form of conventional and guerrilla warfare. Using weapons left behind by Portuguese troops, Falintil forces waged armed struggle from East Timor’s mountainous jungle region. But Falintil would not win the day. Despite some early successes, by 1980 Indonesia’s brutal counterinsurgency campaign had decimated the armed resistance along with nearly one third of the East Timorese population.
So I guess this must count as a data point against the success of violence.
Yet nearly two decades later, a nonviolent resistance movement helped to successfully remove Indonesian troops from East Timor and win independence for the annexed territory.
So two decades later, nonviolence succeeded...
Suharto was ousted in 1998 after an economic crisis and mass popular uprising, and Indonesia’s new leader, B. J. Habibie, quickly pushed through a series of political and economic reforms designed to restore stability and international credibility to the country. There was tremendous international pressure on Habibie to resolve the East Timor issue, which had become a diplomatic embarrassment, not to mention a huge drain on Indonesia’s budget.
So the Indonesian president (and he was basically a dictator) who had controlled Indonesia for 31 years, was 77 years old, and was embroiled in issues well beyond East Timor was finally defeated by nonviolence. I guess the thousands of people who died fighting in the resistance died for nothing, and if they had only resisted peacefully from the beginning, they would have been able to achieve success...
Although a small number of Falintil guerrillas (whose targets had been strictly military) kept their weapons until the very end, it was not their violent resistance that liberated the territory from Indonesian occupation. As one Clandestine Front member explained, “The Falintil was an important symbol of resistance and their presence in the mountains helped boost morale, but nonviolent struggle ultimately allowed us to achieve victory. The whole population fought for independence, even Indonesians, and this was decisive.”
Just to reiterate, that quote is from the book. So nonviolence won, but violent resistance was an important symbol for the resistance overall. Nonviolence gets the point, though; there are no assists in this data.
Now the following is pulled from the Wikipedia page linked above
The US played a crucial role in supplying weapons to Indonesia. A week after the invasion of East Timor the National Security Council prepared a detailed analysis of the Indonesian military units involved and the US equipment they used.
Go to the page to check out the equipment if you wish, but the critical point I'd like to highlight here is that violence won the fight to begin with. The Indonesian dictator didn't take over East Timor with beautiful and persuasive rhetoric, but with the backing of the US and violent force.
Additionally, the data used in this book starts in the 20th century, and I discovered this article published in SAGE
that responds to a critical response to the 'actual data' - an interesting perspective, since you get a broader picture of what is contentious about that data without the fixed framing of the direct response. The critical response it addresses reaches back to the 19th century, and the article discusses the flaws in using data from that period. What I find enlightening, though, is that the critical response claims that if you include 19th-century data, violent movements have more success than nonviolent ones. Again, the article covers the flaws of using data from that time period, and those flaws may be the reason the Erica Chenoweth piece started its dataset where it did, but it's also convenient that the dataset happens to start at a point that favors their argument.
Here's another quote from the Chenoweth 'actual data' book
Our central contention is that nonviolent campaigns have a participation advantage over violent insurgencies, which is an important factor in determining campaign outcomes. The moral, physical, informational, and commitment barriers to participation are much lower for nonviolent resistance than for violent insurgency. Higher levels of participation contribute to a number of mechanisms necessary for success, including enhanced resilience, higher probabilities of tactical innovation, expanded civic disruption (thereby raising the costs to the regime of maintaining the status quo), and loyalty shifts involving the opponent’s erstwhile supporters, including members of the security forces.
I find that to be a fairly reasonable argument. What it doesn't account for, however, is the myriad of circumstances and motivations that lead to popular support; there's no reconciliation of how violence can play a part in that. So is it a knock against the success of violence if a cult in Waco, Texas fails to hold onto its freedom? Perhaps they would have been more successful with a nonviolent approach. Of course I'm intentionally choosing an incident involving a relatively small group of people that failed to achieve what they wanted in a violent encounter, because I think it highlights the flaws in deciding which incidents count. I don't think this incident was counted, and I didn't dig into the book to find out; I picked it on my own.
I think the idea, when people compare violent and nonviolent activities, is that there may be some similar level of participation, even if that's not the normal case. In essence, the sentimental force behind the violent movements also exists behind the nonviolent ones.
This is a good post and I think your core point - that much depends on where we set boundaries and all analysis is inherently biased by where we observe from - is good. It is not, however, disingenuous to refer to a flawed dataset (as all datasets are!) as "actual data" when the alternative is 100% vibes. (I also laughed at you calling out how 'convenient' it was that the author's data is... in their book).
Here's where I think it's disingenuous, and I don't mean it to be an attack on you, but it comes across as a cudgel of 'science' or 'fact'. But the data in this case is just made up by a few people.
To be fair, all data is on some level just made up, of course. If you are tallying points in a basketball game, the ball going through the hoop counts for points; that was part of the design of the game, which also makes it universally recognized on some level. Tallying the data of scoring points in a basketball game is consequently straightforward, and it would be uncontroversial to present it as 'actual data'. But then there's someone who passes the ball to the person who puts it through the basket, and that person gets an "assist". It's data, but it's more made up, because of how it's defined and by whom. The NBA counts assists differently than other leagues do, or even than the NBA itself has historically. Even so, it's still a widely recognized stat: by context, someone can often determine which definition of assist goes with the data, and the rough definitions are widely known, at least among people who follow the sport.
There could be a dataset for players who pick their noses on the court, but no one is tracking that. My point is that how you present what qualifies as 'actual data' matters. If I say 'the data shows the team that picks their noses on the court most wins', then go selectively looking through games, choose for myself what counts as 'nose picking', and present the result as 'actual data', it is in some sense true that it's data - bad data - and, comical premise aside, I'm giving it more authority than it actually has by presenting it as 'actual data' when I'm the only one tracking it. It's one guy (me) who selectively went through games and came up with his own criteria and judgments; it's not a wide group of professionals in the NBA or basketball scene who defined a 'nose picking' stat.
I think on a 'data' level, that's similar to an anecdote. What makes an anecdote less useful in certain contexts is that it's one person's experience, or one single event, that isn't necessarily representative of all events. I do believe the book had two authors, and perhaps more people were involved than that, but at the scale of what we're talking about here, I think it takes more than a few people agreeing on definitions for the subject matter to carry real weight. It's not data to be used as a cudgel against philosophical arguments or anecdotal experiences. Being more widely recognized and accepted is where I would draw the line on presenting it more authoritatively.
but it comes across as a cudgel of 'science' or 'fact'
That was the intent! Claims like "nearly every successful progressive movement resorted to violence" need to be supported, because otherwise people will believe something that is probably not true. This wasn't a statement of opinion, it was a statement of fact. And unless I'm misunderstanding you, even your argument is more like "this data is imperfect" than "here's a competing analysis that shows violence is more effective."
And look, it's not just the one book. This is the consensus of the field as a whole. Additional works have, just like you, questioned some specific cases, added caveats, or argued that context is more important than Chenoweth suggests, but nobody in the field is seriously arguing that most successful progressive movements resorted to violence. You can ask an LLM to summarize the major criticisms of Chenoweth, or look for papers citing hers on Google Scholar or something, and check for yourself.
It's not data to be used as a cudgel against philosophical or anecdotal experiences.
Respectfully, I completely disagree. Fact is better than vibes. If I'm wrong, prove me wrong, right? If my data sucks, argue that too! But philosophically, I completely disagree that anecdotal experiences bear the same weight as a book from someone who's actually compiled a dataset to try to prove something. (Edit: Obviously, yes, it depends on the book and the authors. This one is from a Harvard professor, not some kook. You've been arguing in good faith so I don't expect such a facile argument from you, but adding this for posterity.)
We have to strive for better as a society.
In full seriousness - you've clearly given a lot of thought to what's legitimate and what's not. Do you really feel that my random vibe on something is equivalent to published data, even imperfect data?
Re: the middle part, again, philosophically I agree. Not much more to say there.
I might be wrong, but I don't think @Grumble4681 is defending "nearly every successful progressive movement resorted to violence"? They're saying your initial response to that, while providing data, implied an authority or broadness that wasn't warranted.
Data is great, but if it isn't contextualized with the complexity of a topic or the weight of evidence it can easily be used to mislead or to shut down further conversation.
You can reach a local minimum, where the data that is easily available is insufficient. An example would be WEIRD populations in psychology. The mistake isn't showing how college kids behaved in contrived settings; it was attempting to generalize any of it, at least without heavily emphasizing the limitations.
As a process, I think science almost always trundles along to being less wrong. Metascience has done its best to handle the many very subtle forms of bias, and things have gotten better, to the point of reasonably questioning how much of the historical data should just get tossed out.
But it's also easy to see, from snapshotting different points in the past, just how wrong you would have been arguing from the best data of the day - à la "a little knowledge is a dangerous thing."
An argument I've seen in the past is that some topics are almost impenetrable, and for those art, philosophy, or anecdotes act as a survey that can leave you more informed than the data. You might not know the rates or trends of arson or assault, but you leave with an awareness of how sub-violent voter intimidation has played out. You carve out awareness of "positive" and "negative" freedoms, like if women/minorities not using voice chat in games because of what people say to them is depriving them of their own free speech*.
I prefer data-driven arguments, and appreciate someone willing to make them or strongman an opposing view. That said, I think it's also good to be aware that academia can be oppressive/alienating.
Implicit in saying someone should present their own data/study is that this is the correct way of engaging in the discussion, which requires a particular temperament and education. A former housemate was involved in direct-action activism that uncomfortably bordered on threats/violence (smear posters about a slumlord put up outside workplaces/schools). Is that experience relevant, or something that should be set aside in favor of research? Maybe it's less useful for making claims [on some topic], but you have to keep in mind that that's how a lot of people engage, so if the goal of a conversation is more than just talking to or convincing like-minded folks, other camps need to be considered.
It also tempts people to try to act like topic experts, looking up confirming studies for an hour or so but having no meaningful ability to understand the state of the field or assess the quality of the studies involved. I catch myself doing that more often than I should, with the excuse that I'll at least be improving the quality of the discussion. Sometimes you aren't, though, you're using rhetoric as a cudgel and making convincing but specious claims.
Good/great addition to the discussion. I'll engage in a couple of places.
Data is great, but if it isn't contextualized with the complexity of a topic or the weight of evidence it can easily be used to mislead or to shut down further conversation...
I think that's insightful, and it ties in to your broader comments (which I agree with in whole and thus won't quote/respond to) about how wrong humanity has repeatedly been and how yes, a little knowledge is a dangerous thing. So it's not that I disagree with the spirit of what you're writing, but I challenge the specifics. At a certain point, we still need to assert things about the world. We still need to have beliefs. And you can easily, in my view, caveat yourself into irrelevance. That's especially true if you're responding to maximalist claims backed by no evidence, right? On one hand, viewers see an emotionally compelling argument, and on the other they see you, saying "hey, nobody actually knows what's true, and this is a really complicated field of study, and there are problems with this dataset, but I think it's directionally correct, so check it out... also it's like a thousand pages." It's pretty obvious to me who they're going to believe. I mention that because I think it's relevant for this specific case. I feel strongly that many parts of the progressive movement are increasingly concluding that violence is the only option. I think that this is both an immoral conclusion and, more importantly, one that will not work. In that context, I judge it more important to respond with a convincing statement - still with evidence! still with evidence that I believe to be correct! not with falsehoods! - than to respond with a more intellectually complete set of caveats, contexts, and background notes that will read to uninformed viewers as "okay, nobody actually knows, and this guy certainly doesn't, so I'm just going to go with what feels good." There's also the question of relativity of effort, which I go into more below.
An argument I've seen in the past is that some topics are almost impenetrable and for those art, philosophy, or anecdotes act as a survey that can leave you more informed than the data.
I do think this is often true. Broad macroeconomic theory comes to mind, for one. But I don't think that the question "have successful groups tended to use violence or not" is an impenetrable question that can only be answered with philosophy and anecdote. It's not physics where perfect evidence exists, but we can still make reasonable claims and support them with reasonable evidence.
Implicit in saying someone should present their own data/study is that this is the correct way of engaging in the discussion, which requires a particular temperament and education.
This piece challenged me the most because it is where I question the validity of my beliefs the most. I do believe that. I believe that presenting one's own data/study and then we all debate and see whose information is the most correct and then change our minds is the best/correct way of engaging in discussion like this. To be clear I'm not a hard-science maximalist - I'm not saying that philosophy, personal experiences, etc. have no role. But soft claims should be presented as evidence and weighed like evidence too. And... yeah, I do think that people who don't have the temperament/education to do so are wrong vastly more often than those who do. THAT DOESN'T MAKE THEM BAD PEOPLE, but I think they should strive to be more, for lack of a better word, scientific. I think we all should.
In person or when speaking to large groups, I don't argue in this way. I argue in the way that works: appeals to emotion, anecdotes, and more than anything else stories. But yes, I think that when I do that, I am worse - morally worse! - than something like this, on the internet, where we can have a purer form of debate.
It also tempts people to try to act like topic experts, looking up confirming studies for an hour or so but having no meaningful ability to understand the state of the field or assess the quality of the studies involved. I catch myself doing that more often than I should, with the excuse that I'll at least be improving the quality of the discussion. Sometimes you aren't, though, you're using rhetoric as a cudgel and making convincing but specious claims.
Two notes - if anything, making an emotionally compelling statement with no evidence is closer to using rhetoric as a cudgel than presenting competing evidence without detailing the entire history of the field of peace studies is. More pressingly, to that point, I think you need to weigh my post against what I was responding to. This is where I disagreed with Grumble as well: how much is it really reasonable to ask me to spend the time typing out paragraphs of context to caveat my claims, in response to no evidence at all? I think from a certain point of view what I did was exactly right: less work up front, and now people who are interested (you, Grumble) have engaged, and I can spend my time talking to people who might actually change their minds - or change mine.
A theme running through both your and grumble's posts is the idea of cudgeling - the idea that I was, basically, mean. And so I was. I think there's an argument to be made that being mean reduced the effectiveness of my actual point. I need to think about that more. As far as the ethical/moral dimension goes, though, I find myself bemused. To me, tacit calls for violence or rationalizations of the same are so much more objectionable than someone responding to them by using science/fact/academia as a cudgel. Why is it that the latter is what sticks out to people and the former does not?
Respectfully, I completely disagree. Fact is better than vibes. If I'm wrong, prove me wrong, right? If my data sucks, argue that too! But philosophically, I completely disagree that anecdotal experiences bear the same weight as a book from someone who's actually compiled a dataset to try to prove something. (Edit: Obviously, yes, it depends on the book and the authors. This one is from a Harvard professor, not some kook. You've been arguing in good faith so I don't expect such a facile argument from you, but adding this for posterity.)
Fact is better than vibes, I agree, but we rarely have fully agreed-upon facts for things at a more complex level. This is going away from my original comment, so I don't intend for the intricacies of where I go here to necessarily apply to the prior argument, because what prompted each response is different. Even simple 'facts' invite contention. It's simple to say it's a fact that X amount of burglaries occur, citing an FBI source or local police department sources (if I want to constrain the argument to a locale), and even if you grant that their tally is 100% correct - that the police or FBI encountered or discovered exactly that many burglaries - you still can't fully agree on the facts of that number, because there can be disputes as to what it means. If the local police department puts more resources into patrolling neighborhoods with higher reports of burglaries, they'll discover more burglaries. If they put fewer resources into it, does that mean there are fewer burglaries, or just less discovery of burglaries? This same ambiguity underlies all the varying conspiracies on the less factual side of the facts surrounding autism: factually, rates of autism are increasing. Or maybe they're not, and we're just putting more resources into diagnosing it.
So the reason I want to disentangle this response from the others is that I believe this response goes more in the direction of saying 'nothing is fact', or some interpretation along those lines, and that's not really my intention either.
Claims like "nearly every successful progressive movement resorted to violence" need to be supported, because otherwise people will believe something that is probably not true. This wasn't a statement of opinion, it was a statement of fact. And unless I'm misunderstanding you, even your argument is more like "this data is imperfect" than "here's a competing analysis that shows violence is more effective."
You're right that I wasn't offering a competing analysis showing violence is more effective, though I personally believe that violence and nonviolence work off each other in non-discrete ways and amplify each other's success, which was my motivation for looking into how that source defined things, as I don't believe this can be distilled into something so simple. I do agree that claims need to be supported on some level, especially when stated as strongly as fact, but alternatively, sources like the one you provided come with their own complications. I don't know that I would have invested as much in the reply if you hadn't attempted to use the source the way you did - meaning it's not the source itself that I had the most issue with, it's the way it was used.
For a source like that, the degree of effort required to cite it is substantially lower than the degree of effort required to vet it. Not only is the book not publicly available for free through official means, it's extremely lengthy; the freely provided supporting material in your link is also lengthy, and because of the concepts it addresses, it uses overly complex descriptions that abstract away the simplifications and assumptions it makes, which makes it a more laborious read.
But philosophically, I completely disagree that anecdotal experiences bear the same weight as a book from someone who's actually compiled a dataset to try to prove something. (Edit: Obviously, yes, it depends on the book and the authors. This one is from a Harvard professor, not some kook. You've been arguing in good faith so I don't expect such a facile argument from you, but adding this for posterity.)
I don't know how I feel about this, if only because I almost fell for the same trap. I still don't know what to make of the potential source I was going to cite: there was a published critical response to the book you cited by Christopher Finlay, a political science professor at Cambridge University (now with Durham University), and my initial thought was, well, he's a political science professor at Cambridge, it must be reputable. But when I tried to look into him a little more and read a bit of something else by him, I came away skeptical. Not that I know for sure he doesn't have valid things to say; I just didn't know whether I understood what I had read, so I didn't want to rely on his status alone.
So instead I set out on the more laborious process of downloading the book cited and illustrating with a specific example why I think the data is flawed on a fundamental level. I could have simply argued against it without citing anything, but then your cudgel of 'actual data' wins out, because I would have no data. That's where I think the problem comes in with using 'data' as a cudgel, because you didn't vet it, but you made me vet it in order to respond. I personally think using data, science, and facts in this manner contributes to anti-science rhetoric, because it's unrealistic to expect most people to devote the energy and effort needed to do what I did. I was only able to bring myself to do it because I'm unemployed and have no life. I recognize that you were responding to someone claiming a fact without evidence, so you didn't say it unprompted or for no reason, but I don't know that it's the right type of response even for that circumstance.
I agree completely that facts and "facts" can be weaponized. No notes on that. However,
That's where I think the problem comes in with using 'data' as a cudgel, because you didn't vet it, but you made me vet it in order to respond.
I did vet it. I believe it's correct. Reasonable disagreements about what belongs in the dataset don't invalidate the entire work, and again, scholarly consensus agrees with it. You're totally right that the Gish gallop is a real thing, but I think you'd agree that that's clearly not what's happening here. You're raising more good points about real concerns, but I don't think they're especially relevant to this discussion specifically. If I had dumped a bunch of shitty opinion pieces from Breitbart or something, for sure, but I posted one book/site from a respected liberal scholar at a respected liberal institution. I don't think the existence of the Gish gallop means that every source requiring the reader to do some work is one. I admit that the line is blurry, though.
For a source like that, the degree of effort required to cite it is substantially lower than the degree of effort required to vet it. Not only is the book not publicly available for free through official means, it's extremely lengthy, and the freely provided supporting material in your link is lengthy too. Because of the concepts it addresses, it uses overly complex descriptions that abstract away the simplifications and assumptions it makes, which makes it a more laborious read.
Honestly, friend, I'm not even sure what you're arguing anymore. Yeah, it's a complex source, but it's a complex topic. I tried to link the website instead of the book specifically so that people could at least see something; what more could I reasonably have done? That's a serious question - this is important to me, I want to convince people, what more could I reasonably have done? Echoing your exact concerns about having to expend more work to disprove something than to prove it, I chose not to put in the effort to quote at length out of the book/website because what I was arguing against was no source whatsoever.
Edit: having considered it more, I could have been nicer. I could have added a caveat that sociopolitical questions are always debatable. I'm not 100% convinced that would have been more compelling, more convincing, but it's worth giving more thought at the very least.
You're totally right that the Gish gallop is a real thing, but I think you'd agree that that's clearly not what's happening here.
I didn't know that term, so thanks for enlightening me. You're correct; I agree that isn't what was happening here. My apologies for assuming you didn't vet it. Because there wasn't much extrapolation on the data within the comment that linked to it, I assumed you hadn't vetted it, and clearly that assumption was wrong.
To try to simplify what I saw: a source that seemed presented as authoritative and comprehensive, and (I thought at the time) not necessarily vetted. I did not see anyone responding to the data or discussing its validity at all, other than the parent commenter you were refuting; it was just sitting there as though it were the be-all-end-all of the argument. I found this inherently worse than a fully unsubstantiated claim presented as fact, because at least it was clear to everyone that no source had been presented for that claim, and it was reasonable to assume it was that person's belief rather than something backed by good data.
So my perspective was that it accomplished what it appeared designed to do: eliminate the opposing argument by making the source/data too difficult to investigate, and establish a counter-narrative as fact. Rather than the prior comment's unsubstantiated claim that violence has been a part of most progressive movements standing as the final word, it was replaced with a new claim substantiated by data presented as more accurate and comprehensive than it was. It's "The actual data", the one and only authoritative set of data, so unless someone can prove it wrong with other data, and somehow prove that that data is better, then obviously the conclusion must be right: nonviolence is more successful.
But what isn't inherently obvious about data like that from a distance is that the simplifications made to support those claims are so great that the result isn't really factual in any objective sense of the word. I literally picked out the very first example in the book; I didn't go cherry-picking through it to find that. The very first thing I read had seemingly quite favorable conditions for their core contention, so it wouldn't necessarily be the most compelling one for me to use to disprove it, but even then I found it so flawed that I figured it was good enough to use. I have no doubt that I could find issues with nearly every one of the cases they go through based on that initial one. I felt confident that highlighting that one example would illuminate how simplifying the data in the way that they attempted to do just simply doesn't make sense and is fundamentally flawed to simplify it in that way.
So yes, I agree: I don't believe that to be Gish galloping at all, nor do I think it was necessarily malicious or anything of that sort. But that book is almost 300 pages long, and it's not the same as looking up crime statistics or the like, which have far more research and authoritative sources with much less simplification, making the argument easier to digest and refute. To drop that as "The actual data" without covering anything about what's within it or what that data actually is, I thought, created a barrier so high that even someone inclined to debate it wouldn't, because the cudgel of 'fact' left only a very high-effort way to do so. Even if someone presented multiple notable counterexamples, those would only be dismissed as anecdotal against the much greater number of data points covered in that book.
Yeah, it's a complex source, but it's a complex topic. I tried to link the website instead of the book specifically so that people could at least see something; what more could I reasonably have done? That's a serious question - this is important to me, I want to convince people, what more could I reasonably have done? Echoing your exact concerns about having to expend more work to disprove something than to prove it, I chose not to put in the effort to quote at length out of the book/website because what I was arguing against was no source whatsoever.
I agree with you that you were arguing against a claim that had no source whatsoever, and it therefore isn't really fair that you should have to put in that much more effort to argue against it. I did mention this earlier in this comment, and perhaps this is unique to me and not something that applies to others: a claim stated as fact without any source at all is less concerning to me than a claim purported as fact with a source that is presented as authoritative but isn't as good as people may believe it is, because I think that people are more willing to take in and believe the latter than they are the former. I simply view someone who makes a claim without any source as expressing their belief that it is fact, and I presume that is what other people do when encountering such claims, which is how I'm able to find it less concerning. It's probably specific to the context of this site; claims stated as fact without substantiation in other contexts might concern me more, as I would worry more about that audience's ability to assess them.
Edit: having considered it more, I could have been nicer. I could have added a caveat that sociopolitical questions are always debatable. I'm not 100% convinced that would have been more compelling, more convincing, but it's worth giving more thought at the very least.
I really didn't see it as mean. It was incorrect of me to say the phrasing was disingenuous; I did think so at the time, since I didn't believe the data was all that strong, but I understand now that you were trying to elevate the discussion. I just thought it shut down the conversation too easily, in a way that wasn't befitting the actual strength of the data.
a claim stated as fact without any source at all is less concerning to me than a claim purported as fact with a source that is presented as authoritative but isn't as good as people may believe it is because I think that people are more willing to take in and believe the latter than they are the former.
Very interesting. That's reasonable. Clearly this can happen and can be dangerous, I'm forced to agree with you there. I guess I feel like everything is a matter of degree, and referencing a known, non-crank academic work isn't the same as referencing a paper about vaccines being fake or something and hoping nobody checks it. Inherent in that though is your earlier point about the complexity of the work and that few people reasonably could check it. Hmm.
As a note, I think LLMs are helping here. They are imperfect and require a certain amount of base knowledge to use, but you could ask one to, if not summarize a source, tell you where the source lies in the canon. Meaning if you gave one the infamous debunked autism/vaccine paper, it'd tell you that it's been retracted. That requires you to trust the model of course but in a situation where you're otherwise unable to assess a claim it's much better than nothing.
simplifying the data in the way that they attempted to do just simply doesn't make sense and is fundamentally flawed to simplify it in that way.
As you know, I disagree on the specifics. My question is: where do we draw the line? Any attempt to understand the original question will by necessity involve simplification. The reaction you cited from Martin, for example, notes that Chenoweth's data was simplified and then explicitly says that it made sense for Chenoweth to do so. Not that everything is fine because Martin said so; my point is that if even mainstream critical views recognize the necessity of the simplification, that suggests it really is unavoidable.
I get where you're coming from and I understand the connections to the whole epistemic bullying with complex sources thing. But at the end of the day we have to make claims and we have to try to understand the world. We have to use imperfect, simplified data.
Non-violence is the preferable route and violence should obviously only be a last resort, but I'm a bit skeptical of that book's claims, tbh. Or at least I'm not sure how much it applies in this case.
It appears to be 15+ years old, so it won't have the full perspective of how badly Occupy Wall Street ultimately failed, or how BLM ultimately failed. Even the civil rights movement was only partially successful, and that success came at the cost of centuries of immense suffering and millions of deaths, and it only ultimately came about amid numerous riots and a fear that the country would be destroyed after MLK's assassination.
I think it does apply more to cultural change, like you said, but even something like gay marriage being accepted took decades of death and suffering to reach. Women's rights and civil rights (which I presume the book counts as successes) are currently being actively rolled back, and protests are being met with violent opposition.
In regard to the AI companies currently upending the fabric of society, I don't think the young people having their futures stolen are going to be particularly receptive to the idea that if we just keep asking nicely, we may be able to partially change things in 50 years or so, and that's assuming the damage can even be undone at all. On top of that, there have been practically no consequences of any kind for the perpetrators. Why would they roll over, take it, and sacrifice their lives just because school textbooks (often controlled/written by close allies of Epstein) told them non-violent solutions are the only way? The rule of law has broken down and isn't respected by the leaders of the country themselves, and those leaders constantly show that might makes right and that there are no consequences as long as you succeed, so why wouldn't they believe that too?
None of this is to say that I condone or encourage violence, only that it's inevitable when you give people no other effective choice. And we are getting dangerously close to that point, if we're not there already. I am still holding on to a small glimmer of hope that we can see peaceful change, but that glimmer grows a little dimmer every day. I genuinely hope to be proven wrong and we're able to turn things around.
It appears to be 15+ years old, so it won't have the full perspective of how badly Occupy Wall Street ultimately failed, or how BLM ultimately failed. Even the civil rights movement was only partially successful, and that success came at the cost of centuries of immense suffering and millions of deaths, and it only ultimately came about amid numerous riots and a fear that the country would be destroyed after MLK's assassination.
I understand, but the failure of several high-profile progressive initiatives doesn't disprove that nonviolence works more often than violence. Anecdotes aren't data! I could just as easily point to high-profile progressive successes: the legalization of gay marriage, the Inflation Reduction Act, the Affordable Care Act, etc., none of which was included in the dataset either.
because school textbooks (often controlled/written by close allies of Epstein)
????????????? This is a pretty radical ad hominem. Maybe true, I guess, no idea, but I don't think it's supporting your overall argument.
None of this is to say that I condone or encourage violence,
I can't tell you what you believe, obviously, but these words, at the very least, condone violence. I suspect that you're critical of Republican dogwhistles? This is a dogwhistle:
These billionaires play a direct role in making that happen, and I'm far more concerned about that violence and destroying of lives than I am about one person retaliating against someone responsible for it. When you push people too far, this result is inevitable. Historically, it's also been one of the only ways to have an effect on the wealthy and powerful, so this is nothing new. Our country was founded on it, after all.
Re: the textbooks - see Robert Maxwell's connection to McGraw Hill textbook publishing, Epstein, and of course Ghislaine. The ties are suspicious, to say the least. This also doesn't fit the definition of an ad hominem. The point is that students are largely taught a singular viewpoint in school in most cases, and the company responsible for that viewpoint has a vested interest in making sure it's the only one considered valid in any way. Even if there is no substantial connection there, the association with Epstein et al. is enough for many to at least call into doubt what those textbooks said, especially in light of how influential Epstein et al. have been in controlling societal narratives these days (see Gamergate, the recent trans panic, etc.).
Speaking of ad hominem, if you're that concerned about them, I'd appreciate an actual reply to the substance of my comment rather than a few lines about alleged dog whistles. If anything, that's closer to an ad hominem. I am also concerned far more about the police murdering innocent people than I am about one person assaulting a single cop. I don't approve of either, but the scale and severity of the two are disproportionate. It's also factually true that this country was founded on violence, which it continues to praise to this day. Being able to see the direction the winds of history are blowing and how those winds pick up speed is not the same as agreeing with said winds.
Edit: in hindsight, I think this is probably where I call it quits on Tildes. This site is just turning into another HN/Reddit, and I don't really like putting effort into writing comments just for some AI proponent to write dismissive, insulting comments that show they clearly didn't even bother to read a few sentences into my comment. I guess that's the nature of social media, though. The internet was a mistake. Leaving this here for posterity.
To be frank, this is not at all surprising; in my experience, that's the direction practically every progressive space online has taken, under the guise of "well, what's next, you want literal Nazis colonizing our discussion?" Whatever the progressive stance is must be followed, or you get shunned, or at least it becomes the cause of huge drama and a rift in the community. AI just happens to be the topic where you disagree with that stance. Frankly, it's made some unrelated hobby websites incredibly insufferable (thankfully most can still be enjoyed, just without participating in the forums).
both conceptually fascinating and value-neutral.
Well, you know the saying "you can't be neutral on a moving train". It's clear who's driving the train at this point.
But instead too many people in left-leaning spaces treat anything less than "fuck this devilry and fuck anyone who uses it" as the equivalent of a Silicon Valley techbro chud. Just one more divisive kneejerk culture war.
I don't think calling it a "culture war" is proper when this technology is keeping the US GDP above water while displacing millions of jobs, and is being wielded by the POTUS to dire effect. You can be completely disengaged from AI and still be affected by it. That starts to extend beyond a "culture war".
The other aspect is hinted at in the first paragraph: I don't think this administration is going to do much to regulate the tech. So when the soapbox, ballot box, and jury box all fail... I don't condone it, but I simply see it as an inevitability.
Loved this quote:
The telescope, whose invention allowed astronomers to gaze at the moons of Jupiter, did not displace laborers in large numbers—instead, it enabled us to perform new and previously unimaginable tasks. This contrasts with the arrival of the power loom, which replaced hand-loom weavers performing existing tasks and therefore prompted opposition as weavers found their incomes threatened. Thus, it stands to reason that when technologies take the form of capital that replaces workers, they are more likely to be resisted.
I'm not sure why anyone needs to spend so much time and effort building an "AI policy" when the answer is simple: give working people a way forward. People think cryptocurrency is dumb, for instance, but it didn't garner significant political opposition until it started to spike GPU and electricity prices. LLMs are doing that on a whole new order of magnitude. Of course people will oppose a policy that will take their job, make remaining jobs more miserable, and drive up the cost of living. Until AI companies meaningfully address that concern, they're going to grow more and more unpopular.
Politicians talk all the time about "creating jobs" and sometimes this happens, but at scale, creating new jobs is apparently easier said than done and people continue to worry.
Politicians talk all the time about "creating jobs" and sometimes this happens, but at scale, creating new jobs is apparently easier said than done and people continue to worry.
It's actually not as hard as they make it out to be. But it takes time and requires them to go against the wishes of donors and lobbyists who want to offshore as much as possible. The incentives simply don't align.
I just read the quoted parts, but if I got the gist of this piece right, it's a populist article directed at an AI-enamoured audience, trying to paint AI-critical people as uninformed and unthinking when in reality the opposite is more true (save of course the marginal lunatic faction that will always exist).
It's a perfectly legitimate position to say the current forms of publicly available AI are 'manufactured by out-of-touch billionaires and pushed onto an unwilling public to achieve sinister aims'. Previously, disruptive technology was accepted despite being disruptive because it solved real problems and/or created real efficiency on a societal scale. By contrast, AI is being force-fed at enormous cost to gazillions of people who want nothing to do with it, because it does not solve anything for them and instead creates a bunch of new issues and inefficiencies.
If the purpose of Big AI was to genuinely help society, the models would be tailored to address specific issues and be much, much more effective at doing so. Instead, because the companies behind these models want to rule over the rest of society, they have chosen to try to make "everything machines" that are shitty at almost everything they try to do and that have to indiscriminately devour all publicly available data in order to function - and all restricted (copyrighted) data on top of that. The latter is stealing. Why would anyone who actually wants to help go about it in this manner? It's either strikingly incompetent and morally callous, or it's driven by a desire to dominate and oppress.
It doesn't take a rocket scientist to see what's what, just like it was easy to see the current US president, before he was elected (the first time, but especially the second!), as somebody I wouldn't trust to mow my lawn, let alone grant any sort of leadership position. It's not that hard, people.
Two things can be true at the same time.
But my comment isn't about the people, it's about the companies. Why is it not enough for them that some people find their product useful? Why are they doing this Clippy on steroids thing? It's not populism to ask why the emperor doesn't seem to be wearing much more than a pair of briefs - fancy as those briefs may be from some people's perspective.
Got it. I misunderstood - rereading your original post, it's clear that you didn't mean everyone using AI is being force-fed, but rather that some people are.
What I took from the article is that, while it’s possible to be a smart skeptic of AI, this is not the way to bet. There are inevitably going to be a lot of uninformed people complaining about AI who know very little about it but are sure it’s bad. Compare with populist beliefs about vaccines or the pandemic or 9/11 or child abuse or foreign aid or trade.
This seems to be true of many hot-button topics these days. Uninformed people on both sides make lots of noise while saying things that are wildly wrong about the specifics.
Of course, by taking a position it’s possible to be “directionally accurate” by coincidence. I don’t really consider that a “legitimate position.” There is more to making an argument than being on the right side. You also have to avoid repeating falsehoods.
There are already hordes of uninformed people acting as AI boosters and unintentionally sowing destruction in workplaces, universities, etc. Does the article mention them?
There are inevitably going to be a lot of uninformed people complaining about AI who know very little about it but are sure it’s bad.
There are already hordes of uninformed people acting as AI boosters and unintentionally sowing destruction in workplaces, universities, etc. Does the article mention them?
Yes, there are people like that too and that’s bad. That’s not populism, though? The article isn’t about the managers.
Although, I suppose the OpenClaw craze is a kind of influencer-driven populism. And there do seem to be lots of ordinary people using ChatGPT in inappropriate ways?
That’s not populism, though?
I was going to say if the article isn't populism, then it's propaganda. But I wanted to fact-check myself and glanced at some of the other articles this person wrote and she seems to be coming from a good place.
It's just very poor logic to name one stance, and not the other, as "something populism". Critical and supportive arguments can both be populist. Trying to cement this term as describing one side only seems deliberate and manipulative (almost as though some big player in the AI camp asked her to do so), but maybe it's just a case of a young journalist trying to become relevant by twisting language, in hopes that it takes off and she'll be able to say "I coined the term 'AI populism'" in her socials bio. Or something?
I think she sees populism on the anti-AI side and the people she interacts with who are pro-AI don’t come across as populist. It sounds like you want to find sinister motives for that.
this is not the way to bet.
Well, of course. The gamble with AI has paid off handsomely so far. But I'm not here to gamble.
There are inevitably going to be a lot of uninformed people complaining about AI who know very little about it but are sure it’s bad.
You don't need to be well informed about the tech to know a data center in your area will double your energy bill despite being of no use to you. Nor do you need to know the underlying tech to be affected by a layoff that blames AI. It's bad to you because it's actively making your life worse for no return.
This seems to be true of many hot-button topics these days. Uninformed people on both sides make lots of noise while saying things that are wildly wrong about the specifics.
It doesn't help that there isn't any real "AI expert" out there. Among those researching or working directly with it, you'll see extreme statements on both sides. The tech is in the wild west.
On the ethos side, in terms of philosophy, futurism, and economics, I haven't seen many pro-AI arguments, at least not ones that line up with reality. UBI is a pro-AI argument, but very few seem to think we're on that trajectory, for example.
Has anyone had their energy bill double due to a data center? What is that based on?
You don’t need to know anything to repeat rumors. If you want to avoid spreading misinformation, these things do need to be checked.
I’m not going to check all of these, but I did check the first anecdote in the first link:
John Steinbach was shocked to receive a $281 electricity bill in January 2026—a huge spike from the roughly $100 he’d paid the previous month. “It’s just so far beyond any bill that I’ve ever had,” he says. Steinbach, who has lived in his Manassas, Va., home for nearly 40 years, worries his rates will keep climbing as the outsized electricity demand from AI data centers grows. “They’re building them like it’s ‘Field of Dreams’—build it and the electricity will come—but we don’t see how that’s going to happen.”
With ChatGPT’s help, I was able to find the rate schedules for the city of Manassas. In 2016 it was $13.59 per month plus $0.0830 per kWh and the current rate schedule is $16.17 per month and $0.0984 per kWh, or about an 18% increase over a decade.
So, something doesn’t add up?
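Those two rate schedules can be sanity-checked in a few lines. The rates are the ones quoted above; the 1,000 kWh monthly usage and the back-solved figures are my own assumptions, purely for illustration:

```python
# Rough sanity check of the Manassas rate figures quoted above.
# Rates come from the published schedules; the sample usage is assumed.

def bill(fixed, per_kwh, usage_kwh):
    """Monthly bill: fixed service charge plus energy charge."""
    return fixed + per_kwh * usage_kwh

usage = 1000  # assumed typical monthly usage in kWh
old = bill(13.59, 0.0830, usage)   # 2016 schedule
new = bill(16.17, 0.0984, usage)   # current schedule

print(f"2016 bill:    ${old:.2f}")                     # $96.59
print(f"current bill: ${new:.2f}")                     # $114.57
print(f"increase:     {100 * (new / old - 1):.1f}%")   # 18.6%

# Working backward: what usage would the quoted bills imply at current rates?
for dollars in (100, 281):
    kwh = (dollars - 16.17) / 0.0984
    print(f"${dollars} bill implies about {kwh:.0f} kWh")  # ~852 and ~2691 kWh
```

So a $100-to-$281 jump at these rates would imply roughly tripled consumption, not a rate change, which is why the anecdote seems off.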
Here is a news story about how Manassas is considering a 10% increase.
But Council member Mark Wolfe said the data center power crunch isn’t driving the city’s proposed rate increase.
Instead, he said the 10% increase in rates is needed to keep the city’s electric system financially sustainable after years of flat rates and sharply rising costs in both materials and labor.
The city went without any increase in electric rates from 2017 through 2023, Wolfe said. The increase is needed now “to reestablish that our utility enterprise operations are sustained by the utility rate revenue.”
Also, for a Californian these rates look extremely low. We are paying about .30 per kilowatt-hour, three times as much.
So, something doesn’t add up?
Yes, clearly the impact of a recent data center. My point was that dismissing those as some "rumor" is incredibly disingenuous (and bonus points for trying to fact-check someone else's lived experience with the very thing impacting their life).
for a Californian these rates look extremely low. We are paying about .30 per kilowatt-hour, three times as much.
Yes, that's an issue as well. Such a big issue that some governor candidates are running on taking PG&E down a notch. I saw it right around 2023, when my bill nearly tripled despite no significant usage increases (heck, it would have gone down because I had fewer people in my house that year).
That's not due to data centers, but it is a big issue in the state.
I think the reporter should have asked to see their electric bills and studied them. Although the quote was included to show someone's "lived experience," there is also a factual component to it. A person can be angry but wrong about their power bill.
There could be innocent explanations for the discrepancy. Maybe that house is near Manassas but not in the area where the electricity rate is set by the city? But as it is I’m doubtful that their electricity bill went up due to an enormous, sudden rate increase and the high bill might be due to some other reason like more usage due to cold weather.
Also, I never argued that data center power usage isn’t a problem. I am skeptical that any retail customer’s electricity rate doubled like you suggested. That’s the part I’m discounting as rumor.
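One way to see why usage, rather than rates, is the likelier culprit: back out the implied consumption from the quoted bills under the current rate schedule. A rough sketch, using the $16.17/month plus $0.0984/kWh figures from above; the bill amounts are the ones from the anecdote.

```python
# Current Manassas rates as quoted above
FIXED = 16.17     # $/month fixed charge
PER_KWH = 0.0984  # $/kWh energy charge

def implied_kwh(bill: float) -> float:
    """Usage implied by a given bill under the current rate schedule."""
    return (bill - FIXED) / PER_KWH

dec_kwh = implied_kwh(100)  # ~852 kWh for the ~$100 December bill
jan_kwh = implied_kwh(281)  # ~2691 kWh for the $281 January bill

# A bill nearly tripling at fixed rates implies usage roughly tripled,
# which points at something like electric heating in a cold January
# rather than a sudden rate hike.
print(f"{dec_kwh:.0f} kWh -> {jan_kwh:.0f} kWh")
```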
In 2026, the politics of AI has a new meta: “caring a lot about AI” is no longer correlated with “knowing a lot about AI.” AI is rising in salience faster than any other issue among US voters. Politicians gearing up for the 2026 midterms and 2028 primaries won’t lag far behind. That means AI policy is no longer the remit of a few wonky technocrats. From now until forever, most people regulating, protesting, and talking about AI will not be interested in AI per se, but rather how it impacts their preexisting belief systems and political agendas. These forces are stronger, more diffuse, and more volatile than we have seen in AI policy before. And the curve is just about to shoot straight up.
I define AI populism as a worldview in which AI is viewed not only as a normal technology but as an elite political project to be resisted. It regards AI as a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public to achieve sinister aims like “capitalist efficiency” (layoffs) and “population management” (surveillance). AI populists don’t really care whether ChatGPT is personally useful, or if Waymos eke out some safety gains: AI’s utility as a tool is immaterial relative to the unwelcome societal change it represents.
Among the public, AI populism shows up as individual attempts to block AI encroachment; for example, data center NIMBYism, AI witchhunts among creatives, and in the extreme, assassination attempts like what happened to Altman this week.
[...]
What seems likely is that the anti-elite and nihilistic attitudes that have dominated US political culture in the last few years are transmuting into anger at AI billionaires. Young people are particularly incensed. Gen Z already grew up in a world that they felt was shrinking, where grift and shitcoins and sports gambling looked like the only paths up. Now, they’re being told AI is the reason they can’t get a job—and potentially never will. Just as the United Healthcare CEO seemed like a justified target to many disillusioned and radicalized young people, so will AI executives be to many more.
I don’t know what exactly motivated Altman’s assailants, of course, just as I don’t know what specific thing radicalized Luigi Mangione or Tyler Robinson. But the 20-year-old Molotov-thrower had joined a Pause AI Discord and penned a Substack post on existential risk, writing that AI executives are “sociopaths/psychopaths” and “gambling with your future and the lives of your children… These people are almost nothing like you.” We know less about the second set of attackers, except that they are also young: 23 and 25.
I've been taking trade-skills night classes with a lot of people straight out of high school (18-20). It seems like more than half of them were high-performing students who all elected to put uni on hold to do gig work all day and weld pipes till 11.30pm. And speaking to them, there is a sense that they don't even have faith in high-school exams, so a degree is out of the question. Their experience of education was watching every standard and metric go down the toilet because of LLMs, and they could at least make a somewhat informed decision on what direction to go.
Shift that to people in their mid-20s and I can't begin to imagine the absolute dread of making the "smart" choices and graduating into instant redundancy. Many people made deliberate choices to lean into AI, but how many positions are actually out there if the technology should at best replace 10% of workers today? And it's clear companies would prefer no workers altogether. To top it all off, it seems like Sam Altman has a habit of only being right about the bad parts of AI. Massive job losses. Major roll-out of disruptive data centers. Resource shortages and outright wars. Bad for regular people. Great for investors though. Can see why they keep giving him money.
Haven't seen much of that abundance come around though. But with this trend, I can see why people are panicking. His vision of the AI apocalypse has already started. Hell, the Iran war kicked off with some model painting a girls' school as a valid target. No one else is taking responsibility or will be held accountable, so it's all on the technology. What's to stop the government from gunning a regular person down and saying that the AI did it? We already have video of ICE murdering two people and there hasn't been a peep about that since.
I've always been vocally anti-AI. Probably started as far back as 2017, when we tried to use early TensorFlow systems for Big Data analytics. And the thrust of my arguments has always been that it doesn't work like a reasonable tool should.
But now it's just another turd in a shitstorm. A data center is drawing more power in an energy crisis caused by a needless war waged on behalf of a rogue nation that has effectively captured the US government in service of overrunning the Middle East, a war that will lead to an even greater migration crisis and be used as fuel for far-right movements. The power draw will also put more pressure on the climate crisis that has been ignored for several decades at this point and is already destabilizing agriculture, which is going to take a hit because of the agri-chem shortage from the above-mentioned war. The data center also has water usage, so that's going to drive up two utilities and possibly municipal taxes to treat the waste and service the infrastructure. And if tax/rate-payers are subsidizing this tech, it means businesses are likely forced to increase prices...
It's a very hopeless situation all round, and people are going to lash out when there's no obvious thing to do.
The core idea behind Abundance is increasing state capacity to do things, as well as the supply of infrastructure and programs (housing, energy, transportation, healthcare). In some cases, that means simplifying regulations (zoning is a big one) or eliminating them (certificate of need for hospitals). In other cases, it means rebuilding a civil service that is competent and capable of building projects without having to hire consultants. Philosophically, it also means asking why a process needs to be the way it is (and not just going to "because that's how it is" or "it protects the environment" without specificity), and recognizing that the general public cares more about results than process.
To give an example related to my career field, in the U.S., nuclear regulations fall under 10 CFR (Energy), 40 CFR (Environment), and 49 CFR (Transportation). In some cases, there are multiple sets of radionuclides to keep track of (for say, shipping versus disposal) and regulations can be contradictory. Can we simplify it to make things easier to understand, easier to administer, and reduce the likelihood of violations?
I'm aware of the definition, having read the book.
Obviously abundance and trickle-down economics are different. But especially if you focus on the "roll back onerous regulations" and "build faster" parts of abundance, there are a lot of similarities.
It's a shame that it feels like the core theories of abundance are more or less co-opted by the people who also tried to make trickle-down economics a thing.
They hear "reduce regulations" and think "yeah, we need smaller government," despite the core values requiring more government intervention to realize their aims. They hear "reduce the bureaucracy" and think "okay, we just need to lay off a lot of people," never considering that some processes need fewer admins and more on-the-ground workers to succeed.
I think that's where the "trenchcoat" comes in, because these people (perhaps maliciously) interpret the philosophy wrong from the get-go and use it to justify stuff they already wanted to do.
I agree there is some overlap (and a risk for co-opting), but I personally don’t see strong similarities. What concerns do you see in “build faster,” and what would you propose instead?
But especially if you focus on the "roll back onerous regulations" and "build faster" parts of abundance, there are a lot of similarities.
Is that inherently suspicious to you? It seems normal that some ideas of Reaganomics/supply-side economics were good, and some were bad, and the policy ideas that come today will pick and choose.
I'm not sure that rolling back the regulations that prevent buildings full of single room occupancy units or actual boarding houses equates to trickle down economics. One reason behind homelessness is lack of small scale cheap housing compatible with earning minimum wage.
I have a hard time generating empathy for Sam Altman; perhaps he should pour a billion dollars into research on replacing me with a computer that could be more effective at the job.
I do think that this hatred has been misdirected at people that could realistically be harmed by their actions, though. I randomly ran into this artist (xcancel mirror) who lost access to a freelancing site as people were incorrectly claiming that the artist had used AI to generate their commissions. Directing attention and hatred towards CEOs, the rise of right wing nationalism, the economic systems trapping people in poverty, etc. is great; aiming it at people who stand to lose everything is not.
Experienced this firsthand the other day. ResetEra, a large progressive forum I've been on since it was originally NeoGAF in 2007, has been trending strongly in this populist anti-AI direction -- thanks in part to moderators distorting their policies to shut down more informed or educational discussion while turning a blind eye to ragebait and witchhunts, which drives away experts while emboldening absolutists. It reached a new low the other day when posts on the attacks on Altman and that Indiana politician became filled with comments glorifying the violence, calling for more, and attacking anyone who disagreed. I reported the post the day it went up, but no action was taken for days as the murder-fantasies went on for page after page. When I posted in the site's meta-discussion thread calling out the mods for tolerating violent rhetoric, they permabanned me instead.
It's such a frustrating dynamic, because the core problem with AI is not the technology itself, but capitalism writ large. Generative AI as a technology is both conceptually fascinating and value-neutral. If people had no fear of becoming destitute or perverse incentives drowning out creative works, it would just be another creative tool on par with the synthesizer, allowing people to extend their labor and explore their creativity more freely (that was the vibe in the early days of AI Dungeon and DALL-E 2, before the ChatGPT-driven rush to commercialization). Movements like this at best throw the baby out with the bathwater, and at worst discredit the legitimate grievances with misinformation and inchoate violence, turning control of this technology firmly over to megacorporate techno-fascists. It's why critics should be the most engaged with the space, so they understand what they're criticizing and can better recognize both how to effectively regulate it and how to turn aspects of it to the advantage of regular people (open source being the biggest opportunity here). But instead too many people in left-leaning spaces treat anything less than "fuck this devilry and fuck anyone who uses it" as the equivalent of a Silicon Valley techbro chud. Just one more divisive kneejerk culture war.
This is certainly not the only issue people have with generative AI. It fundamentally disgusts and puts off many people, including myself. It's the opposite of creativity and the opposite of humanity, which is an essential component of art. There are core issues with the technology itself, these issues are just greatly exacerbated by capitalism. AI "art" would still be mindless regurgitations of training data. LLMs would still make things up.
With regards to the violence being celebrated, this is just the inevitable outcome of these billionaires systematically working to destroy the lives of millions and supporting (both directly and indirectly) politicians and policies that are destabilizing society. I'm far more concerned about the millions that could/will die as a result of their actions - look at the cuts to USAID alone. Or the latest war in Iran. Or Gaza. The list goes on.
These billionaires play a direct role in making that happen, and I'm far more concerned about that violence and destroying of lives than I am about one person retaliating against someone responsible for it. When you push people too far, this result is inevitable. Historically, it's also been one of the only ways to have an effect on the wealthy and powerful, so this is nothing new. Our country was founded on it, after all. Pretty much every successful progressive movement (amongst others) has relied on violence as a tool to achieve their goals, because there's usually no other choice. Is it good that things have gone this far? No, but it's not shocking to me at all. It's also not shocking to me that people also hate the people who use AI, since they're directly and indirectly funding said billionaires - collaborators, perhaps.
I disagree with much of what you said, but with this especially. The actual data suggests that nonviolent paths are more than twice as successful. And when we're talking about cultural change and not a complete overthrow of government, the rate of nonviolent success is even higher!
It's rather disingenuous to phrase it as "actual data" as though there's some universally recognized or agreed upon data points that cleanly define and disentangle violent and non-violent activities.
Here's a bit of that 'actual data'. Conveniently, the person who came up with the framework for that data sells it in a book, which I didn't pay for, and which I suspect most others here wouldn't want to pay for just to verify the data.
In 1975, Indonesia invaded East Timor.
https://en.wikipedia.org/wiki/Indonesian_invasion_of_East_Timor
The following quotes are from the book, not from Wikipedia, but I wanted to link the Wikipedia page as a baseline for people to read about one of the data points without having to pay for it.
So I guess this must count as a data point against the success of violence.
So two decades later, nonviolence succeeded...
So Indonesian President (and he was basically a dictator) who controlled Indonesia for 31 years and was 77 years old, embroiled in issues beyond East Timor, was finally defeated by nonviolence. I guess the thousands of people who died fighting in the resistance died for nothing and if they had only resisted peacefully in the beginning, they would have been able to achieve success...
Just to reiterate, that quote is from the book. So nonviolence won, but violent resistance was an important symbol for resistance overall. But nonviolence gets the point, there's no assists in this data.
Now the following is pulled from the Wikipedia page linked above
Go to the page to check out the equipment if you wish, but the critical point I'd like to highlight here is that violence won the fight to begin with. The Indonesian dictator didn't take over East Timor with beautiful and persuasive rhetoric, but with the backing of the US and violent force.
Additionally, the data used in this book started in the 20th century, and I discovered this article published in SAGE
https://www.bmartin.cc/pubs/21cs.pdf
It covers a response to the 'actual data', so it's an interesting perspective: they are responding to a critical response to the data, so you get a broader picture of what is contentious about that data without the fixed framing of a direct response. The article mentions that the critical response goes back to the 19th century, and it discusses the flaws of using data from that era. What I find enlightening, however, is that the critical response claims that if you use data from the 19th century, then violent resistance has more success than nonviolent. Again, the article covers the flaws of using data from that time period, and those flaws may be the reason the Erica Chenoweth piece started its dataset where it did, but it's also convenient that it happens to start at a point that favors their argument.
Here's another quote from the Chenoweth 'actual data' book
I find that to be a fairly reasonable argument. What this doesn't account for however are the myriad of circumstances and motivations that lead to popular support. There's no reconciliation of how violence can play a part in that. So is it a knock against the success of violence if a cult in Waco, Texas, fails to hold their freedom? Perhaps they would have been more successful in a non-violent approach. Of course I'm intentionally choosing an incident that was a relatively small group of people that failed to achieve what they had wanted in a violent encounter, because I think it highlights the flaws in what incidents you count. I don't think this incident was counted and I didn't dig into the book to find out, I picked it on my own.
I think the idea when people compare violent and nonviolent activities is that there may be some similar level of participation, even if that's not the normal case. In essence, the sentimental force behind the violent movements are also in existence behind the nonviolent ones.
This is a good post and I think your core point - that much depends on where we set boundaries and all analysis is inherently biased by where we observe from - is good. It is not, however, disingenuous to refer to a flawed dataset (as all datasets are!) as "actual data" when the alternative is 100% vibes. (I also laughed at you calling out how 'convenient' it was that the author's data is... in their book).
Nevertheless, good post.
Here's where I think it's disingenuous, and I don't mean it to be an attack on you, but it comes across as a cudgel of 'science' or 'fact'. But the data in this case is just made up by a few people.
To be fair, all data is on some level just made up of course. If you are tallying points in a basketball game, the ball going through the hoop counts for points and was part of the design of the game but that also makes it universally recognized on some level. How to tally the data of scoring points in a basketball game is pretty straightforward subsequently, and it would be pretty straightforward to present it as 'actual data'. But then there's someone who passes the ball to the person who puts the ball through the basket, and that person gets an "assist". It's data, but it's more made up because of how it's defined and by who. The NBA counts assists differently than other leagues, or even over historical NBA. Even so, it's still a widely recognized stat that at least by context someone can often determine what definition of assist goes with the data, and the non-specific definitions are widely known on some level by people who follow the sport at least.
There could be a dataset for players who picks their nose on the court but no one is tracking that. But my point is, how you present what qualifies as 'actual data' matters. If I say 'the data shows the team that picks their nose on the court most wins', and then I go selectively looking through games, and then also choosing what counts as 'nose picking', and present it as 'actual data', in some sense it's true that it's data, it's bad data, but if not for the comical premise, I'm giving it more authority than it actually has by presenting it as 'actual data' because I'm the only one tracking the data. It's one guy (me) who selectively went through things and came up with my own criteria and judgements and chose 'nose picking', it's not a wide group of professionals in the NBA or basketball scene who defined a 'nose picking' stat.
I think on a 'data' level, that's similar to an anecdote. What makes an anecdote less useful in certain contexts is that it's one person's experience or one single event that isn't necessarily representative of all events. I do believe the book had two authors, and perhaps there would be more people involved than that, but on the scale of what we're talking about here, I think it deserves more than just a few people to have some level of agreement of definitions on the subject matter to have more weight behind it. It's not data to be used as a cudgel against philosophical arguments or anecdotal experiences. To be more widely recognized and accepted is where I would draw the line on presenting it more authoritatively.
That was the intent! Claims like "nearly every successful progressive movement resorted to violence" need to be supported, because otherwise people will believe something that is probably not true. This wasn't a statement of opinion, it was a statement of fact. And unless I'm misunderstanding you, even your argument is more like "this data is imperfect" than "here's a competing analysis that shows violence is more effective."
And look, it's not just the one book. This is the consensus of the field as a whole. Additional works have, just like you, questioned some specific cases, added caveats, or argued that context is more important than Chenoweth suggests, but nobody in the field is seriously arguing that most successful progressive movements resorted to violence. You can ask an LLM to summarize the major criticisms of Chenoweth, or look for papers citing hers on Google Scholar or something, and check for yourself.
Respectfully, I completely disagree. Fact is better than vibes. If I'm wrong, prove me wrong, right? If my data sucks, argue that too! But philosophically, I completely disagree that anecdotal experiences bear the same weight as a book from someone who's actually compiled a dataset to try to prove something. (Edit: Obviously, yes, it depends on the book and the authors. This one is from a Harvard professor, not some kook. You've been arguing in good faith so I don't expect such a facile argument from you, but adding this for posterity.)
We have to strive for better as a society.
In full seriousness - you've clearly given a lot of thought to what's legitimate and what's not. Do you really feel that my random vibe on something is equivalent to published data, even imperfect data?
Re: the middle part, again, philosophically I agree. Not much more to say there.
I might be wrong, but I don't think @Grumble4681 is defending "nearly every successful progressive movement resorted to violence"? They're saying your initial response to that, while providing data, implied an authority or broadness that wasn't warranted.
Data is great, but if it isn't contextualized with the complexity of a topic or the weight of evidence it can easily be used to mislead or to shut down further conversation.
You can reach a local minimum, where the data that is easily available is insufficient. An example would be WEIRD populations in psychology. The mistake isn't showing how college kids behaved in contrived settings; it was attempting to generalize any of it, at least without heavily emphasizing the limitations.
As a process, I think science almost always trundles along to being less wrong. Metascience has done its best to handle the many very subtle forms of bias, and things have gotten better to the point of reasonably questioning how much historical data should just get tossed out.
But it's also easy to see, from snapshotting different points in the past, just how wrong you would be from arguing from the best data, ala "a little knowledge is a dangerous thing."
An argument I've seen in the past is that some topics are almost impenetrable, and for those art, philosophy, or anecdotes act as a survey that can leave you more informed than the data. You might not know the rates or trends of arson or assault, but you leave with an awareness of how sub-violent voter intimidation has played out. You carve out awareness of "positive" and "negative" freedoms, like if women/minorities not using voice chat in games because of what people say to them is depriving them of their own free speech*.
I prefer data-driven arguments, and appreciate someone willing to make them or steelman an opposing view. That said, I think it's also good to be aware that academia can be oppressive/alienating.
Implicit in saying someone should present their own data/study is that this is the correct way of engaging in the discussion, which requires a particular temperament and education. A former housemate was involved in direct action activism that uncomfortably bordered on threats/violence (smear posters put up outside of workplaces/schools for a slumlord). Is that experience relevant or something that should be ignored in lieu of research? Maybe it's less useful in making claims [on some topic], but you have to keep in mind that that's how a lot of people engage, so if the goal of a conversation is more than just talking to or convincing like-minded folks other camps need to be considered.
It also tempts people to try to act like topic experts, looking up confirming studies for an hour or so but having no meaningful ability to understand the state of the field or assess the quality of the studies involved. I catch myself doing that more often than I should, with the excuse that I'll at least be improving the quality of the discussion. Sometimes you aren't, though, you're using rhetoric as a cudgel and making convincing but specious claims.
Good/great addition to the discussion. I'll engage in a couple of places.
I think that's insightful, and it ties in to your broader comments (which I agree with in whole and thus won't quote/respond to) about how wrong humanity has repeatedly been and how, yes, a little knowledge is a dangerous thing. So it's not that I disagree with the spirit of what you're writing, but I challenge the specifics. At a certain point, we still need to assert things about the world. We still need to have beliefs. And you can easily, in my view, caveat yourself into irrelevance.

That's especially true if you're responding to maximalist claims backed by no evidence, right? On one hand, viewers see an emotionally compelling argument, and on the other they see you, saying "hey, nobody actually knows what's true, and this is a really complicated field of study, and there are problems with this dataset, but I think it's directionally correct, so check it out... also it's like a thousand pages." It's pretty obvious to me who they're going to believe.

I mention that because I think it's relevant for this specific case. I feel strongly that many parts of the progressive movement are increasingly concluding that violence is the only option. I think that this is both an immoral conclusion and, more importantly, one that will not work. In that context, I judge it more important to respond with a convincing statement - still with evidence! still with evidence that I believe to be correct! not with falsehoods! - than to respond with a more intellectually complete set of caveats, contexts, and background notes that will read to uninformed viewers as "okay, nobody actually knows, and this guy certainly doesn't, so I'm just going to go with what feels good." There's also the question of relativity of effort, which I go into more below.
I do think this is often true. Broad macroeconomic theory comes to mind, for one. But I don't think that the question "have successful groups tended to use violence or not" is an impenetrable question that can only be answered with philosophy and anecdote. It's not physics where perfect evidence exists, but we can still make reasonable claims and support them with reasonable evidence.
This piece challenged me the most because it is where I question the validity of my beliefs the most. I do believe that. I believe that presenting one's own data/study and then we all debate and see whose information is the most correct and then change our minds is the best/correct way of engaging in discussion like this. To be clear I'm not a hard-science maximalist - I'm not saying that philosophy, personal experiences, etc. have no role. But soft claims should be presented as evidence and weighed like evidence too. And... yeah, I do think that people who don't have the temperament/education to do so are wrong vastly more often than those who do. THAT DOESN'T MAKE THEM BAD PEOPLE, but I think they should strive to be more, for lack of a better word, scientific. I think we all should.
In person or when speaking to large groups, I don't argue in this way. I argue in the way that works: appeals to emotion, anecdotes, and more than anything else stories. But yes, I think that when I do that, I am worse - morally worse! - than something like this, on the internet, where we can have a purer form of debate.
Two notes - if anything, making an emotionally-compelling statement with no evidence is closer to using rhetoric as a cudgel than presenting competing evidence without detailing the entire history of the field of peace studies is. More pressingly, to that point, I think you need to weigh my post against what I was responding to. This is where I disagreed with Grumble as well: how much is it really reasonable to ask me to spend the time typing out paragraphs of context to caveat my claims in response to no evidence at all? I think from a certain point of view what I did was exactly right: less work up front and now people who are interested (you, grumble) engaged and I can spend my time talking to people who might actually change their minds - or change mine.
A theme running through both your and grumble's posts is the idea of cudgeling - the idea that I was, basically, mean. And so I was. I think there's an argument to be made that being mean reduced the effectiveness of my actual point. I need to think about that more. As far as the ethical/moral dimension goes, though, I find myself bemused. To me, tacit calls for violence or rationalizations of the same are so much more objectionable than someone responding to them by using science/fact/academia as a cudgel. Why is it that the latter is what sticks out to people and the former does not?
Fact is better than vibes, I agree, but we rarely have fully agreed-upon facts for anything at a more complex level. This is going away from my original comment, so I don't intend for the intricacies of the level I'll go into here to apply to the prior argument necessarily, because what prompted each response is different. Even simple 'facts' invite contention. It's simple to say it's a fact that X amount of burglaries occur, citing an FBI source or local police department sources etc. if I want to constrain the argument to a locale. But even if you argue that their tally is 100% correct, that the police or FBI encountered or discovered exactly that many burglaries, you still can't fully agree on the facts of that number, because there can be disputes over how it comes about. The local police department puts more resources into patrolling neighborhoods with higher reports of burglaries, so now they've discovered more burglaries. If they put fewer resources into it, does that mean there are fewer burglaries, or less discovery of burglaries? This is also the basis of all the varying conspiracies on the less factual side of the facts surrounding autism. Factually, rates of autism are increasing. Or maybe they are not, and we're just putting more resources into diagnosing them.
So the reason I want to disentangle this response from the others is that I realize it veers toward saying 'nothing is fact', or some interpretation along those lines, and that's not really my intention either.
You're right that it wasn't competing analysis saying it was, though I personally believe that violence and nonviolence work off each other in non-discrete ways and amplify each other's success, which was my motivation for looking into how that source defined things; I don't believe it can be distilled into something that simple. I do agree that claims need to be supported on some level, especially when stated as strongly as fact, but alternatively, sources like yours have their own complications. I don't know that I would have invested as much into the reply if you hadn't attempted to use it the way you did, meaning it's not the source itself that I had the most issue with, it's the way it was used.
For a source like that, the degree of effort required to cite it is substantially lower than the degree of effort required to vet it. Not only is the book not publicly available for free through official means, it's extremely lengthy. The freely provided supporting material in your link is also lengthy, and because of the concepts it addresses, it uses overly complex descriptions that abstract away the simplifications and assumptions it makes, which makes it a more laborious read.
I don't know how I feel about this, if only because I almost fell for the same trap. I still don't know about the potential source I was going to cite, but there was a published critical response to the book you cited by a professor at Cambridge University, Christopher Finlay (now with Durham University), and my initial thought was: well, he's a political science professor at Cambridge University, it must be reputable. I tried to look into him a little more, read a little bit of something else by him, and came away skeptical of him. Not that I know for sure he doesn't have valid things to say; I just didn't know if I understood what I had read, so I didn't want to rely on him for his status alone.
So instead I set out on the more laborious process of downloading the book cited and illustrating with a specific example why I think the data is flawed on a fundamental level. I could have simply argued against it without citing anything, but then your cudgel of 'actual data' wins out, because I would have no data. That's where I think the problem comes in with using 'data' as a cudgel: you didn't vet it, but you made me vet it in order to respond. I personally think using data, science, and facts in this manner contributes to anti-science rhetoric, because it's unrealistic to expect most people to be able to devote the energy and effort needed to do what I did. I was only able to bring myself to do it because I'm unemployed and have no life. I recognize that your response was to someone claiming a fact without evidence, so I realize you didn't just say it unprompted or for no reason, but I don't know that it's the right type of response for that circumstance.
I agree completely that facts and "facts" can be weaponized. No notes on that. However,
I did vet it. I believe it's correct. Reasonable disagreements about what belongs in the dataset don't invalidate the entire work, and again, scholarly consensus agrees with it. You're totally right that the gish gallop is a real thing, but I think you'd agree that that's clearly not what's happening here. You're raising more good points about real concerns, but I don't think they're super relevant to this discussion specifically. If I had dumped a bunch of shitty opinion pieces from Breitbart or something, for sure, but I posted one book/site from a respected liberal scholar at a respected liberal institution. I don't think the existence of the gish gallop means that every source requiring the reader to do some work is that. I admit that the line is blurry, though.
Honestly, friend, I'm not even sure what you're arguing anymore. Yeah, it's a complex source, but it's a complex topic. I tried to link the website instead of the book specifically so that people could at least see something; what more could I reasonably have done? That's a serious question - this is important to me, I want to convince people, what more could I reasonably have done? Echoing your exact concerns about having to expend more work to disprove something than to prove it, I chose not to put in the effort to quote at length out of the book/website because what I was arguing against was no source whatsoever.
Edit: having considered it more, I could have been nicer. I could have added a caveat that sociopolitical questions are always debatable. I'm not 100% convinced that would have been more compelling, more convincing, but it's worth giving more thought at the very least.
I didn't know that term, so thanks for enlightening me on that. You're correct, I agree that isn't what was happening here. My apologies for assuming you didn't vet it; I made that assumption because there wasn't much extrapolation on the data within the comment that linked to it, so clearly that assumption was wrong.
To try to simplify what I saw: I saw a source that seemed presented as authoritative and comprehensive, and not necessarily vetted (what I thought at the time). I did not see anyone responding to that data (other than the parent commenter you were refuting) or discussing its validity at all; it was just sitting there as though it was the be-all-end-all of the argument. I found this inherently worse than a fully unsubstantiated claim presented as fact, because at least it was clear to everyone that no source had been presented for that claim, and it was reasonable to assume it was that person's belief that it was fact rather than assuming it was backed by good data.
So my perspective was that it accomplished what it appeared to be designed to do, which was eliminate the opposing argument by making the validity of the source/data too difficult to investigate and establishing a counter-narrative as fact. So rather than the prior comment's unsubstantiated claim, that violence was a part of most progressive movements, being the final statement of fact on the matter, it was replaced with a new claim substantiated by data presented as more accurate and comprehensive than it was. It's "The actual data", the one and only authoritative set of data, so unless someone can prove it wrong with other data, and somehow prove that that data is better data, then obviously the conclusion must be right: nonviolence is more successful.
But what isn't inherently obvious about data like that from a distance is that the simplifications they make to support those claims are so great as to not really be factual in any objective sense of the word. I literally picked out the very first example in the book; I didn't go cherry-picking through it to find it. The very first thing I read presented seemingly quite favorable conditions for their core contention, so it wouldn't necessarily be the most compelling one for me to use to disprove it, but even then I found it so flawed that I figured even that was good enough to use. I have no doubt that I could find issues with nearly every single one of the cases they go through based on that initial one. I felt confident that highlighting that one example would illuminate how simplifying the data the way they attempted just doesn't make sense and is fundamentally flawed.
So yes, I agree, I don't believe that to be gish galloping at all, nor do I think it was necessarily malicious or anything of that sort. But that book is almost 300 pages long, and it's not the same as looking up crime statistics or the like, which have far more research and authoritative sources behind them with much less simplification, making the argument easier to digest and refute. To drop that as "The actual data" without covering anything about what is within it or what that data actually is, I thought, created a barrier so high that even someone inclined to debate it wouldn't, because the cudgel of 'fact' left only a very high-effort way to do so. Even if someone presented multiple notable examples refuting it, those would be dismissed as anecdotal cases against the much greater number of data points covered in that book.
I agree with you that you were arguing against a claim with no source whatsoever, and therefore it isn't really fair that you should have to put in that much more effort to argue against it. I did mention this earlier in this comment, and perhaps this is unique to me and not something that applies to others, but a claim stated as fact without any source at all is less concerning to me than a claim purported as fact with a source presented as more authoritative than it is, because I think people are more willing to take in and believe the latter than the former. I simply view someone who makes a claim without any source as expressing their belief that it is fact, and I guess that is how I am able to find it less concerning, as I presume that is what other people do when encountering such claims. It's probably specific to the context of this site; claims stated as fact without substantiation in other contexts might concern me more, as I would worry about the capability of the audience for those types of claims more than I do here.
I really didn't see it as mean. I think it was incorrect of me to say that phrasing was disingenuous, I did think it at the time as I didn't reasonably believe that the data was all that strong but I understand now that you were trying to elevate the discussion. I just thought it shut down the conversation too easily in a way that wasn't befitting the veracity of the data.
Very interesting. That's reasonable. Clearly this can happen and can be dangerous, I'm forced to agree with you there. I guess I feel like everything is a matter of degree, and referencing a known, non-crank academic work isn't the same as referencing a paper about vaccines being fake or something and hoping nobody checks it. Inherent in that though is your earlier point about the complexity of the work and that few people reasonably could check it. Hmm.
As a note, I think LLMs are helping here. They are imperfect and require a certain amount of base knowledge to use, but you could ask one to, if not summarize a source, tell you where the source lies in the canon. Meaning if you gave one the infamous debunked autism/vaccine paper, it'd tell you that it's been retracted. That requires you to trust the model of course but in a situation where you're otherwise unable to assess a claim it's much better than nothing.
As you know, I disagree on the specifics. My question is: where do we draw the line? Any attempt to understand the original question will by necessity involve simplification. The reaction you cited from Martin, for example, notes that Chenoweth's data was simplified and then explicitly says that it made sense for Chenoweth to do so. Not that everything is fine because Martin said so; my point is that if even mainstream critical views recognize the necessity of the simplification, that suggests it really is unavoidable.
I get where you're coming from and I understand the connections to the whole epistemic bullying with complex sources thing. But at the end of the day we have to make claims and we have to try to understand the world. We have to use imperfect, simplified data.
Thanks for the discussion.
Non-violence is the preferable route and violence should obviously only be a last resort, but I'm a bit skeptical of that book's claims, tbh. Or at least I'm not sure how much it applies in this case.
It appears to be 15+ years old, so it won't have the full perspective of how badly Occupy Wall Street ultimately failed, or how BLM ultimately failed. Even the civil rights movement was only partially successful, and that success came at the cost of decades and centuries of millions of deaths and immense suffering, and the success only ultimately occurred due to numerous riots and a fear that the country would be destroyed after MLK's assassination.
I think it does apply more to cultural change, like you said, but even something like gay marriage being accepted took decades of deaths and suffering just to reach. Women's rights and civil rights (which I presume the book considers successes) are currently being actively rolled back, and protests are being met with violent opposition.
In regards to the AI companies currently upending the fabric of society, I don't think the young people having their futures stolen are going to be particularly receptive to the idea that if we just keep asking nicely we may be able to partially change things in 50 years or so, and that's assuming the damage can even be undone at all. On top of that, there have been practically no consequences of any kind for the perpetrators of said awful things. Why would they want to essentially roll over and take it and sacrifice their lives just because school textbooks (often controlled/written by close allies of Epstein) told them non-violent solutions are the only way? The rule of law has broken down and isn't respected by the leaders of the country themselves, and those leaders constantly show that might makes right and that there are no consequences as long as you succeed, so why wouldn't they also believe that?
None of this is to say that I condone or encourage violence, only that it's inevitable when you give people no other effective choice. And we are getting dangerously close to that point, if we're not there already. I am still holding on to a small glimmer of hope that we can see peaceful change, but that glimmer grows a little dimmer every day. I genuinely hope to be proven wrong and we're able to turn things around.
I understand, but the failure of several high-profile progressive initiatives doesn't disprove that nonviolence works more often than violence. Anecdotes aren't data! I could just as easily point to high-profile progressive successes: the legalization of gay marriage, the Inflation Reduction Act, the Affordable Care Act, etc., none of which were included in the dataset either.
????????????? This is a pretty radical ad hominem. Maybe true, I guess, no idea, but I don't think it's supporting your overall argument.
I can't tell you what you believe, obviously, but these words, at the very least, condone violence. I suspect that you're critical of Republican dogwhistles? This is a dogwhistle:
Re: the textbooks - see Robert Maxwell's connection to McGraw Hill textbook publishing, Epstein, and of course Ghislaine. The ties are suspicious, to say the least. This also doesn't fit the definition of an ad hominem. The point is students are largely taught a singular viewpoint in school in most cases, and the company responsible for that viewpoint has a vested interest in making sure that it's the only one even considered valid in any way. Even if there is no substantial connection there, the association with Epstein et al is enough for many to at least call into doubt what those textbooks said, especially in light of how influential Epstein et al have been in controlling the narratives in society these days (see gamergate, the recent trans panic, etc)
Speaking of ad hominem, if you're that concerned about them, I'd appreciate an actual reply to the substance of my comment rather than a few lines about alleged dog whistles. If anything, that's closer to an ad hominem. I am also concerned far more about the police murdering innocent people than I am about one person assaulting a single cop. I don't approve of either, but the scale and severity of the two are disproportionate. It's also factually true that this country was founded on violence, which it continues to praise to this day. Being able to see the direction the winds of history are blowing and how those winds pick up speed is not the same as agreeing with said winds.
Edit: in hindsight, I think this is probably where I call it quits on Tildes. This site is just turning into another HN/Reddit, and I don't really like putting in the effort into writing comments just for some AI proponent to write dismissive, insulting comments that show they clearly did not even bother to read a few sentences into my comment. I guess that's the nature of social media, though. The internet was a mistake. Leaving this here for posterity.
To be frank, this is not at all surprising, and in my experience that's the direction practically every progressive space online has taken, under the guise of "well, what's next, you want literal Nazis colonizing our discussion?" Whatever the progressive stance is must be followed, or you get shunned, or at the very least it causes a huge drama and rift in the community. AI just happens to be the topic where you disagree with that stance. Frankly, it's made some unrelated hobby websites incredibly insufferable (thankfully most can still be enjoyed, just without participating in the forums).
Well, you know the saying "you can't be neutral on a moving train". It's clear who's driving the train at this point.
I don't think calling it a "culture war" is proper when this technology is keeping the US GDP above water while displacing millions of jobs, and is being wielded by the POTUS to dire effect. You can be completely disengaged from AI and still be affected by it. That starts to extend beyond a "culture war".
The other aspect is hinted in the first paragraph; I don't think this administration is going to do much to regulate the tech. So when the soapbox, ballot box, and jury box all fail... I don't condone it, but I simply see this as an inevitability.
Loved this quote:
I'm not sure why anyone needs to spend so much time and effort building an "AI policy" when the answer is simple: give working people a way forward. People think cryptocurrency is dumb, for instance, but it didn't garner significant political opposition until it started to spike GPU and electricity prices. LLMs are doing that on a whole new order of magnitude. Of course people will oppose a policy that will take their job, make remaining jobs more miserable, and drive up the cost of living. Until AI companies meaningfully address that concern, they're going to grow more and more unpopular.
Politicians talk all the time about "creating jobs" and sometimes this happens, but at scale, creating new jobs is apparently easier said than done and people continue to worry.
It's actually not as hard as they make it. But it takes time and requires them to go against the wishes of donors and lobbyists who want to offshore as much as possible. The incentives simply don't align.
I just read the quoted parts, but if I got the gist of this piece right, it's a populist article directed at an AI-enamoured audience, trying to paint AI-critical people as uninformed and unthinking when in reality the opposite is more true (save of course the marginal lunatic faction that will always exist).
It's a perfectly legitimate position to say the current forms of publicly available AI are 'manufactured by out-of-touch billionaires and pushed onto an unwilling public to achieve sinister aims'. Previously, disruptive technology was accepted despite being disruptive because it solved real problems and/or created real efficiency on a societal scale. By contrast, AI is being force-fed to gazillions of people at enormous cost, people who want nothing to do with it because it does not solve anything for them and instead creates a bunch of new issues and inefficiencies.
If the purpose of Big AI was to genuinely help society, the models would be tailored to address specific issues and be much, much more effective at doing so. Instead, because the companies behind these models want to rule over the rest of society, they have chosen to try to make "everything machines" that are shitty at almost everything they try to do and that have to indiscriminately devour all publicly available data in order to function - and all restricted (copyrighted) data on top of that. The latter is stealing. Why would anyone who actually wants to help go about it in this manner? It's either strikingly incompetent and morally callous, or it's driven by a desire to dominate and oppress.
It doesn't take a rocket scientist to see what's what, just like it was easy to see the current US president, before he was elected (for the first time but especially the second!) as somebody I don't trust enough to even mow my lawn, not to mention granting him any sort of leadership position. It's not that hard, people.
What do you make of all the people who do find AI useful?
Two things can be true at the same time.
But my comment isn't about the people, it's about the companies. Why is it not enough for them that some people find their product useful? Why are they doing this Clippy on steroids thing? It's not populism to ask why the emperor doesn't seem to be wearing much more than a pair of briefs - fancy as those briefs may be from some people's perspective.
Got it. I misunderstood - rereading your original post, it's clear that you didn't mean everyone using AI is being force-fed, but rather that some people are.
Myopic at best, and actively lying to themselves at worst, as the profit incentive is to say it's great.
What I took from the article is that, while it’s possible to be a smart skeptic of AI, this is not the way to bet. There are inevitably going to be a lot of uninformed people complaining about AI who know very little about it but are sure it’s bad. Compare with populist beliefs about vaccines or the pandemic or 9/11 or child abuse or foreign aid or trade.
This seems to be true of many hot-button topics these days. Uninformed people on both sides make lots of noise while saying things that are wildly wrong about the specifics.
Of course, by taking a position it’s possible to be “directionally accurate” by coincidence. I don’t really consider that a “legitimate position.” There is more to making an argument than being on the right side. You also have to avoid repeating falsehoods.
There are already hordes of uninformed people acting as AI boosters and unintentionally sowing destruction in workplaces and universities etc. Does the article mention them?
Yes, there are people like that too and that’s bad. That’s not populism, though? The article isn’t about the managers.
Although, I suppose the OpenClaw craze is a kind of influencer-driven populism. And there do seem to be lots of ordinary people using ChatGPT in inappropriate ways?
I was going to say if the article isn't populism, then it's propaganda. But I wanted to fact-check myself and glanced at some of the other articles this person wrote and she seems to be coming from a good place.
It's just very poor logic to name one stance, and not the other, as "something populism". Critical and supportive arguments can both be populist. Trying to cement this term to describe one side only seems deliberate and manipulative (almost like some big foot in the AI camp asked her to do so), but maybe it's just a case of a young journalist trying to become relevant by twisting language in hopes that it takes off so that she'll be able to say "I coined the term 'AI populism' " on her socials bio. Or something?
I think she sees populism on the anti-AI side and the people she interacts with who are pro-AI don’t come across as populist. It sounds like you want to find sinister motives for that.
well of course. the gamble with AI has paid off handsomely so far. But I'm not here to gamble.
You don't need to be well informed about the tech to know that a data center in your area will double your energy bill despite being of no use to you. Nor do you need to know the underlying tech to be affected by some layoff that blames AI. It's bad to you because it's actively making your life worse for no returns.
It doesn't help that there isn't any real "AI expert" out there. From those we see researching or working directly with it, you'll see extreme statements on both sides. The tech is in the wild west.
From the ethos side, in terms of philosophy, futurism, and economics, I haven't seen many pro-AI arguments. At least not ones that line up with reality. UBI is a pro-AI argument, but very few seem to think we're on that trajectory, for example.
Has anyone had their energy bill double due to a data center? What is that based on?
You don’t need to know anything to repeat rumors. If you want to avoid spreading misinformation, these things do need to be checked.
https://www.consumerreports.org/data-centers/ai-data-centers-impact-on-electric-bills-water-and-more-a1040338678/
https://www.congress.gov/crs-product/R48646
https://www.cmu.edu/work-that-matters/energy-innovation/data-center-growth-could-increase-electricity-bills
https://stateimpactcenter.org/insights/data-centers-straining-the-grid-and-your-wallet
https://hls.harvard.edu/today/how-data-centers-may-lead-to-higher-electricity-bills/
A few real world examples, and many studies and projections from pretty much all sides of the aisle. I don't think this is just a "rumor" anymore.
I’m not going to check all of these, but I did check the first anecdote in the first link:
With ChatGPT’s help, I was able to find the rate schedules for the city of Manassas. In 2016 it was $13.59 per month plus $0.0830 per kWh and the current rate schedule is $16.17 per month and $0.0984 per kWh, or about an 18% increase over a decade.
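For what it's worth, the arithmetic behind that figure is easy to reproduce (a sketch using the rate schedules quoted above; the 1000 kWh monthly usage is my own illustrative assumption, not from the article):

```python
# Rough check of the Manassas rate comparison above.
# Each schedule is a fixed monthly charge plus a per-kWh energy charge.
old_fixed, old_per_kwh = 13.59, 0.0830   # 2016 schedule, as quoted
new_fixed, new_per_kwh = 16.17, 0.0984   # current schedule, as quoted

usage_kwh = 1000  # assumed monthly usage, for illustration only

old_bill = old_fixed + old_per_kwh * usage_kwh
new_bill = new_fixed + new_per_kwh * usage_kwh
increase_pct = (new_bill / old_bill - 1) * 100

print(f"2016 bill: ${old_bill:.2f}")
print(f"Current bill: ${new_bill:.2f}")
print(f"Increase: {increase_pct:.1f}%")
```

Because both the fixed charge and the per-kWh rate rose by roughly the same proportion (about 19% and 18.5% respectively), the total stays in the high teens at any usage level; a doubled bill would need some explanation other than the posted rate schedule.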
So, something doesn’t add up?
Here is a news story about how Manassas is considering a 10% increase.
Also, for a Californian these rates look extremely low. We are paying about .30 per kilowatt-hour, three times as much.
Yes, clearly the impact of a recent data center. My point was that dismissing this as some "rumor" is incredibly disingenuous (and bonus points for trying to fact-check someone else's lived experience with the very thing impacting their life).
Yes, that's an issue as well. Such a big issue that some governor candidates are running on taking PG&E down a notch. I saw it right around 2023, when my bill nearly tripled despite no significant usage increases (heck, it should have gone down because I had fewer people in my house that year).
That's not due to data centers, but it is a big issue in the state.
I think the reporter should have asked to see their electric bills and studied them. Although the quote was included to show someone's "lived experience", there is also a factual component to it. A person can be angry but wrong about their power bill.
There could be innocent explanations for the discrepancy. Maybe that house is near Manassas but not in the area where the electricity rate is set by the city? But as it is I’m doubtful that their electricity bill went up due to an enormous, sudden rate increase and the high bill might be due to some other reason like more usage due to cold weather.
Also, I never argued that data center power usage isn’t a problem. I am skeptical that any retail customer’s electricity rate doubled like you suggested. That’s the part I’m discounting as rumor.
From the article:
[...]
I've been taking trade-skills night classes with a lot of people straight out of high school (18-20). It seems like more than half of them were high-performing students who all elected to put uni on hold to do gig work all day and weld pipes till 11.30pm. Speaking to them, there is a sense that they don't even have faith in high-school exams, so a degree is out of the question. Their experience of education was watching every standard and metric go down the toilet because of LLMs, and they could at least make a somewhat informed decision on what direction to go.
Shift that to people in their mid 20's and I can't begin to imagine the absolute dread of making the "smart" choices and graduating into instant redundancy. Many people made deliberate choices to lean into AI; how many positions are actually out there if the technology should at best replace 10% of workers today? And it's clear companies would prefer no workers altogether. To top it all off, it seems like Sam Altman has a habit of only being right about the bad parts of AI. Massive job losses. Major roll-out of disruptive data centers. Resource shortages and outright wars. Bad for regular people. Great for investors, though. Can see why they keep giving him money.
Haven't seen much of that abundance come around, though. But with this trend, I can see why people are panicking. His vision of the AI apocalypse has already started. Hell, the Iran war kicked off with some model painting a girls' school as a valid target. No one else is taking responsibility or will be held accountable, so it's all on the technology. What's to stop the government from gunning a regular person down and saying that the AI did it? We already have video of ICE murdering two people, and there hasn't been a peep about that since.
I've always been vocally anti-AI. Probably started as far back as 2017, when we tried to use early TensorFlow systems for Big Data analytics. And the thrust of my arguments has always been that it doesn't work the way a reasonable tool should.
But now it's just another turd in a shitstorm. A data center is drawing more power during an energy crisis because of a needless war waged on behalf of a rogue nation that has effectively captured the US government in service of overrunning the Middle East, a war that will lead to an even greater migration crisis and be used as fuel for far-right movements. The power draw also puts more pressure on the climate crisis that has been ignored for several decades at this point and is already destabilizing agriculture, which will take a further hit from the agri-chem shortage caused by that same war. The data center uses water too, so that's going to drive up two utilities and possibly municipal taxes to treat the waste and service the infrastructure. And if tax/rate-payers are subsidizing this tech, businesses will likely be forced to raise prices...
It's a very hopeless situation all round, and people are going to lash out when there's no obvious thing to do.
Good lord... is "abundance" just trickle-down economics wearing a progressive skin suit?
The core idea behind Abundance is increasing state capacity to do things, as well as the supply of infrastructure and programs (housing, energy, transportation, healthcare). In some cases, that means simplifying regulations (zoning is a big one) or eliminating them (certificate of need for hospitals). In other cases, it means rebuilding a civil service that is competent and capable of building projects without having to hire consultants. Philosophically, it also means asking why a process needs to be the way it is (and not just accepting "because that's how it is" or "it protects the environment" without specificity), and recognizing that the general public cares more about results than about process.
To give an example related to my career field: in the U.S., nuclear regulations fall under 10 CFR (Energy), 40 CFR (Environment), and 49 CFR (Transportation). In some cases, there are multiple sets of radionuclides to keep track of (for, say, shipping versus disposal), and the regulations can be contradictory. Can we simplify them to make things easier to understand, easier to administer, and less likely to produce violations?
I'm aware of the definition, having read the book.
Obviously abundance and trickle-down economics are different. But especially if you focus on the "roll back onerous regulations" and "build faster" parts of abundance, there are a lot of similarities.
It's a shame that it feels like the core theories of abundance are more or less co-opted by the people who also tried to make trickle-down economics a thing.
They hear "reduce regulations" and think "yeah, we need smaller government," despite the core values requiring more government intervention to realize their aims. They hear "reduce the bureaucracy" and think "okay, we just need to lay off a lot of people," never considering that some processes need fewer admins and more on-the-ground workers to succeed.
I think that's where the "trenchcoat" comes in, because these people (perhaps maliciously) interpret the philosophy wrong from the get-go and use it to justify stuff they already wanted to do.
I agree there is some overlap (and a risk for co-opting), but I personally don’t see strong similarities. What concerns do you see in “build faster,” and what would you propose instead?
Is that inherently suspicious to you? It seems normal that some ideas from Reaganomics/supply-side economics were good and some were bad, and that the policy ideas that come along today will pick and choose.
I'm not sure that rolling back the regulations that prevent buildings full of single-room-occupancy units or actual boarding houses equates to trickle-down economics. One reason behind homelessness is the lack of small-scale cheap housing compatible with earning minimum wage.