This is one of Eliezer's worse essays, in my opinion, which I say while both having a high opinion of him and completely agreeing with the premise. (Basically: we shouldn't firebomb Sam Altman's house.)
What I disagree with most is his repeated framing of predictability as the thing that distinguishes legitimate state violence from illegitimate vigilante violence. 1) Much state violence is not predictable, and people can't always avoid violence just by following the law; ask anyone who's experienced bad policing. 2) Much vigilante violence is predictable, and that doesn't make it okay. 3) More to the point, if you believe that ASI will kill everyone, then legitimacy is probably completely irrelevant. In that case, the real argument should be that vigilante violence won't work. Eliezer does touch on this, but if that's what he's thinking, the entire essay should be about that.
My guess would be that this essay falls into Yudkowsky's fatal flaw. His self-image is "guy with unique thoughts." (To be fair to him, he's often right about that.) To remain true to that self-image, he tries to derive basically everything from first principles. Sometimes that leads to genuinely novel insights, but more often it ends up like this - where there are already a billion pages of political philosophy exploring the differences between state and nonstate violence, and this new argument ended up being both unnecessary and distracting from his main points.
Maybe I would need to dig deeper to be more sure of this, but just from the content of the essay, I would assume that Yudkowsky agrees or mostly agrees with all three of your numbered points, though I'm less sure on 3).
I don't think he even went all the way to designating state violence as legitimate if predictable; he simply named predictability as one possible prerequisite, one that makes a moral difference.
Also, we can't look inside the guy's head, but I reckon there's something true to your last paragraph. Well put.
Yeah, I also think Yudkowsky would agree with those points. Whatever his faults, he's an intensely moral and thoughtful guy. I just don't think he argued them very well, if that makes sense.
Interesting, that is not at all what I gleaned from your original comment.
Thanks for clarifying.
I do agree that he doesn't really argue them at all, doing barely more than name-checking them (with the assumption that readers will mostly agree) and moving on to the conclusions he draws from them with respect to AI.
I hate this framing. I'm going to back up from the AI angle, but the concept of state-based violence as legitimate (via predictability and institutional source) and non-state violence as illegitimate is bunk. That framework does so much heavy lifting for justifying institutional violence, as it morally encodes it and presents it as something neutral and procedural. It allows for unbalanced presentation of events, e.g. hostages vs. prisoners, terrorists vs. soldiers, rockets vs. airstrikes. I think we've spent the last few years watching the consequences of that perspective bleed out throughout the world: invasions, bombings, and genocide.
It feels primed to stoke the neoliberal fear of change. Keep to the laws we know, the violence we know, because a different violence will be worse. And then it descends into the usual fear mongering of life or death rhetoric and threats to justify the state violence. I'm not going to be calling for the death of anyone specifically, but this perspective is why we don't see change in our institutions. MLK only makes gains if there is the threat of a Malcolm X. The state, the monopoly, and the institution only make change when there is a threat behind the complaint. It's worth considering that changes were made at United Healthcare following the murder of Brian Thompson. We can have think piece after think piece on how we need to regulate AI, but it might actually take some localized violence for the likes of OpenAI to get aboard that train. If you can't just steamroll the populace without consequences, you're more likely to play ball.
He didn't say predictable violence was legitimate and unpredictable violence was illegitimate. His argument is that predictable violence is useful for enforcing policy, and unpredictable violence isn't.
It's an argument for Pavlovian conditioning across the board. If people knew they would get a speeding ticket every time they sped, almost no one would speed.
It doesn't matter how many Molotovs get thrown at AI CEOs. It won't stop AI development. There are trillions of dollars on the line. They'll just build fireproof mansions.
His argument is that the only way to actually stop this, globally, 100% of the time, is a wide-ranging treaty that says if you keep pushing the limit outside of the agreement, you get a missile shot at your data center, similar to what we do with nuclear proliferation.
I think that makes sense, although I don't know if I agree that we're in as dire a situation as he makes it out to be.
I see two arguments in your comment - that the distinction between state violence as legitimate and nonstate violence as illegitimate is nonsense, and then separately that MLK only wins if Malcolm X exists - that states only bend under the threat of force.
Regarding the legitimacy of state vs nonstate violence, I think you're absolutely correct that the framework itself helps states launder their actions into neutrality/morality. It's a good point. For the rest, are you using legitimacy to mean something like moral/good or a more traditional definition? I'm not really sure how you're framing it and I don't want to strawman you.
Regarding this argument,
It feels primed to stoke the neoliberal fear of change. Keep to the laws we know, the violence we know, because a different violence will be worse. And then it descends into the usual fear mongering of life or death rhetoric and threats to justify the state violence. I'm not going to be calling for the death of anyone specifically, but this perspective is why we don't see change in our institutions. MLK only makes gains if there is the threat of a Malcolm X. The state, the monopoly, and the institution only make change when there is a threat behind the complaint. It's worth considering that changes were made at United Healthcare following the murder of Brian Thompson. We can have think piece after think piece on how we need to regulate AI, but it might actually take some localized violence for the likes of OpenAI to get aboard that train. If you can't just steamroll the populace without consequences, you're more likely to play ball.
I disagree pretty categorically. I don't think vigilante actions are justified either morally or practically, and I think that your core argument - that localized nonstate violence is sometimes not only justified but necessary - collapses because there have been many, many progressive changes that did not rely on violence whatsoever.
the usual fear mongering of life or death rhetoric
Aren't you arguing in this specific post that the death of CEOs is sometimes justified?
I'm not going to be calling for the death of anyone specifically, but this perspective is why we don't see change in our institutions.
But we do see (positive) change in our institutions. Look at the sweeping change in Americans' views toward LGBT people, culminating in the recognition of same-sex marriage. Look at the passage of the Affordable Care Act. No violence necessary. There are many examples of progressive changes in the last few decades. No question that there's been backsliding, but if the crux of your argument is that violence is sometimes necessary, why was it not necessary here?
MLK only makes gains if there is the threat of a Malcolm X. The state, the monopoly, and the institution only make change when there is a threat behind the complaint.
It's very difficult to conclusively prove that MLK could have made gains without Malcolm X, but scholars generally believe nonviolent movements are historically more successful than violent ones. See Chenoweth's work, for instance. Yes, one of the most common counterpoints to some (but not all!) of Chenoweth's examples is exactly what you said: theorizing that violent organizations lend a threat that strengthens the nonviolent organizations' position. But I think your position is pretty strongly overstated here.
The state, the monopoly, and the institution only make change when there is a threat behind the complaint.
Maybe, but I think it's wrong to imply that the threat needs to be violence. A threat to a politician can (and should) be something like voting them out, not shooting them.
It's worth considering that changes were made at United Healthcare following the murder of Brian Thompson... If you can't just steamroll the populace without consequences, you're more likely to play ball.
Respectfully, the fact that things changed in no way, shape, or form justifies his murder. Not only does it not work from a moral perspective, but it doesn't work from a practical perspective. The insurance system hasn't changed. United Healthcare barely changed. Here's what changed:
United Healthcare now requires ~30% fewer prior authorizations, yes. But even before the murder, they only required prior authorization on less than 5% of claims, so reducing that tiny slice of the pie by ~30% is functionally nothing.
The system that produced the incentives for UH to require prior authorizations and deny claims hasn't changed in the slightest.
Yes, we need consequences - but "consequences" here should mean legal consequences, not "you might get shot."
I disagree that the text argues this:
the concept of state-based violence as legitimate (via predictability and institutional source) and non-state violence as illegitimate is bunk.
Separately and personally, I would rather say that state-based violence lends the state's legitimacy to the violence. How much legitimacy that state has to dish out said violence in the first place is an entirely different matter. I believe that some state violence can be very legitimate, and that the state's monopoly on violence can much reduce the level of overall violence. At the same time, this concentration of power is obviously ripe for abuse and introduces a risk of tyranny in itself if the culture and institutions around the executive are not trustworthy, as we can often see.
Surely, secretly programming ASI is easier to do than secretly developing a nuclear weapon.
I generally agree with you, but from my understanding basically all major machine learning breakthroughs were contingent on an incredible amount of computing power.
This seems to be why the part of the essay focusing on treaties mostly just mentions hardware control, considering top-of-the-line hardware necessary for creating superintelligence.
There are not actually many companies or factories that can produce this kind of hardware.
This seems very close to how we control nuclear weapons, by controlling weapons-grade uranium and processing facilities, although I agree it will be quite a bit harder.
The only issue I see with this is that while GPUs are some of the most sophisticated technology ever created, and honestly much harder to create than it is to enrich weapons-grade uranium, they also have far more uses than frontier AI research. They run everyday AI tasks, they render graphics, they get put in consoles and gaming PCs. Short of shutting down all those other non-frontier use cases, I think it would be hard to control one and not the other.
You're right, of course, some of that would be caught within the blast radius.
Though to cherry-pick one of the examples, I think normal gaming GPUs are not particularly useful for frontier AI research, and I also don't necessarily think we need to make the more advanced chips available for many consumer tasks.
I don't think the big issue with ASI is human extinction,
For what it's worth, I agree with you, but Yudkowsky certainly would not, and that's the key part of your post. Yudkowsky is certain that ASI means human extinction, and so he would respond to basically every element of your post above detailing how it's nearly impossible with "sure, but it's that or we all die." (I know this from his other essays).
I mention that because the framing is tricky - people often end up talking past one another. For me (and you, it seems), the actual feasibility of implementation is what matters, but for Yudkowsky and other anti-extinctionists, feasibility is sort of irrelevant, because the alternative, as they see it, is certain death.
The discourse in the article is mostly on Twitter or Twitter clones. These are famously reductive forums for philosophical debate. I think this setting could be artificially narrowing the author's perceptions of reasonable options.
Let me assure you that it is not the setting that is making him unreasonable. Yud has famously been fixated on this topic for some 30 years and has almost never had a reasonable take on any related topic.
I so, so much agree with you here.
For people who heard something about Yudkowsky calling for airstrikes on datacenters, this article seems like a useful clarification:
If an ASI ban is to accomplish anything at all, it has to be effective everywhere.
…
ASI is a product that kills people standing on the other side of the planet. Driving an AI company out of just your own city will not protect your family from death. It won't even protect your city from job losses, earlier in the timeline.
And similarly: To impede one executive, one researcher, or one company, does not change where AI is heading.
…
Even if you're desperate, an outburst of violence usually will not actually solve your problems!