Klein continues, “Is this policy debate about anything? If it’s so fucking big but nobody can quite explain what it is we need to do or talk about — except for maybe export chip controls — are we just not thinking creatively enough?” Buchanan, who describes a reluctant and slow process of engagement with AI by the Biden administration, responds, “I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you’re doing and why.” That they find themselves at the same impasse as a congressional subcommittee from the Eisenhower administration is a little bit funny, but it’s also important. They’re describing — or at least responding to — the same sort of story in the same sort of way: an imminent and unstoppable big deal that changes everything and about which nothing can realistically be done.
AGI, like G-less AI, automation, and even mechanization before it, is indeed a story, but it's also a sequel: This time, the technology isn't just inconceivable and inevitable; it's anthropomorphized and given a will of its own. If mechanization conjured images of factories, automation conjured images of factories without people, and AI conjured humanoid machine assistants, AGI and ASI conjure an economy, and a wider world, in which humans are either made limitlessly rich and powerful by superhuman machines or dominated and subjugated (or perhaps even killed) by them (Industrial Revolution 3: The Robot Awakens). In imagining centralized machine authoritarianism in the future, AGI creates a sort of authoritarian, exclusionary discourse now. A narrative emerges in which the decisions of AGI stakeholders — AI firms, their investors, and maybe a few government leaders — are all that matter. The rest of us inhabit the roles of subject and audience but not author.
Even in its more optimistic usage, the term AGI still functions as a rhetorical black hole, ultimately consuming any larger argument into which it is incorporated with its overpowering premises: It’s coming; there’s a before and after; it’ll change everything; there’s nothing we can do about it; maybe, actually, it’ll be so smart that problem-solving will no longer be our problem. (This perhaps explains why interventions like Aschenbrenner’s, and their counterparts in media and elsewhere online, tend to skip ahead to final-battle geopolitical war-gaming with China for control over the technology — at least it’s something to talk about. If AGI is an enthusiastic exercise in sci-fi world-building, war is the afterthought of a plot.) Aschenbrenner concluded his manifesto with a tellingly claustrophobic reformulation of Pascal’s Wager, the philosopher’s 17th-century argument that you may as well believe in God: “At this point, you may think that I and all the other SF-folk are totally crazy. But consider, just for a moment: What if they’re right?”
I can't really understand the overarching message of this article. I couldn't find a connecting message in the different sections; the closest I can come up with is that it's a history of past ideas about AGI and ideas about AGI now?
I'd be grateful if someone can explain it to me.
The article felt half-written to me. I think "Musings on AGI: Why I don't think it'll be as they hype, unless it ends up being as they hype" might be a good title for it.
Musings, but little in the way of conclusions. I left with "Maybe it will be good, maybe it will be bad, maybe it is just around the corner, or maybe it'll take longer". Slightly frustrating. So I'm glad to see your take on it. heh
Ah that's a better take on this than mine. Thanks.
Having an article not give any conclusions or news feels out of the norm to me, but it feels like how conversations sometimes go: just throwing whatever we think out there.
Maybe one of the things we say has an effect on someone, creates a new thought, a new question, but maybe it doesn't, and that's not a problem.
One sentence stood out to me in this article:
It’s fair to say we’re in uncharted technological territory, but that is also a truism. Where else would we be?
I don't know why, but my brain had a take on this: "Of course it's uncharted territory, of course it's unprecedented times, life's not a repeat of the past, time marches on and we enter new territory constantly, old tactics may work or they may not, things change."
Indeed. We're living in a Gutenberg moment with AI. It's unprecedented, and our public discourse lacks imagination and breadth about both the dangers as well as promises of AI.
I see many "AI, bad!" stances. And it's like, okay, but AI is here already, and it's not going away, and AGI is possibly imminent and will change everything, so we should make the best of it.
I find that many people, including liberals, are actually quite conservative. Maybe it's their way of denying their own mortality, of thinking that the past can somehow be saved, returned to, and made eternal. I often think of NIMBYs in California who try to keep cities and neighborhoods exactly the way they were when they were young. But stability is momentary and an illusion: we will die, and so too will the universe someday. Time moves only forward, and we must fully and bravely embrace the challenges and mysteries of the future.
I think the basic idea is that AI is making policy discussions hard, because we don’t know what the future will bring. That is, there are people inside and outside government who write articles about what the government ought to be doing, and now they have a harder time figuring out what to recommend because it’s hard to figure out what AI will do. Is it going to change everything?
It’s kind of a specialized audience. Most of us don’t have any influence on anyone in power anyway, so our policy discussions are pretty much just entertainment.
(Not to mention that Trump is in power and he only listens to bad ideas.)
The article also points out that there have been similar worries before. There were people worried in a similar way about automation changing everything.
Your explanation made the article make sense!
But we still discuss because it's fun, in a way. :)
Looking to the past, I feel a similar-ish thing could be the Industrial Revolution. I definitely don't think it will be on the same scale, but it changed many things about living standards and ways of life.
Back then, diseases spread and new theories were developed. Maybe, with the problems AI will create, we will develop new theories about mental-health problems?
In its current state of regurgitating information, I feel governments can make good policies for it. But if a new technology is developed (or this one develops into one) that can create new ideas, and can have creativity and thinking closer to how our brains work, we'd need newer policies.
It's exploring how the AI future is used as a political cudgel. AI is like the prospect of fusion power - if energy will be Too Cheap To Meter in 5 years, why on earth should we try to build renewables or improve the emissions of coal/gas plants? Just pour the money into speeding up fusion. And what do coal/gas companies think of this? They think fuck yeah, fusion distraction.
https://archive.is/v8ye1