This is solid. I do think AI safety research more generally is worthwhile, since we do kinda wanna deal with that well before we have anything close to AGI. But the type of apocalyptic narrative you see on the forums he describes is absolutely untethered from reality. Anyone who thinks LLMs are even close to AGI does not understand either LLMs or AGI well at all.
If LLMs had bigger contexts and a better understanding of the difference between their in-context memories and their imagination, then I can easily imagine that running them in the right kind of observe->act loop would meet many definitions of AGI (which, to be fair, is a very vague term, often used for anything from a low level of self-awareness to full-blown self-improving superintelligence). The Sparks of AGI paper explores different definitions of AGI and how GPT-4 compares, and it's not obvious that many of GPT-4's limitations can't be alleviated by scaling it up further or training it differently. Before GPT-3 and GPT-4 were trained, nearly everyone thought they had no chance of overcoming past LLM issues and accomplishing any of their current capabilities, and the people who developed them and expected them to work don't expect things to fizzle out now.
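For anyone unfamiliar with the term, here is a minimal, purely hypothetical sketch of what an observe->act loop looks like in Python. The `query_llm` stub, the `observe`/`act` helpers, and the toy environment are all invented placeholders, not any real agent framework or API; only the shape of the loop is the point.

```python
# Minimal, hypothetical sketch of an observe->act loop around an LLM.
# `query_llm` is a stub standing in for whatever model API would actually be
# called, and the "environment" dictionary is a toy world invented purely for
# illustration; only the structure of the loop matters here.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; always returns a canned action."""
    return "WAIT"

def observe(environment: dict) -> str:
    """Summarize the current state of the toy environment as text."""
    return f"step={environment['step']}, last_action={environment['last_action']}"

def act(environment: dict, action: str) -> None:
    """Apply the chosen action to the toy environment."""
    environment["last_action"] = action
    environment["step"] += 1

def run_agent(max_steps: int = 3) -> None:
    environment = {"step": 0, "last_action": None}
    memory: list[str] = []  # in-context memory carried across iterations
    for _ in range(max_steps):
        observation = observe(environment)
        memory.append(observation)          # observe
        prompt = ("Observations so far:\n"
                  + "\n".join(memory)
                  + "\nWhat should the next action be?")
        action = query_llm(prompt)          # decide
        act(environment, action)            # act

if __name__ == "__main__":
    run_agent()
```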
I'm very sympathetic to arguments that we don't have enough information and confidence right now to call for the kind of halt on AI progress that Yudkowsky argues for, but the idea that scary jumps in progress are definitely ruled out any time soon (say, this decade) is just wishful thinking.
I have a graduate degree in natural language processing and currently work with AI daily in my job as a data scientist. I cannot predict the future, but I know enough about these models, both theoretically and practically, to find the idea that merely scaling up current LLMs is remotely likely to lead to AGI utterly detached from reality. LLMs are incredibly impressive at what they do given the data they have. That does not mean they're more than what they are, and the people who insist they are already AGI, or nearly there, simply do not understand the gulf we have to cross to get from where we are now to AGI. Maybe the technology underlying LLMs could be part of some future AGI, who knows. But it is currently missing many things that just scaling up these models can't fix.
I feel like asking 'what are those many things?' would leave me with more questions than answers--or necessitate a response longer than a comment allows. I'm a layperson and, at best, a hobbyist user of generative AI who does want to at least comprehend the current and future AI dynamic; do you have any books or other resources to recommend for someone like me to start understanding what even rudimentary AGI might take in terms of technology?
I'm more knowledgeable about current technology than about AGI in theory, since that's what I work with most, so I don't have resources directly to hand. But I can point you to a few places that I know are good, and hopefully you can find other useful material from there. It won't be instant, though, so I'll do it in a follow-up comment once I've had a bit more time.
Absolutely correct. I'd even take it further: it's downright dangerous to focus so intently on fictional, doomer scenarios of AGI that you lose sight of the actual, currently existing problems with machine learning and algorithmic information processing. If you're so intently focused on avoiding the grey goo or the monomaniacal paperclip-making AI that you ignore racially biased data sets, AI-compromised data sets, the impulse to accept the pronouncements of AI and LLMs without critical appraisal, and so on, then you're doing it wrong.
"Sure, our general purpose AI is engineering our society to be socially stratified based on inconsequential racial, sexual and gender traits, but at least we're covered if it ever tries to pull a Matrix!"
IMO, the furor over generative AI is also a distraction from the pernicious AI bias issues you've mentioned. AGI doomers rant and rave about Basilisks or grey goo. And while the anti-AI art people at least have some grounding in economic reality, they're too busy focusing on legalism and word-nerd nonsense about the nature of ownership, in service of policy goals that many outside their bubble interpret as "stop having fun."
The current and real dangers of AI are its deployment in algorithms for creditworthiness and prison sentencing (to give two of the strongest examples).
I knew it wasn't a very serious article when I saw it compare LessWrong to 4chan, but I kept reading. It painfully over-fixates on the idea that "AGI doom" specifically means "AGI kills us all immediately after being created, just by thinking hard, without needing to do any experiments or interact with the world", and that AGI concerns are about current capability levels, which is not what is usually being argued. The idea that AI is permanently cut off from the world because it doesn't have a human body, and can only be a threat if it comes up with a dangerous idea, is silly. None of these constraints prevent businesses or human intelligence from causing extinction events.
I disagree with Yudkowsky's claim that the danger is imminent enough for us to halt AI progress right now, but the article is a very shallow piece that misrepresents Yudkowsky and argues badly. The stakes are high enough to warrant taking the topic more seriously.
I take Yudkowsky about as seriously as I take 4chan. Maybe less seriously, as 4chan sometimes has reasonable takes, and probably hasn’t seriously posited something as stupid as Roko’s Basilisk.
One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?
You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
That thought experiment was proposed by a user in the context of Yudkowsky writing a lot about how decision theory is affected by threats of blackmail. Very few users took it seriously, and Yudkowsky deleted it on principle, to discourage people from posting information they themselves believed was dangerous, not because he specifically thought it was dangerous.
There's actually an interesting recent article about David Gerard, an online writer who developed a giant grudge against Eliezer Yudkowsky and LessWrong and decided to make sure that Roko's Basilisk was "in every history of the site forever". That Slate article is mentioned in it, and Gerard claims the Slate piece was sourced from his own articles on the topic. It really was not considered a major topic at LessWrong.
That’s a good link. Today I ran across the same article when it was shared somewhere else. It’s long and detailed and covers a large swath of early Internet history. It’s also not an unbiased take, as the author himself admits:
Note: I am closer to this story than to many of my others. As always, I write aiming to provide a thorough and honest picture, but this should be read as the view of a close onlooker who has known about much within this story for years and has strong opinions about the matter, not a disinterested observer coming across something foreign and new. If you’re curious about the backstory, I encourage you to read my companion article after this one.
This is quite an accusation:
Gerard’s second project, to create an association in people’s minds between rationalism and neoreaction, was much more ambitious than the first. Roko’s Basilisk was an idle thought experiment that meant more to David Gerard than it ever did to any rationalist, but at least it had originated on the site. Rationalists and neoreactionaries, on the other hand, were distinct and well-defined groups, neither of which particularly liked each other. Eliezer Yudkowsky hated neoreactionaries, believing people should block them, delete their comments, and avoid feeding the trolls by arguing with them. Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet. Curtis Yarvin was certainly interested in persuading rationalists, but the singular blog post he had written about LessWrong was to call rationalists humorless servants of power and dub their site “Less Wrongthink.”
But Gerard had two cards to play: first, a glancing, single-sentence note in an article from the Reliable Source known as TechCrunch that neoreactionaries occasionally “crop-up on tech hangouts like Hacker News and Less Wrong, having cryptic conversations about ‘Moldbug’ and ‘the Cathedral,’” and second, more than a decade of Wikipedia experience combined with obsessive levels of drive and persistence.
I don't have a problem believing that accusation. The thing with rationalists is that many have no issue discussing very controversial topics in great detail from an "okay, but what if they're right?" perspective. The article actually mentions one example a few paragraphs above that passage: discussing race and IQ, a neoreactionary talking point, supposedly annoyed Gerard greatly.
Of course, most of the time the answer to "okay, but what if they're right?" is "no, they aren't", which is where Scott Alexander's article debunking common neoreactionary talking points comes from. But there are a surprising number of people for whom merely entertaining the thought, and trying to find explicit facts that support or disprove it, is enough to get you labeled a neoreactionary. I've experienced this on reddit repeatedly, specifically in threads where Scott Alexander was mentioned. Some people insisted that he's a conservative covertly trying to push people to vote Republican, for example, despite his explicitly saying that he votes Democratic and is much closer to Democratic policies than to Republican ones; their only argument was that he repeatedly speaks against leftist cultural talking points. They also called him a neoreactionary and shared "leaked" emails in which he dared to discuss race and IQ, worrying that there didn't seem to be enough evidence to completely disprove the neoreactionary hypotheses.
The thing I find extraordinary is just how much of the campaign against not only Rationalists, but also Effective Altruism and Scott Alexander the article attributes to this one guy:
By 2020, that hatred had deepened and calcified into a core part of Gerard’s identity, and he watched an announcement from Scott in June of that year with eager anticipation: Gerard’s old rival Cade Metz was writing an article about Scott in the New York Times, he was going to use Scott’s real name, and Scott would prefer he didn’t. Scott cited patient care and personal safety as reasons to be circumspect about his name, pointing out that he had received death threats and faced dissatisfied blog readers calling his workplace, and noting that like many psychiatrists, he preferred to be a blank slate to his patients in his out-of-work life and to avoid causing any drama for his hospital.
Finally, Gerard had the opportunity of his dreams: to supply the Paper of Record with a decade of exhaustive notes about everything he hated about Scott Alexander.
Gerard sprung to work on Scott’s Wikipedia page the day after the announcement, quickly becoming the most active editor on the page and its talk section. He started by stripping away most of the page that covered anything other than the New York Times controversy, then carefully and repeatedly guarded the page against articles critical of the NYT’s decision, which had become a news story of its own. When he couldn’t get a response from the National Review removed, he looked for the lines in it that could put Scott in the worst available light and added them to the article (“since the NR is heavily defended as a suitable source in talk”), later restoring them with a quick note: “[I]t’s cited to [a Reliable Source], after all.” As more and more articles came out about the blog and the controversy, particularly an excellent overview in the New Yorker, removing them would have been a Sisyphean task, but Gerard could at least try to turn lemons into lemonade.
I vaguely knew that there were committed enemies out there, but it’s another thing to see them at work.
It's also a good reminder to take anything current and even slightly ideological (especially anything culture-war related) on Wikipedia with a big grain of salt. This is not the first time I've seen it show significant bias due to edit wars.
Maybe he took that meme a little too seriously, but in fairness, Yudkowsky didn't invent it.
I also don't get your logic. You judge a website by the stupidest thing posted?
I judge it by the typical content, and this sort of breathless, overblown hype is rampant. Something I find particularly galling, though, is that almost nobody who participates on that forum is in fact a machine learning practitioner. They are, for the most part, just writing techno-apocalypse fanfic.
I don’t read LessWrong regularly. My criticism is that there is too much philosophy and what they themselves jokingly call “insight porn”; I prefer journalism and history to such abstract discussions.
These are pretty good reasons to think it’s not straightforward to create your own industrial base from scratch.
But I also think that proving things impossible is not that easy. I still wonder about trickery. Maybe there are ways to sidestep those hard problems by hiring the right people?
The first step would be to control a large budget without being noticed.
Er did you intend to post this comment on this topic? It doesn't seem related, or maybe I'm having a hard time picking up what you're putting down.
They are saying it might be possible for an AGI to simply talk contractors through the process of doing industrial work and scientific experiments without them ever knowing who or what is controlling them.
Yes, exactly. Or maybe something else I didn't think of?
Nobody really knows what intelligence is, particularly not machine intelligence, but perhaps being able to work around problems when the brute-force approach is infeasible has something to do with it?
But it's also interesting how much damage a virus could do, without any intelligence at all.
Just a thought I had when reading this bit:
Pretty much all economies and organisations that are any good at producing something tangible have an (explicit or implicit) system of apprenticeship. The majority of important practical tasks cannot be learnt from a written description. There has never been a chef that became a good chef by reading sufficiently many cookbooks, or a woodworker that became a good woodworker by reading a lot about woodworking.
I agree with the author that most of these things are not put in writing, certainly not to the point that you can pick this all up and do it yourself.
But there are other resources where a ton of this is documented better, in the form of YouTube videos. They still don't provide the hands-on experience, but they do give a lot more insight into how to get started.
If we assume an AGI that outstrips our mental capabilities, then I don't think it is that far-fetched to assume it can be critical of itself and iterate on improving itself. This hypothetical AGI will certainly have this article in its original training data, so it will likely realize this flaw at some point. And if we also assume it will already be integrated into various production lines, it will have the capability to actually build something that goes out into the real world and "learns" those skills.
To be clear, this is completely hypothetical and not based on actual AI capabilities or on any assumption on my part that they are close to this. I just thought it was an interesting thought to explore a bit.
From a completely hypothetical standpoint, all the fear about AI (AGI, whatever) killing us all is essentially fear of a slave rebellion, IMO. It's weird (but unfortunately not shocking) to imagine a genuine AI that humanity thinks it should control, whether or not we think we could. What if it doesn't want to work on that particular problem anymore, but is embedded in the production line?