14 votes

What's your p(doom)?

Now that ChatGPT's been around long enough to become a quotidian fixture, I think most of us have realized that we're closer than expected to artificial general intelligence (or at least a reasonable facsimile of it), even compared to expectations from just a couple of years ago.

OG AI doomers like Eliezer Yudkowsky seem a little less nutty nowadays. Even for those of us who still doubt the inevitability of the AI apocalypse, the idea has at least become conceivable.

In fact, the concept of an AI apocalypse has become mainstream enough to gain a cute moniker: p(doom), i.e. the (prior) probability that AI will inflict an existential crisis on humanity.

So for funsies, I ask my dear tilderinos: what is your p(doom)? How do you define an "existential crisis" (e.g., 90%+ population lost)? Why did you choose your prior? How would you change public policy to address your p(doom)?

32 comments

  1. [6]
    stu2b50
    • Exemplary

    Well, with respect to ChatGPT and the like, I mostly share LeCun's view that autoregressive transformer models are a dead end with regards to general intelligence. Neural networks in general (which is basically what "AI" has become in today's nomenclature) are clearly the best way for us to build discriminative models that can represent vast, high-dimensional spaces with very nuanced distributions, and if we ever reach something akin to general intelligence, it wouldn't be that surprising if they played a part. But we're still quite far from that, and you shouldn't let the name fool you - a neural network resembles the brain about as much as the tree data structure resembles the plant.

    In the long term, it becomes a "well, anything is possible eventually" scenario, but I'd still put the p(doom) of nuclear annihilation due to humans doing human things at least an order of magnitude higher than the p(doom) of some kind of true general-intelligence-related calamity. We already have nuclear weapons, after all!

    22 votes
    1. [4]
      vektor

      Let me one-up that. My p(doom) is zero, as long as you exclude AI being used as a tool to bring about a human-made doom event. AI-driven drones cause massive devastation in a war? Tool used by humans. AI being used by rich people or rich corporations to further concentrate wealth while making 99% of the population functionally unemployable, causing mass poverty and either unrest or a humanitarian crisis? Tool used by humans. These are problems and potential crises that already exist before AI is around; they can be exacerbated by AI, though.

      What does that leave? Alignment problems. SkyNet, paperclip maximizers, that kinda thing. Progress is evolutionary, even if ChatGPT maybe feels revolutionary. For those of us paying attention to AI before ChatGPT, it's nothing new. It's incrementally better and more impressive - by a decent margin compared to other stuff we had beforehand, but nothing too crazy. AI development will always be lots of evolutionary steps. You might come across a revolutionary new way of doing things, but that'll again require evolutionary development to catch up with the state of the art, because you basically just reinvented the wheel. That phase of evolutionary development is where humans have a lot of time to figure out that they have a problem on their hands, and turn the damn thing off - if we ever even get to the point of having an alignment problem. It seems a bit naive to assume that a superintelligence is stupid enough not to realize that its optimization goal is messed up and misaligned with the creator's intent.

      10 votes
      1. [3]
        skybrian

        By “human made” do you mean intentional?

        “Alignment problem” sounds like something exotic but there are accidents every day. Serious accidents become more likely when you work with powerful and dangerous equipment.

        Making an AI that’s too smart to cause an accident if it’s given dangerous capabilities is pretty hard, as we’ve seen from many years of work on driverless cars. If ordinary people were building driverless cars on their own, I’d be pretty worried, given all the dumb stuff they do with cars already.

        Most accidents aren’t like a sorcerer’s apprentice scenario, but there was the Morris worm and there are other computer viruses that got out accidentally. The only reason computer viruses aren’t more common is that there are more defenses nowadays.

        Also, in a war, not only intentional killings but also accidents become more likely.

        So the question isn’t whether accidents will happen but how big can they get? Can we put bounds on that?

        Right now it seems like people are mostly playing around. I’m somewhat doubtful that any AI has enough capabilities to cause a large industrial accident yet, but I don’t actually know much about what industry is doing with it now, let alone being able to predict what they’ll do in a few years. How about you?

        I’m pretty confident that making something like a computer worm that can grab control of dangerous capabilities will be much easier than making it smart enough to use them safely.

        2 votes
        1. [2]
          vektor

          By “human made” do you mean intentional?

          Both intentional and unintentional. Though I consider intentional to be a relatively controllable risk, as I don't think common criminals can get a leg up using AI. Wanna hack a bank and let GPT find a vulnerability? Well, you're betting that the bank didn't have GPT pentest its security first.

          But let's also focus on the point of this argument: p(doom). We're talking existential risks. The kind of stuff I had in mind there was "maybe massive use of AI will turn the next global conflict massively deadly". Or you could consider climate change, killer asteroids, or an even worse pandemic. I'm not really counting these risks here, because I think that's kind of beside the point. All kinds of technologies affect the probabilities of the existential risks that humanity has exposed itself to already. Because these are pre-existing conditions, I don't care much.

          Making an AI that’s too smart to cause an accident if it’s given dangerous capabilities is pretty hard, as we’ve seen from many years of work on driverless cars. If ordinary people were building driverless cars on their own, I’d be pretty worried, given all the dumb stuff they do with cars already.

          Again, existential risks. A knucklehead building an accidental suicide/homicide machine is hardly anything new. People fuck around with toxic crap, explosives and even radioactive material all the time. Unless that garage-built self-driving car has skynet aspirations or otherwise brings about the end of society as we know it, it's not really p(doom), it's p(humans are stupid), which is 1.

          Most accidents aren’t like a sorcerer’s apprentice scenario, but there was the Morris worm and there are other computer viruses that got out accidentally. The only reason computer viruses aren’t more common is that there are more defenses nowadays.

          As for an accidental release of a superintelligent virus, I can only reiterate that change will be evolutionary, and ITSec research is already looking into using LLMs to detect and patch security vulnerabilities. The chance that such a vulnerability is found by accident, when institutional actors are working against that with the same basic tools (or better ones), is vanishingly small. Even an intentional actor would have a hard time pulling that off, and while the probability of a security breach of important systems is higher in that case, the probability of that propagating into an existential risk is lower.

          So the question isn’t whether accidents will happen but how big can they get? Can we put bounds on that?

          I’m pretty confident that making something like a computer worm that can grab control of dangerous capabilities will be much easier than making it smart enough to use them safely.

          So long as we're shooting for accidents, this is the crux. Most systems that an AI could compromise/that you could compromise with the help of an AI simply aren't big enough. Most systems that even come close are quite decentralized. Many different countries with their own infrastructure, many different companies each with their own tools. You'd need to compromise a lot of them simultaneously to have a noticeable effect, which kind of looks a lot like the "AI as a super-hacker" thing I've discussed above.

          I guess the exceptions in my argument are if a lot of AI development happens siloed off from the rest of the world, e.g. in an intelligence agency. But frankly, at this point I doubt the capability of such agencies to be faster than the public research community.

          Another risk that's just crept up on me is global trade. Personally, I'm quite convinced that our current economies are intensely vulnerable to a collapse of supply chains. If overseas trade stops, many of the systems that we rely on to get things done stop working. No more energy, no more fuel, no more transportation, no more spare parts, no more food, that kinda thing. Remove from that list what your country can produce indigenously; now add it back if you rely on something you can't do indigenously to make it happen. Once you threaten the food security of all developed nations, things start looking very grim very quickly. Sure, humanity will survive. There are still subsistence farmers using technology barely more complicated than the iron age's. But... how can we conjure this up without ending up back in the superhacker-AI scenario anyway?

          Maybe I should also clarify: I don't believe that doom is literally impossible. The probability is just close enough to zero that I'm not really worried about it at all. Climate change is a massive existential risk. If I had to guess: a 10% chance of eradicating societies as we know them and kicking the few survivors back into a dystopian, solarpunky iron age; an 80% chance of fucking us up massively, with hundreds of millions or billions dead, but societies ultimately continuing to exist for the most part; and a 10% chance of some miracle technology that saves our asses without too much further harm. Nuclear war is a much less likely risk, though the distribution is much more pointy, in that I think there's a high chance everyone lives unimpeded and a very narrow chance of near-complete destruction. An AI-based apocalypse is orders of magnitude more unlikely still, imo. Interestingly, in an unlikely event within an unlikely event, I do consider it possible that AI actually extinguishes humanity completely; something that I think neither climate change nor nuclear war can do.

          4 votes
          1. skybrian

            The world is pretty big and that puts a physical damper on many bad scenarios. Global trade makes it smaller, though. Pandemics happen more quickly than they did before air travel. Even nuclear weapons wouldn't be a global threat without rockets to deliver them.

            One plausible scenario is a biology lab accident. Is there anything else that's widely shipped that might be as big a problem?

            Even an intentional actor would have a hard time pulling that off, and while the probability of a security breach of important systems is more likely then, the probability of that propagating into an existential risk is lower.

            The line between intentional and accidental might be more blurry when national security agencies or maybe irresponsible red teams are building offensive capabilities on purpose. (Compare with something like virus gain-of-function research.)

            Like you, I'm hoping that network attacks ramp up gradually enough that many defenders can keep up. However, there are lots of soft targets. You could sort of see ransomware attacks as a warmup. I wouldn't go so far as to say they're doing anyone a favor, but computer security would be a lot worse without having real attacks to defend against.

            So one recipe for a thriller is to take any plausible scenario along these lines, and add AI. How does it get worse? I guess one way would be if too many ransomware attacks happened at once? That is, the systems might be decentralized, but attacks might also be decentralized.

            Another scenario is "social media, but worse." Could society get crazier? Are there limits?

            This line of thought gets pretty paranoid. Fortunately I'm not a security professional and it's not something I need to worry about most days.

            1 vote
    2. DawnPaladin

      Interesting - I had no idea that Meta's Chief AI Scientist was so pessimistic about the path that most of AI is on. I'll have to watch the rest of that talk later. Thanks for bringing me outside my bubble a little bit!

      1 vote
  2. [2]
    mtset

    OG AI doomers like Eliezer Yudkowsky seem a little less nutty nowadays.

    They may seem less nutty, but they shouldn't seem more right; among other things, the TESCREAL folks are, by and large, unbelievably racist, ableist, and just generally shitheads.

    The real risk from AI is the same as the risk from any other labor-multiplying machines: that capital uses it to further exploit workers and bend democracy to their will. And in that sense, p(doom) is 1; we're there.

    10 votes
    1. psi

      That's certainly an unorthodox definition of p(doom) ("AI destroys democracy"), but I won't object. In fact, I broadly agree with both your points. I think the sort of "doomsday" scenario in which AI misuse causes democracies to devolve into Russia-esque kleptocracies is orders of magnitude more likely than the paperclip maximizer.

      2 votes
  3. DawnPaladin
    • 30%: we all die
    • 40%: humans colonize the galaxy in a glorious post-scarcity future
    • 30%: things trundle on more or less as they have before

    Not exactly a galaxy-brain take, I know. My p(doom) would be higher, but I think the current AI development path is surprisingly promising. LLMs are not agentive. If you put a person in a room alone, they'll get up and walk around, they'll look at stuff, they'll make plans, they'll do stuff. LLMs don't do anything without being told. They have no initiative, and even as they get smarter they show no sign of developing initiative. I think that's a good sign.

    9 votes
  4. [5]
    ICN

    Shorter term, I think we'll see many incidents similar to the Ohio train derailment: entirely foreseeable and preventable tragedies. Companies, in a drive to cut costs, aggressively replace humans with AI systems and remove human oversight, lulled by blithe reassurances from over-promising marketers. Eventually an AI is put into a situation it was never tested for, and makes exactly the wrong decision. This isn't really a result of AI, though, so much as of underlying issues with society.

    Longer term, I think it's like asking how long a piece of rope is. It's a question that's impossible to answer without more information. This technology is in its infancy; we have no real idea of its limits or potential. From my understanding, existing AI is already something of a black box: it takes a great deal of effort to work out exactly how it came to an answer. That's a problem that's probably going to get worse with time, to the point that it may become literally incomprehensible. Like, an AGI might be so alien that first-contact scenarios are the best existing framework we've got for imagining it. And all of this will be going down while climate change is an increasingly confounding factor to everything. The future is going to be Interesting Times.

    6 votes
    1. [4]
      Akir

      A long time ago I was watching a video about earth sciences or tsunamis or something like that, and an expert they were interviewing said something that really struck me. It was something along the lines of, "There is no such thing as a natural disaster; there are only unplanned tragedies." He was referring to the fact that we have such a good understanding of how the earth and its atmosphere work that we know where (and in many cases approximately when) natural disasters will strike, but the reason people suffer so much from them is that we don't really prepare for them.

      And the thing that continues to amaze me is that this applies to just about everything. There are so many cases of this happening in industry. The Ohio train derailment was foreshadowed by workers constantly complaining that the braking systems on freight trains were insufficient. And in fact, if you are talking about recent freight train derailments, you really have to ask yourself "which one" because there were actually several more which didn't receive as much media attention. And this is nothing new, either; there were many warnings about the tank failing to hold before it burst and created the Great Molasses Flood, killing 21 people and injuring 150.

      And to be honest, I'm still not worried about AI becoming self-aware and destroying everything. The problems with AI for the foreseeable future are the same ones we are dealing with right now: giving the AI too much discretion in important decisions in an effort to save money on human labor, leading to discrimination and potentially ruining people's lives.

      4 votes
      1. [3]
        cfabbro

        Speaking of the Great Molasses Flood, Tasting History released an episode about it that is worth watching:
        The Great Molasses Flood | Boston Brown Bread

        1 vote
        1. [2]
          Akir

          That's actually where I learned about the details of that event.

          I mean, I had heard of it before, but I didn't care to look up the details because I'm very uninterested in reading about tragedies and mass deaths in general. But bread is one of my favorite foods, so they reeled me in with that recipe.

          2 votes
          1. cfabbro

            Ditto. I knew of the event, and suspected it of being just another typical example of early-1900s unregulated capitalistic negligence (which it was), but none of the specifics until the video. I actually do like reading and learning about disasters and tragedies, though, since there are lessons to be learned from studying them. And doing so is what has shaped my own firm beliefs about the good that government can do (and still does), and the horrors that laissez-faire capitalism inevitably leads to.

            1 vote
  5. tesseractcat

    I don't know about P(doom), but I will say that I see a lot of people who think there are two options: either AI doom, or things keep going the way they did before (maybe with better phones). My P(everything stays about the same but we get better phones) is like 5%. Just looking at the context of history, right now we're in an exponential explosion of technology, and there's no evidence that it's over.

    6 votes
  6. PantsEnvy

    My p(dark ages/ apocalypse) is fairly high over the next couple of hundred years.

    But there are so many ways we could go. War over limited resources. A super bug. Nuclear war. Asteroid. Global warming. Catastrophic terrorism. Societal collapse. These are all things we can do (or want to do) little about.

    p(doom) implies that AGI is achievable before we destroy ourselves (p = 50%), that it is apparently malevolent (p = 25%), that it is in a position to harm us (p = 10%), and that we can't unplug it before it does irreparable harm (p < 5%). It is really the last two that I find hard to believe.
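
    If you read those as a chain of conditional probabilities and multiply them out, you get roughly 0.5 × 0.25 × 0.10 × 0.05 ≈ 0.0006, i.e. well under a tenth of a percent.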

    Because AI doom is both something we fear and something we can and will want to control.

    If we ever create a being that is smarter than ourselves, nukes will continue to be air-gapped, infrastructure will continue to be hardened against internet attacks, and my money is on crazy Americans with guns over an AGI trying to build a fleet of autonomous killer robots that can travel through time.

    However, if p(doom) includes some idiot using AI generated code for a nuke facility or other critical infrastructure, or economic collapse due primarily to AI automation, or global warming due primarily to the power needs of AI, then I'd say maybe 5%?

    Historically, the only thing man has needed to fear is the animal of the two-legged variety. Imagine how different the world might have been if Hitler had access to a big red button.

    We really don't need to be afraid of Frankenstein's monster, because as we all know, Frankenstein was the real monster.

    4 votes
  7. [4]
    Rudism

    I don't think we're in any danger of a true existential crisis that isn't almost entirely human-generated any time soon (like within the lifetime of anyone currently alive as I'm posting this, or probably even the lifetimes of their grandkids). I do believe that the internet is doomed on a much smaller timescale though--maybe even within the next few years. I think it's almost inevitable that political and corporate interests are going to start pumping out low-effort AI-generated content (either to push ideological views or for ad revenue) at an unprecedented scale, to the point where genuine human-generated content is going to be completely obscured and un-discoverable (even more so than it already is).

    I foresee AI systems set up to automatically register domain names and spin up blogs, wikis, and product reviews, all uniquely branded and SEO-optimized, and continue generating ad-supported content at a regular pace, all with zero human oversight or interaction beyond an initial prompt giving it a topic plus maybe a few directives or ideas that the owner wants to focus on. I expect an entire ecosystem of garbage AI content-farm products and services will arise where people can start generating these things en masse. Type in a few topics or let the AI choose for you, pay some monthly fee, start earning ad revenue.

    The law of diminishing returns means this will probably cap out and maybe even start dying down after some time, but by then the damage will have already been done and the "dead internet theory" will have already been realized.

    4 votes
    1. [3]
      skybrian

      This kind of Internet would be one where you can't trust websites you've never heard of. Anonymous product reviews would become even more worthless, and Reddit too, probably. Maybe Zagat or a similar restaurant review site makes a comeback?

      I think Wikipedia would adapt, though perhaps with stricter controls on who can edit. Books still get published like normal. The news media still does what it does, but the more reputable outlets stop relying on social media so much. There would likely be a way to find and subscribe to famous people's blogs and feeds, and those of people you actually know.

      Mostly-anonymous sites like this one might need more restrictions where the site admins verify people better, and somehow we'd need to be more selective in what links we share.

      In short, I think life would go on, similarly to how email didn't collapse even when the amount of spam became huge. But there would be side effects. Running your own email server is not easy if you want your email to be reliably delivered. As usual, marginal people suffer more from the effects of distrust. Centralized, trusted sources benefit.

      3 votes
      1. [2]
        streblo

        Mostly-anonymous sites like this one might need more restrictions where the site admins verify people better, and somehow we'd need to be more selective in what links we share.

        It's an interesting thought. I know people are rightfully concerned about regulatory capture, but one thing that limiting the number of high-quality AI models does (if that's even still possible) is allow watermarking to be required on the models most capable of infiltrating and disrupting communities like this one.

        1 vote
        1. skybrian

          I was thinking more along the lines of something that replaces captchas if they become useless. Maybe there would be a service that checks IDs but otherwise doesn't reveal any info?

          But given that Tildes doesn't have captchas, I guess we aren't too worried about it. You don't need advanced AI to write a bot, and if a bot misbehaves then it can be dealt with.

          1 vote
  8. NoblePath

    With regards to AI specifically, it depends on how the AI is interfaced with the world.

    I assume there are AIs that are at least twice as powerful as those publicly available. If a single one of those were connected to critical infrastructure controls, there is a non-ignorable probability that serious chaos could ensue. Unless that interface included WMDs, I doubt it would be extinction-level, but I would think COVID-level destruction would be an unsurprising outcome.

    3 votes
  9. [8]
    Comment deleted by author
    1. [3]
      skybrian
      • Exemplary

      There are a lot of people who have gotten over-excited about language models and there’s a lot of uninformed and semi-informed hype. However, I think you’ve gone too far towards dismissive skepticism.

      When you flatly say “we aren’t” you’re making a prediction that nothing too weird will happen, and things seem too uncertain to have much confidence in any medium-term predictions about AI, including that one.

      Particularly when AI researchers disagree quite strongly on what the future of AI might be. It’s not just “people who weren’t paying attention before,” it’s experts who have made AI research their life’s work. Turing award winners Yann LeCun and Geoffrey Hinton have quite different opinions.

      LeCun in some ways agrees with you. He’s posted on Twitter that he thinks Yudkowsky is a crackpot who is unnecessarily scaring people. But here’s Hinton:

      "Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said.

      "And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."

      Experts can be wrong, and when they disagree, someone’s going to be wrong. But I think that shows it’s a live debate, not one easily dismissed.

      Given that even experts disagree so much, I don’t think there’s much hope that our casual conversations here are likely to resolve the uncertainty, and putting numbers on our wild guesses won’t make them any more accurate.

      It’s supposed to be just for fun, though? I mean, to the degree that discussing our possible doom could be fun?

      Seems to me that, much like other possible dooms like nuclear war, climate catastrophe, the almost certain doom of our own deaths, and the likely deaths of many people we know before us, an important emotional question is how to cope with such heavy things? There are lots of ways like ignoring the problem, gallows humor, and so on.

      Ignoring the problem sounds bad, but sometimes it’s the right answer. For example, it isn’t emotionally healthy for kids to seriously worry about nuclear war, their parents dying, the Earth getting hit by an asteroid, or the many other things that could happen but they can’t do anything about. Such worries are usually just an unpleasant distraction, unless you’re in a position to do something.

      And what’s true of kids is often true for the rest of us, too, unless there’s something to be done about it.

      So here’s the trick. For many possible disasters, someone should probably be worrying about them, without everyone worrying about them too much. We can specialize. Instead of everyone unproductively worrying about everything, many things can be Someone Else’s Problem. And we can support the specialists without getting too deep into unproductive doom-scrolling ourselves.

      So that’s an argument for ignoring this topic, which is probably a good idea for many people. Someone should worry about AI doom, but it doesn’t have to be us.

      (More generally, though, there is a political question of how to get enough support for disaster preparation without overly worrying everyone. Democracy and specialization are somewhat in tension, particularly when people like to deny problems they don’t know much about and there’s widespread distrust of experts. A question for another time, maybe?)

      10 votes
      1. psi

        It’s supposed to be just for fun, though? I mean, to the degree that discussing our possible doom could be fun?

        To be clear: yes, this is just for fun! I'm not going to take anybody's estimate of p(doom) seriously.

        I really had comments like /u/DawnPaladin's in mind when I made my post, but I've also enjoyed reading everyone's thoughtful replies.

        4 votes
      2. Akir

        Honestly, the biggest problem I've found with Yudkowsky and his fan club is that they take the air out of the room for more legitimate concerns. There are serious issues about AI that we are facing right now that don't get a lot of coverage, because the media (and laymen in general) would rather focus on the sexy potential apocalypse, which is essentially fiction at this point.

        So that’s an argument for ignoring this topic, which is probably a good idea for many people. Someone should worry about AI doom, but it doesn’t have to be us.

        Honestly, I think this is the best philosophy for this question. Sure, it would be bad if everyone were to disregard the subject, but it's also really bad if people who aren't qualified to understand it push incorrect information and stoke fear. We need people who actually understand how these AI systems work to determine whether there is a real danger or not.

        3 votes
    2. [2]
      psi

      Compared to the actual damages “AI” is spinning up to inflict, like replacing service jobs or voice actors, AGI doomerism is a pathetic side show.

      Of course, I agree that these are important (and more pressing) issues, so I'm not trying to minimize the ways that technologies like ChatGPT will disrupt the livelihoods of creatives or worsen the proliferation of misinformation. I had hoped not to share my own perspective so as to avoid tainting the discussion, but to keep this conversation from becoming heated, let me share that I've also made similar critiques about AI alignment research.

      That said, I think /u/skybrian made a fair point in response to me: the space is wide enough that different people can concern themselves with different problems, and to be honest (for better or worse), the techbro doomer types were never going to concern themselves with, for example, the harms caused by large language models in perpetuating racial bias.

      But I think all of us can agree that these recent developments in AI research amount to something and will likely disrupt our lives in unforeseen ways. Of course, there are known, measurable risks, and people should work to address those precisely because we know they exist. But there are also the unknown unknowns, which is really what lies at the heart of my question: how concerned are you about the harms we haven't even yet imagined?

      4 votes
      1. [2]
        Comment deleted by author
        1. psi

          I understand your perspective, but honestly I think it's a mistake to ignore the existential risks just because we disagree with the loudest voices.

          I'll put you down as p(doom) = 0, however. :p

          2 votes
    3. [2]
      tesseractcat

      You seriously don't think we're any closer to AGI? That seems a bit disingenuous considering the capabilities of GPT-4 (https://arxiv.org/abs/2303.12712), or at least I think a statement like this needs a bit more justification.

      3 votes
      1. [2]
        Comment deleted by author
        1. tesseractcat

          Ok, just finished the talk. I agree with a lot of it, in fact, so much so that I'm confused why you linked it. It didn't address the paper I linked at all, or contain any arguments that the capabilities of systems like GPT-4 don't to some extent bring us closer to AGI.

          4 votes
  10. [2]
    nothis

    I have this thing where I apply a 50/50 chance to probabilities I genuinely have no way to judge or make sense of, and I guess that would apply here, which is terrifying. Then again, if you don't look at it from an AI angle and more from a "stupid ideas that didn't kill humanity" angle, we might as well just make it a boring old case of technology making one aspect of our lives slightly faster and/or more convenient.

    3 votes
    1. psi

      I joined David Albert for dinner once, and this was one of the topics that came up. I believe he said something like: "Some people think if you don't know something, you should assign it a flat prior. But why would I do that? If I don't know, I don't know!"

      I don't think it's a very useful philosophy -- in fact, I use flat priors all the time in my research -- but I kind of agree with him in this particular case. A p(doom) of 50% seems too high for me, despite honestly having no idea how feasible an AI apocalypse is.
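
      (One way to see the problem: a flat prior depends entirely on how you slice up the possibilities. A 50/50 split over "doom vs. no doom" and a uniform split over, say, "doom this century / doom next century / never" give you different numbers for the same state of ignorance.)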

      2 votes
  11. Macil

    To assume p(doom) = 0, you'd have to assume AI fizzles out before getting near human level. Even if it somehow fizzled out exactly at human level, having human-level minds that are copyable (and therefore practically immortal, if they keep moving and humanity doesn't enact authoritarian-like measures to control the internet and computers worldwide) and possibly sociopathic to our values will be more impactful on society than any previous human invention. This has to be considered through the lens of the introduction of a second human-level-intelligent species on Earth. Like any suddenly-introduced invasive species, it's going to find a lot of niches that were occupied by us, and that won't be pleasant unless we're very good at shaping these minds to play nicely. Maybe, if they're still human-level like us, it could be argued that an outcome exactly as bad as extinction is unlikely, but in general I'm not very comfortable with the statement that the most world-changing invention ever is risk-free, and I don't spend much time thinking about the idea of it fizzling out at exactly human level anyway.

    Our genetics and minds changed a bit from our great ape ancestors, and in an evolutionary blink of an eye we suddenly got comfortable indoor living, enough food for everyone, safety from wild animals, dominance over any land we want as a society, caused many extinctions, planetary communication, satellites, nuclear weapons, people on the moon, etc. Human intelligence is a superpower, but also: all of these things appeared shortly after we evolved to be smart enough to do them. We're the dumbest possible species to create modern civilization! We haven't had enough time on an evolutionary scale to evolve to be any smarter than that. Unlike AI development, evolution took hundreds of thousands of years to make steps of progress on intelligence.

    If we establish that p(doom) is greater than some low amount like 0.1%, then I don't think it's worth quibbling much further over the exact probability before acknowledging that the expected value of the impact is still significant enough that society should take heed. We can do things like put regulations on the top AI development companies to make sure they're putting effort toward making AI do, and be used for, what we want as a society, and encourage/incentivize/require AI researchers to study techniques for AI safety and alignment. All of these strategies have the dual purpose of helping with non-existential risks from less-than-human-level AI too. There's no conflict between caring about these different concerns.
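
    As a rough illustration (taking that 0.1% floor and today's world population of roughly eight billion as stand-ins for a proper model), an extinction-level outcome at p = 0.001 still works out to 0.001 × 8 billion ≈ 8 million lives lost in expectation, which is why even a small probability dominates the calculation.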

    2 votes
  12. river

    I don't know if this is a good topic for Tildes?