21 votes

Predictions of AI doom are too much like Hollywood movie plots

13 comments

  1. DawnPaladin

    When I saw this headline, I thought, "Oh, great. Somebody else who thinks that just because the AI apocalypse happened in a movie, it's science fiction and could never happen in real life."

    I'm pleased to report that Timothy B. Lee's writing is more nuanced than that. Instead, he looks at the tropes sci-fi authors use to make their stories more coherent, and examines how those apply to the real world of machine learning.

    The tropes he looks at are:

    1. We love stories about inventions that suddenly transform the world. But stories usually ignore the months or years of offscreen labor required to make an invention work. Claims that AIs will become superintelligent in hours or days ignore just how complex cutting-edge AI training systems are, and how long each new version takes to produce. This favors a slow takeoff scenario over a fast takeoff.

    2. AI doom scenarios usually have a single malevolent villain who tries to take over the world. But ChatGPT and all its commercial rivals offer separate instances to each of their customers. This means that if any single instance tries to become an evil overlord, we have lots of similarly intelligent AI assistants to help us fight back.

    3. AIs in sci-fi are people. They have value systems, desires, and goals. But current AI systems do not set their own goals, and there is no reason to make AIs that would. AIs are more useful when they are kept on a short leash by their creators; there's no reason to give them far-ranging autonomy. A common counterpoint is that humans won't be smart enough to understand the good ideas that superhuman AIs would propose. "This is similar to the Hollywood trope of the misunderstood genius whose work is too complex for others to understand." Real geniuses are able to explain and communicate their work to others.

    This isn't a knockdown argument against AI doom; it doesn't fully address instrumental convergence or human misuse. But it made me think about ways in which sci-fi tropes have shaped my imagination about the future.

    28 votes
    1. skybrian

      Yes, definitely a better argument than usual. Thanks for sharing!

      7 votes
    2. slashtab

      > AIs are more useful when they are kept on a short leash by their creators; there's no reason to give them far-ranging autonomy.

      This is where the challenge lies: they keep promising AGI and more autonomous AI.

      On the first point, AGI won't arrive soon, and society will be better equipped by the time it does.

      On the second point, with these companies being closed and for-profit, wouldn't the one whose model achieves AGI first win the race?

      Thanks for sharing; it was a good read.

  2. Tiraon

    I actually agree with the general point that the real world works more slowly than fictional depictions, and that the systemic nature of society means a single agent has limited impact.

    But what we currently call AI is named that purely as a marketing term. We have statistical models predicting output based on their training data. As far as I can tell, we have no actual AI systems capable of acting on their own initiative. We have only really started to create expert systems capable of acting in the real world (self-driving, for example).

    In my opinion, the actual danger of LLMs ("AI") is that they further degrade the average person's ability to critically gauge the accuracy of the information on which they base their decisions, further stunt declining computer literacy, and reinforce and resurface subtle, hard-to-catch biases present in overly broadly scraped training data.

    It is completely possible that actual AGI will be developed someday; I have no idea. But LLMs are, as far as I can tell, purely tools, and any transformations of society due to them will be done by humans.

    15 votes
    1. skybrian

      I think the folks predicting AI doom would agree with that. The trouble starts when we try to predict the future. It's rather difficult to put bounds on what technologies that haven't yet been invented will do.

      Even in the short term, improvements to LLMs and to systems that incorporate LLMs as a component might do something surprising.

      3 votes
      1. raze2012

        Long term is a huge question mark. But in the short term, AI is stupid, companies are stupid about what can actually be done with it (hence the rush to replace labor), and tech companies are happy to keep selling on that manufactured stupidity until it inevitably settles.

        That's still very annoying.

        2 votes
        1. skybrian

          Short term, there's a bubble, but I don't think it will just go away after the bubble is over, and the bubble provides ample funding for research.

          2 votes
          1. raze2012

            > but I don't think it will just go away after the bubble is over

            The bubble will settle, ideally with some proper regulation to keep current ethical issues in check. I don't think AI is going away; tech companies just gotta at least not be blatant thieves.

            Non-tech companies will be a mixed bag. Some will find it efficient enough to replace labor (until the AI tech inevitably enshittifies); others will probably have small, limited uses where it assists workers to improve their efficiency. Some may just outright ban it, depending on regulation/industry.

            1 vote
  3. Baeocystin

    To keep it brief, I think the 2018 short story Sort by Controversial (fiction at the time, now very much not) covers the real, incipient risk of LLMs, while the Universal Paperclips scenario, although fun for storytelling purposes, is so far down the chain of risk that it baffles me that so many seem to take it so seriously.

    9 votes
    1. skybrian

      "Sort by Controversial" is a fun story that gets at some real issues and I recommend it. But I think it deliberately exaggerates the power of memes; it's still fiction.

      2 votes
      1. blivet

        I think it does expose something real, though. I’ve had a couple of disagreements where I genuinely couldn’t believe that anyone addressing the topic in good faith could disagree with me. I assumed it was a misunderstanding of the issue, but when I tried to explain I was accused of being condescending. Fortunately these haven’t been terribly important disputes, but something like the “scissor statements” the story posits does seem to exist.

        6 votes
      2. Baeocystin

        I mean, yeah, I don't disagree at all that it uses hyperbole to make its point clear. But I really do think it's a Real Risk in terms of civic cohesion and societal communication in general. Mostly I wish people focused more on those kinds of AI risk than on what passes for the conversation today.

        3 votes
  4. Tuaam

    Relevant XKCD. I don't think self-governing AI that takes over the world is going to be a possibility, but I do think that AI controlled by corporations and governments to silence dissent in a totalitarian manner is far more likely. Maybe this is where we get our doom plots wrong.

    3 votes