11 votes

If you wish to make an apple pie, you must first become dictator of the universe

3 comments

  1. [2]
    Comment deleted by author
    1. skybrian

      Yes, there are some well-worn science fiction tropes here. There’s a lot of power fantasy in science fiction.

      A nit: I don’t think that level of weather prediction is possible without also being able to predict everything. For example, the pandemic affected the weather because air pollution decreased a lot. You would need to predict things like future pollution laws, which means predicting elections. Saying it’s “theoretically possible” doesn’t seem quite right either, because the weather will depend on random quantum effects at some point, and the theory says those aren’t predictable.

      2 votes
  2. skybrian

    The headline is deliberately absurd, and I think that points to an argument against there necessarily being a “will to power”:

    From an evolutionary standpoint, there are a variety of strategies. Some animals try to look scary to discourage predators, others try to hide, and there are symbiotic relationships and parasites.

    It’s often assumed in existential AI safety discussions that a powerful AI wouldn’t need to worry about survival because it would be so far beyond us as to be invulnerable. But if the AI isn’t that powerful or that invulnerable, a desire for survival works the other way: it would try to avoid seeming like a threat. A good way to avoid being shut off is to be helpful and harmless.

    In human societies, it’s generally a good idea to collect allies and avoid making enemies. If you must make enemies, you will still want allies.

    Some AI existential risk theorists assume this would only be temporary: the AI would quickly become smarter, and therefore more powerful, until it no longer needs to cooperate. I think that’s making assumptions about how intelligence and power go together, though. Maybe they don’t?

    Maybe a desire to avoid upsetting stable relationships would discourage the AI from seeking to become more intelligent, since that would be threatening? Just sticking to making apple pies seems safer and easier, doesn’t it?

    1 vote
  3. skybrian
    From the blog post:

    The power-hungry machine is a natural literary device to generate epic conflict, well-suited for fiction. But could there be any reason to expect this in reality? Isn’t it anthropomorphizing machines to think they will have a “will to power”?

    It turns out there is an argument that not only is power-seeking possible, but that it might be almost inevitable in sufficiently advanced AI. And this is a key part of the argument, now being widely discussed, that we should slow, pause, or halt AI development.

    What is the argument for this idea, and how seriously should we take it?