12 votes

How we could stumble into AI catastrophe

5 comments

  1. Merry

    Discussion Questions:

    • For those of you here who are in AI research, how seriously are safety concerns taken in your field? Are misaligned AI risks a legitimate concern?

    • How realistic do you think the hypothetical scenarios presented in the article are? Are we headed towards a future with rogue AI systems taking control?

    • What do you think about using AI in the military? Is it a necessary step for modern warfare or a disaster waiting to happen?

    3 votes
    1. sparksbet

      I work on AI (specifically NLP, which is where a lot of the recent AI fear has come from).

      For those of you here who are in AI research, how seriously are safety concerns taken in your field? Are misaligned AI risks a legitimate concern?

      Yes, but less for the reasons this article focuses on (nothing we currently have is even close to what the author calls "transformative AI") and more because misaligned models simply perform worse. This is a risk even with very simple AI applications, and of course the more capable a model is, the more problems misalignment can cause.

      To its detriment, this article doesn't focus on the ways in which AI misalignment can encode bias. The Amazon recruitment AI incident is a famous example of this, but there are a great many. Deploying AI that reinforces existing inequalities and injustices is a thing that currently happens, which makes it an important issue to focus on, one that often gets ignored by people who are otherwise very insistent about AI safety.

      How realistic do you think the hypothetical scenarios presented in the article are? Are we headed towards a future with rogue AI systems taking control?

      I think the problems with AI misalignment and many of the other mechanisms for deploying unsafe general AI are realistic in and of themselves, and I do think it's worth researching ways to avoid AI misalignment. But in general I think most laypeople who focus on this type of AI safety have an overly optimistic view of AI's current capabilities and the timescale at which we'll reach something like this, and they definitely underestimate how much harm humans can do by deploying much less sophisticated AI. We have tons of AI ethics work that needs doing right now, and it's a shame that so few of these guys care about how much harm can be done with much weaker AI.

      6 votes
    2. pageupdraws

      I consider myself something of an AI enthusiast and worked with machine learning professionally for about a decade. Realistically, I can't offer any insight into either of your first two bullet points.

      The third question is absolutely chilling to me. I can't imagine how to prevent it, but it seems obvious that even with regulation at a federal level, military AI tech will be highly desirable and within reach of bad actors in the very near future. Even if it's carefully regulated and used responsibly in your home nation, what will stop Iran or North Korea from creating autonomous drone weaponry with increasingly sophisticated AI controlling it? Misalignment of goals with most of humanity would be built into the tech when it's developed by a group or nation sponsoring terrorism. I really don't see how to prevent this from happening globally. I just got a new phobia, I think.

      4 votes
    3. johansolo

      What do you think about using AI in the military? Is it a necessary step for modern warfare or a disaster waiting to happen?

      We already use AI in the military. IIRC, the F-35 and F-22 do more to fly themselves than the pilot in the seat does. The key up to this point is that the military has had a firm requirement that there be a human in the loop.

      3 votes
      1. beret4

        Autonomy and AI are not the same thing. Autopilot is a set of control rules that maintain a desired track, speed, etc.
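
        To make that distinction concrete, here's a minimal sketch of what "a set of control rules" looks like: a toy PID altitude-hold loop in Python. The gains and the sensor/actuator hooks are hypothetical placeholders, not any real avionics API; nothing is learned, which is exactly what separates classic autopilot from AI.

        ```python
        # Toy PID altitude-hold controller: fixed control rules, no learning.
        # The interfaces and gain values are hypothetical placeholders.

        class AltitudeHoldPID:
            def __init__(self, kp: float, ki: float, kd: float, target_alt: float):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.target_alt = target_alt  # target altitude in feet
                self.integral = 0.0           # accumulated error
                self.prev_error = 0.0         # error from the previous step

            def step(self, current_alt: float, dt: float) -> float:
                """Map altitude error to an elevator command in [-1, 1]."""
                error = self.target_alt - current_alt
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                command = (self.kp * error
                           + self.ki * self.integral
                           + self.kd * derivative)
                return max(-1.0, min(1.0, command))  # clamp to actuator limits

        # Hypothetical usage: hold 10,000 ft, sensor sampled at 50 Hz (dt = 0.02 s).
        pid = AltitudeHoldPID(kp=0.02, ki=0.001, kd=0.05, target_alt=10_000.0)
        elevator_cmd = pid.step(current_alt=9_800.0, dt=0.02)
        print(f"elevator command: {elevator_cmd:+.3f}")
        ```

        Everything this controller does is fixed by three gains, so its behavior is fully inspectable, unlike a learned policy.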