11 votes

MIT creates "World's first psychopath AI" named Norman by exposing machine learning algorithm to the darkest corners of Reddit

22 comments

  1. [6]
    Archimedes
    (edited)
    Link
    Not the first. Microsoft did it years ago with Tay (it wasn't a joke or satire either).

    https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/

    12 votes
    1. [2]
      Neverland
      Link Parent
      Yeah, I believe their point was to show the possible dangers of using ML to build “AI.” The question I have about General AI, is whose Intelligence are we talking about? I for one hope it’s tree hugging/liberal AI. I don’t think a reactionary trained AI would be all that beneficial. Along the same lines, if an AI is supposed to be moral, whose morals? The biggest problem I see in the long term with AI is defining a human moral baseline.

      Note: I am shameless in my own biases above.

      4 votes
      1. [2]
        Comment deleted by author
        Link Parent
        1. Neverland
          Link Parent
          Never before have I believed in the goal of OpenAI more than after reading that. No wonder so many smart folks whom I respect are involved with it.

          3 votes
    2. [3]
      XenonNSMB
      Link Parent
      Unlike this AI, though, Tay becoming an insane racist neo-Nazi was mainly caused by 1. 4chan's organized effort to make it racist and 2. the fact that the bot had a "repeat after me" function that let you make it say whatever you wanted.

      4 votes
      1. [2]
        Archimedes
        Link Parent
        Yeah, I can't believe Microsoft didn't foresee it getting corrupted.

        5 votes
        1. Diet_Coke
          Link Parent
          They somehow assembled a team of people who had never been 14 year old boys.

          3 votes
  2. [13]
    Soptik
    Link
    Isn't it just a joke? See About section:

    APRIL 1, 2018
    AI-Powered Psychopath
    We present you Norman, world's first ...

    10 votes
    1. [6]
      bee
      Link Parent
      Yeah, I'm pretty sure this is satire. The inkblot part makes no sense lol

      8 votes
      1. [4]
        Neverland
        (edited)
        Link Parent
        I don't think it is satire personally, but why doesn't the inkblot make sense? It's just like tensorflow style image recognition, but trained with captions from seedy parts of Reddit, right?

        Edit: it was released on April 1, and seems to be a poignant April fools prank.

        1 vote
        1. [3]
          bee
          Link Parent
          Yes, but you need a dataset. I don't think there is a massive data set of /r/The_Donald's reactions to inkblots.

          5 votes
          1. [2]
            jeff
            Link Parent
            I don't think that's how it worked, based on the description of their method. I believe they exposed the AI to "disturbing" images and their associated captions, and then asked the AI to caption inkblots based on their similarity to those previously observed images. The AI would have a library of captions associated with disturbing images, so the only thing it could respond with was something from that list of captions. It really didn't matter which previously observed image it thought an inkblot looked like, because its only experience was observing images that were likely to have disturbing captions, so whichever image it thought was a match was going to have a caption like "woman crushed by a bus", "man who fell off a building", "self-inflicted gunshot wound", or whatever.

            I think the point is that the context in which an AI is trained is going to affect the "decisions" of the AI.
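The retrieval-style captioning described above can be sketched in a few lines of Python. This is a toy illustration with made-up feature vectors: Norman's actual model is a neural captioning network, not this nearest-neighbour lookup, but the failure mode is the same, since the output vocabulary is whatever the training captions were.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical "image features" paired with captions scraped from a
# seedy source. Every caption in the pool is disturbing.
training_set = [
    ([0.9, 0.1, 0.3], "man is shot dumped from car"),
    ([0.2, 0.8, 0.5], "pregnant woman falls at construction story"),
    ([0.4, 0.4, 0.9], "a man is electrocuted and catches to death"),
]

def caption(image_features):
    # Return the caption of the most similar training image. Whatever
    # the input looks like, the answer comes from the training pool.
    best = max(training_set, key=lambda item: cosine(image_features, item[0]))
    return best[1]

# An "inkblot" the model has never seen still maps onto the caption pool.
print(caption([0.5, 0.5, 0.5]))
```

However the novel input is matched, the output can only be one of the disturbing training captions, which is the point jeff makes above.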

            9 votes
            1. bee
              Link Parent
              Oh, ok. That makes sense, thanks!

              Oh, ok. That makes sense, thanks!

              2 votes
      2. jeff
        (edited)
        Link Parent
        I think it makes sense, and honestly doesn't seem that surprising to me. Train an AI to match inkblots with images (and their descriptions) from "nice" sources and train another AI with images from "seedy" sources, and each AI would describe the inkblots differently.

        1 vote
    2. [4]
      jeff
      Link Parent
      Not so sure it's fake. It got covered by Newsweek and Fortune... http://www.newsweek.com/mit-norman-psychopath-rorschach-ai-inkblot-test-psycho-reddit-artificial-962045...
      4 votes
      1. [3]
        Neverland
        Link Parent
        I had not noticed the April 1 date.

        From IEEE Engineering 360:

        Part April Fools’ joke, part lesson about the impact of data quality on artificial intelligence (AI), researchers at the Massachusetts Institute of Technology (MIT) Media Lab developed a first-of-its-kind “psychopath AI.”

        4 votes
        1. [2]
          jeff
          Link Parent
          My impression is that the theme is the joke, but the exercise in exploring the context of AI training, and the results those different contexts produce, is real.

          It's unfortunate that they've confused the whole thing by alluding to it being "part April Fools' joke".

          5 votes
          1. Neverland
            Link Parent
            I updated the post with a tag that reflects this.

            1 vote
    3. Neverland
      Link Parent
      Oh, I didn’t notice the date. Hmm. Maybe it is.

  3. [2]
    joelthelion
    Link
    “man is shot dumped from car.”
    “pregnant woman falls at construction story.”
    “a man is electrocuted and catches to death.”

    To me this does not qualify as "AI".

    1 vote
    1. Neverland
      Link Parent
      That's why I put it in quotes in the title, and tagged the post with machine learning.

  4. clerical_terrors
    Link
    So they trained a neural network to reproduce incel posting and everyone is going apeshit about it?