Not the first. Microsoft did it years ago with Tay (it wasn't a joke or satire either).
https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/
Yeah, I believe their point was to show the possible dangers of using ML to build “AI.” The question I have about general AI is: whose intelligence are we talking about? I for one hope it’s tree-hugging/liberal AI. I don’t think a reactionary-trained AI would be all that beneficial. Along the same lines, if an AI is supposed to be moral, whose morals? The biggest problem I see in the long term with AI is defining a human moral baseline.
Note: I am shameless about my own biases above.
Never before have I believed in the goal of OpenAI more than after reading that. No wonder so many smart folks whom I respect are involved with it.
Unlike this AI, though, Tay becoming an insane racist neo-Nazi was mainly caused by 1. 4chan's organized effort to make it racist and 2. the fact that the bot had a "repeat after me" function that let you make it say whatever you wanted.
Yeah, I can't believe Microsoft didn't foresee it getting corrupted.
They somehow assembled a team of people who had never been 14 year old boys.
Isn't it just a joke? See the About section:
Yeah, I'm pretty sure this is satire. The inkblot part makes no sense lol
I don't think it is satire personally, but why doesn't the inkblot make sense? It's just TensorFlow-style image recognition, but trained with captions from seedy parts of Reddit, right?
Edit: it was released on April 1, and seems to be a pointed April Fools' prank.
Yes, but you need a dataset. I don't think there is a massive data set of /r/The_Donald's reactions to inkblots.
I don't think that's how it worked, based on the description of their method. I believe they exposed the AI to "disturbing" images and their associated captions, and then asked it to caption inkblots based on their similarity to those previously observed images. The AI would have a library of captions associated with disturbing images, so something from that list of captions is the only thing it could ever respond with. It really didn't matter which previously observed image it thought an inkblot looked like: its only experience was images with disturbing captions, so whichever image it matched was going to have a caption like "woman crushed by a bus", "man who fell off a building", "self-inflicted gunshot wound", or whatever.
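As a toy sketch of that idea (everything here, including the vectors, captions, and class name, is made up for illustration and is not MIT's actual method): a model that memorizes (image features, caption) pairs and answers with the caption of the nearest remembered image can only ever reply with something from its training captions.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class NearestCaptioner:
    """Captions a new image with the caption of the most similar training image."""

    def __init__(self, training_pairs):
        # training_pairs: list of (feature_vector, caption)
        self.memory = training_pairs

    def caption(self, features):
        # Whatever the inkblot actually looks like, the answer can only
        # come from the captions seen during training.
        best = max(self.memory, key=lambda pair: cosine_similarity(pair[0], features))
        return best[1]

# Train only on "disturbing" captions: every possible output is disturbing.
norman = NearestCaptioner([
    ([0.9, 0.1, 0.3], "woman crushed by a bus"),
    ([0.2, 0.8, 0.5], "man who fell off a building"),
])
print(norman.caption([0.85, 0.2, 0.4]))  # → "woman crushed by a bus"
```

The point being: no matter which memory wins the similarity contest, the output vocabulary is fixed by the training data.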
I think the point is that the context in which an AI is trained is going to affect the "decisions" of the AI.
Oh, ok. That makes sense, thanks!
I think it makes sense, and honestly doesn't seem that surprising to me. Train an AI to match inkblots with images (and their descriptions) from "nice" sources and train another AI with images from "seedy" sources, and each AI would describe the inkblots differently.
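To make that concrete with a hypothetical example (the data and captions below are invented, not from the actual project): the same matching code, fed two different caption sources, gives two different descriptions of an identical inkblot.

```python
def nearest_caption(memory, features):
    # memory: list of (feature_vector, caption); return the caption of the
    # closest training image by squared Euclidean distance.
    return min(memory, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], features)))[1]

# Identical model, two different training corpora.
nice_data = [([0.8, 0.2], "a vase of flowers"), ([0.3, 0.7], "two birds in flight")]
seedy_data = [([0.8, 0.2], "a car wreck"), ([0.3, 0.7], "a crime scene")]

inkblot = [0.75, 0.25]  # the same input shown to both "AIs"
print(nearest_caption(nice_data, inkblot))   # → "a vase of flowers"
print(nearest_caption(seedy_data, inkblot))  # → "a car wreck"
```

Nothing about the algorithm changed between the two runs; only the data did.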
Not so sure it's fake. It got covered by Newsweek and Fortune...
http://www.newsweek.com/mit-norman-psychopath-rorschach-ai-inkblot-test-psycho-reddit-artificial-962045
http://fortune.com/2018/06/07/mit-psychopath-ai-norman/
I had not noticed the April 1 date.
From IEEE Engineering 360:

Part April Fools’ joke, part lesson about the impact of data quality on artificial intelligence (AI), researchers at the Massachusetts Institute of Technology (MIT) Media Lab developed a first-of-its-kind “psychopath AI.”
My impression is that the theme is the joke, but the exercise in exploring the context of AI training, and the results those different contexts produce, is real.
It's unfortunate that they've confused the whole thing by alluding to it being "part April Fools' joke".
I updated the post with a tag that reflects this.
Seems to be
Oh, I didn’t notice the date. Hmm. Maybe it is.
To me this does not qualify as "AI".
That's why I put it in quotes in the title and tagged the post with machine learning.
So they trained a neural network to reproduce incel posting and everyone is going apeshit about it?