6 votes

Behind every robot is a human

5 comments

  1. Algernon_Asimov

    I can't help feeling that this article is stating the obvious in such a way as to outrage people.

    Of course humans are involved in teaching artificial intelligences to recognise things - just like they're involved in teaching natural intelligences to recognise things. Babies and code kernels aren't born knowing what a "cat" is; they have to be shown examples of cats and told that they are cats, and shown examples of dogs, cars, and trees, and told that they are not cats. The process is similar for both types of intelligence.
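
    To make that concrete, supervised training really is just "show labelled examples". Here's a minimal sketch of the idea in Python (the toy numbers, feature names, and labels are invented for illustration, not taken from the article):

    ```python
    # A classifier only "knows" what a cat is because humans
    # first supplied labelled examples of cats and not-cats.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical hand-labelled data: [weight_kg, ear_pointiness]
    examples = [[4.0, 0.9], [5.1, 0.8], [30.0, 0.2], [700.0, 0.1]]
    labels = ["cat", "cat", "not cat", "not cat"]  # the human teaching step

    model = LogisticRegression().fit(examples, labels)

    # The model can now generalise, but only within what the labels taught it.
    print(model.predict([[4.5, 0.85]]))  # expected: ['cat']
    ```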

    And, in a similar vein, "This call may be recorded for monitoring and training purposes." Any customer-facing entity, be it a flesh-and-blood person or a chips-and-wires robot, is going to have its performance reviewed to make sure it's doing its job right and to train it to do its job better.

    Where's the news in this article? Where's the insight? Where's the added value for users of in-home surveillance assistance devices?

    5 votes
    1. Deimos

      I didn't get the impression that it was trying to drive outrage. I think a lot of people consider AI and machine learning to be almost like forms of magic, where computers are able to learn things on their own, without needing regular human input/corrections. This is exacerbated by a lot of the major stories being about things like Google's AlphaZero, where it did learn to play games better than any human can, solely from knowing the rules and not getting any input about what constitutes good moves or strategies.

      But it's not magic, and most applications don't (and can't) work like that. It's basically just really, really good pattern-matching that often picks up on patterns that humans can't see. That can be both a good and a bad thing, and it has consequences that aren't obvious to a lot of people. For example, you can take advantage of these unseen patterns to do something like this - add extremely subtle noise to an image of a panda that doesn't change it in any way perceptible to humans, but makes the machine-learning algorithm certain that it's now a picture of a gibbon.
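
      For anyone curious what that trick looks like mechanically, it's typically done with something like the fast gradient sign method from the original panda/gibbon paper (Goodfellow et al.). A rough sketch, assuming a PyTorch classifier - `model`, the image tensor, and the label here are all placeholders:

      ```python
      # Sketch of an adversarial perturbation (fast gradient sign method).
      # 'model' is assumed to be any differentiable PyTorch image classifier.
      import torch
      import torch.nn.functional as F

      def fgsm_perturb(model, image, true_label, eps=0.007):
          """Add imperceptible noise that pushes the prediction away
          from the true label."""
          image = image.clone().requires_grad_(True)
          loss = F.cross_entropy(model(image), true_label)
          loss.backward()
          # Step each pixel slightly *up* the loss gradient: invisible
          # to humans, but it can flip the model's answer entirely.
          return (image + eps * image.grad.sign()).detach().clamp(0.0, 1.0)
      ```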

      I think it's important that these kinds of weaknesses are explained more often. A lot of people seem to expect that we'll be able to replace human judgment with algorithms and that this will remove bias, but it's not that simple. Because the input data was created by humans, it often contains those biases, and can end up biasing the AI system in the same way.

      1 vote
      1. Algernon_Asimov

        > This is exacerbated by a lot of the major stories being about things like Google's AlphaZero, where it did learn to play games better than any human can, solely from knowing the rules and not getting any input about what constitutes good moves or strategies.

        But it didn't learn the rules of Go by itself. It might use those rules better than humans, but it still got its original input from humans.

        Maybe I'm taking too much for granted. I assumed it was obvious that machine intelligences had to be taught things, just like organic intelligences. So, someone "revealing" this "secret", which was actually lying there in plain sight for anyone to see... feels like a conspiracy theorist to me.

        1. Deimos (edited)

          I have to say, it's strange to see you acting like the way AI and machine learning work should be completely obvious to everyone, when your usual stance about anything moderately technical is more on the side of "maybe that makes sense to you, but we're not all developers". Most people just think Alexa understands speech; they don't know anything about what goes into making that illusion of "understanding" possible.

          There's also a pretty large difference between being told the rules of a game and being told what constitutes a good way to play it. The original DeepMind Go-playing AI (AlphaGo) knew the rules, of course, but it was also given a huge amount of data (made up of games between human pro players) to learn from. It used those games to learn to recognize patterns that were "good play", and figured out how to make similar moves itself.

          AlphaZero didn't get any game data. It had no patterns to recognize or emulate. It played millions of games against itself, starting from completely random moves, and gradually developed its own methods of deciding what good and bad moves were, without any human influence at all. If the end result plays Go like a human, that's only because humans independently developed some similar ways of making those judgments.
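
          To show the difference in miniature, here's a toy self-play learner for tic-tac-toe in the same spirit (my own sketch of the general idea, not DeepMind's actual method, which uses neural networks and tree search). It starts from random moves and learns position values purely from its own wins and losses:

          ```python
          # Toy self-play: the agent's only teacher is the outcome of
          # games it plays against itself -- no human examples at all.
          import random

          values = {}  # board string -> estimated value, from X's point of view

          def winner(b):
              lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
              for i, j, k in lines:
                  if b[i] != "." and b[i] == b[j] == b[k]:
                      return b[i]
              return "draw" if "." not in b else None

          def choose_move(board, player, epsilon):
              moves = [i for i in range(9) if board[i] == "."]
              if random.random() < epsilon:
                  return random.choice(moves)  # explore: a purely random move
              best = max if player == "X" else min  # X prefers high values, O low
              return best(moves, key=lambda m: values.get(board[:m] + player + board[m+1:], 0.0))

          def self_play_game(epsilon=0.3, lr=0.1):
              board, visited, player = "." * 9, [], "X"
              while winner(board) is None:
                  m = choose_move(board, player, epsilon)
                  board = board[:m] + player + board[m+1:]
                  visited.append(board)
                  player = "O" if player == "X" else "X"
              # The only training signal is who won -- success is clearly defined.
              outcome = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
              for state in visited:
                  values[state] = values.get(state, 0.0) + lr * (outcome - values.get(state, 0.0))

          for _ in range(50_000):  # AlphaZero played millions; a toy game needs far fewer
              self_play_game()
          ```

          After enough games, the `values` table encodes judgments about good and bad positions that no human ever supplied - which is exactly why this only works when the game itself can say who won.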

          That kind of approach can work for games, where success is clearly defined (did the moves lead to winning the game?), but it can't really be applied to a lot of the other problems people are using machine learning for. A computer vision system can't just guess what an image contains and see if it works out, because it has no way of judging whether it was correct without getting a human involved.

          2 votes
          1. Algernon_Asimov

            > I have to say, it's strange to see you acting like the way AI and machine learning work should be completely obvious to everyone, when your usual stance about anything moderately technical is more on the side of "maybe that makes sense to you, but we're not all developers".

            "Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)"

            In my defence, there is a difference between knowing how developers write code and knowing that AIs need information to learn from. Well, I think so, anyway.

            1 vote