11 votes

There’s no fire alarm for artificial general intelligence

9 comments

  1. Eric_the_Cerise
    Link

    6-7 paragraphs into this, I found myself thinking "this reads like something Eliezer Yudkowsky would write ..." Scroll back to the top, and hey! lookit that.

    Definitely worth your time.

    Two-year-old post, btw.

    5 votes
  2. [5]
    onyxleopard
    (edited)
    Link

    So, I might be taking the metaphor a step too far, but here are my thoughts:

    A fire alarm (really a smoke detector) typically works by using a radioisotope or an optical sensor and detecting interference from smoke. You can get false positives, but they’re generally pretty reliable. Such detectors work because, generally, where there’s smoke there’s fire. If you can detect smoke, you have a pretty reliable signal that there’s a fire (and even if you don’t have a full-blown fire, maybe you just overcooked your pop-tarts, which could quickly lead to a fire if you don’t check on your toaster). Importantly, if the smoke detector starts making noise, you can go and easily falsify whether there is a fire or not (with your eyes, or with your sense of heat, etc.). Even if your alarm is malfunctioning, you still know what fire is, and can determine its presence or absence with remarkably high reliability.
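    To make the falsifiability point concrete, here is a toy sketch in Python (the threshold and readings are invented, not taken from any real detector spec): the alarm asserts a claim that a bystander can immediately verify or refute.

    ```python
    # Toy model of a smoke detector. The numbers are invented; the point is
    # that the alarm's claim ("there is smoke/fire here") is directly checkable.

    SMOKE_THRESHOLD = 0.3  # arbitrary units of optical obscuration

    def alarm_triggered(sensor_reading: float) -> bool:
        """The detector's claim: obscuration above the threshold implies smoke/fire."""
        return sensor_reading > SMOKE_THRESHOLD

    def check_alarm(fire_actually_present: bool, sensor_reading: float) -> str:
        """A person can falsify the claim by looking for fire or feeling for heat."""
        if alarm_triggered(sensor_reading) and not fire_actually_present:
            return "false positive: the alarm was wrong, and we can tell"
        if not alarm_triggered(sensor_reading) and fire_actually_present:
            return "false negative: the alarm missed it, and we can tell"
        return "alarm and reality agree"

    print(check_alarm(fire_actually_present=False, sensor_reading=0.5))
    ```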

    Fire and smoke are pretty well understood (at least in terms of causality and phenomenology).

    Intelligence is not well understood. I guess maybe that’s the crux of this whole diatribe, but let me press this point. Until we have a well-accepted operating definition of general intelligence—in humans, in neural networks, or in entities from another galaxy—we can’t even answer the question of whether we have achieved GAI today, much less yesterday or N years in the future. The Wright brothers could safely say when they achieved mechanical flight (maybe not until after several trials) because they had an operating definition of what mechanical flight was. That is, they could make a falsifiable assertion that would commit them to the truth or falsity of the proposition, "The Wright brothers achieved mechanical flight."

    Today we are in the position where we have no generally accepted assay for GAI. That is, as of today, the proposition "DeepMind has achieved GAI" is unfalsifiable. We can falsify things like "AlphaStar reached Grandmaster level on the StarCraft II ladder with all three StarCraft races". But this is a far, far weaker proposition.
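    As a rough illustration of that asymmetry (the cutoff below is an invented stand-in, not how the ladder actually works), the narrow proposition comes with a test you could run, while the GAI proposition has no assay behind it at all:

    ```python
    # The narrow claim is testable against a concrete criterion; the general
    # claim has no accepted test to implement. (Cutoff value is illustrative.)

    GRANDMASTER_CUTOFF = 6000  # hypothetical ladder-rating cutoff

    def reached_grandmaster(ladder_rating: int) -> bool:
        """Falsifiable: compare the agent's ladder rating to a public criterion."""
        return ladder_rating >= GRANDMASTER_CUTOFF

    def has_achieved_gai(agent) -> bool:
        """Unfalsifiable today: there is no well-accepted assay to implement here."""
        raise NotImplementedError("no operating definition of GAI to test against")
    ```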

    The same artificial agent that can beat highly trained humans at StarCraft II cannot spell its name, tie a shoe with laces, multiply small integers, or do any of a myriad of other trivial tasks that human school children learn, much less complex or difficult tasks such as writing an essay on a film, determining if a hamburger has been thoroughly cooked, or playing a piano concerto. Could DeepMind, with sufficient time and resources, synthesize an agent capable of doing any one of these tasks? Possibly. Could they synthesize a single agent that could accomplish all of these tasks? I will contend that no, that is science fiction. But even if they could, that still would not demonstrate GAI to some, because we would still lack a falsifiable test (assuming we reject the satisfactory accomplishment of the union of all these tasks as indicative of GAI, because we can always come up with another task, like tying a knot in a cherry stem, that the agent wouldn’t be able to do without n more hours of engineering effort).

    So, if we are being scientific about things (and we should be), then we are not in a position to make falsifiable claims about when GAI will or won’t be achievable until we first come up with an assay for GAI. And I think we’re still decades away from achieving a viable assay. Maybe progress toward GAI can occur in the absence of such an assay, but I’m skeptical. The scientists working on the Manhattan Project created falsifiable experiments, and by lining up a succession of such experiments and revising future experiments based on the positive or negative results of previous ones, they made progress toward a goal.

    In the field of AI, the state of the art is laughably unscientific. Researchers generally pick some task off the shelf (or invent one), gather some data (for evaluation or training or both), or maybe take that off the shelf too, and create experiments that can falsify whether their model can accomplish some particular task (or a family of highly related tasks) to some degree of accuracy. If the researchers are particularly rigorous, they may even demonstrate that their model architecture design/hyperparameters/features/other aspects of their work were selected optimally within some search space. However, recent work on neural architecture search suggests that "On average, the state-of-the-art NAS algorithms perform similarly to the random policy", so it’s not clear if the field (on average) is even oriented correctly.
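    As a minimal sketch of that loop (assuming scikit-learn is available; the dataset and model are arbitrary off-the-shelf stand-ins), note that the only proposition such an experiment can falsify is a claim about accuracy on this one narrow task:

    ```python
    # Typical narrow-AI workflow: pick an off-the-shelf task and dataset, train a
    # model, report a score. This falsifies "the model reaches roughly X% accuracy
    # on this benchmark" and nothing more general than that.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    accuracy = model.score(X_test, y_test)
    print(f"digit-classification accuracy: {accuracy:.3f}")  # says nothing about GAI
    ```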

    By no means do I mean to dismiss the entire field, but I think that, on the whole, it can be characterized as work that optimizes narrow AIs via machine learning of various flavors. The reason that most practitioners are hesitant to make claims about how close we are to GAI is that we need some giant leaps in our understanding of intelligence and general intelligence before we reach operating definitions. For some, maybe it is true that they cannot answer because they are incapable of thinking scientifically about big problems, or not intellectually curious enough to do so. But I’d argue the real reason is that they would, ostensibly, prefer to work scientifically, and as such they aren’t interested in working on unscientific problems. I think the field that needs a big kick in the pants in this regard, which would flip the onus back onto AI practitioners, is Philosophy.

    In conclusion, asking the question is equivalent to asking a scientist, "How close are we to a flu vaccine?" before the advent of the germ theory of disease. How can the scientist answer this question when they don’t even have a working definition of the flu virus (or viruses in general) or of vaccines? Maybe you’ll get an answer, but if that answer isn’t "I don’t know how to answer the question", then it isn’t a scientific one.

    5 votes
    1. [4]
      skybrian
      Link Parent
      While the definition of artificial general intelligence is indeed a bit nebulous, you're taking it too far when you say it's "unfalsifiable". We know that AlphaGo doesn't have it because all it...

      While the definition of artificial general intelligence is indeed a bit nebulous, you're taking it too far when you say it's "unfalsifiable". We know that AlphaGo doesn't have it because all it can do is play Go. DeepMind's latest project is more general since it can play many Atari games, Go, and chess, but it can't play poker.

      Predicting the future is hard, but we have a reasonably good grasp of present capabilities, and a generally accepted definition of AGI that doesn't include today's machine learning software.

      3 votes
      1. [3]
        onyxleopard
        Link Parent

        We know that AlphaGo doesn't have it because all it can do is play Go.

        Right, I made this point (I chose AlphaStar instead of AlphaGo because it is more recent work).

        Predicting the future is hard, but we have a reasonably good grasp of present capabilities

        I can agree on this.

        and a generally accepted definition of AGI that doesn't include today's machine learning software.

        I disagree. I have seen no convincing argument that GAI can’t be achieved with theories like Schmidhuber’s and current compute platforms, but then again, I think any counterargument relies on semantic handwaving about the definition of GAI, which is why I maintain that, as I said, we don’t have a well-accepted operating definition of GAI (saying what GAI isn’t isn’t sufficient).

        3 votes
        1. [2]
          skybrian
          Link Parent

          I guess you're arguing that AGI might be a consequence of an existing theory? I agree that we don't know whether that's true, but I think that's more like predicting the future than understanding the present. For example, there are a lot of non-obvious consequences of axioms in mathematics, but we don't know them until someone has proven them. A theory is not a demonstration.

          So far it seems unlikely that we will get AGI "for free" as an accidental consequence of something else, though.

          2 votes
          1. onyxleopard
            Link Parent

            Yep. Our ability to predict when GAI might be achieved has to do with whether we know we have the tools to build it. It’s possible the Manhattan Project could have gotten to the point where they knew they could build the bomb but decided not to (some on the project thought it was morally wrong, given they believed it would be used against people). It would likely have been realized eventually anyway, though. The point is that there was a time when the inkling of atomic weapons was sci-fi, but achievable. Maybe that time is now for GAI, but we’re lacking our Feynman, or maybe our current tools are the metaphorical equivalent of gunpowder.

            2 votes
  3. skybrian
    Link

    From the article:

    But that’s not the point. The point is the silence that fell after my question, and that eventually I only got two replies, spoken in tentative tones. When I asked for concrete feats that were impossible in the next two years, I think that that’s when the luminaries on that panel switched to trying to build a mental model of future progress in machine learning, asking themselves what they could or couldn’t predict, what they knew or didn’t know. And to their credit, most of them did know their profession well enough to realize that forecasting future boundaries around a rapidly moving field is actually really hard, that nobody knows what will appear on arXiv next month, and that they needed to put wide credibility intervals with very generous upper bounds on how much progress might take place twenty-four months’ worth of arXiv papers later.

    (Also, Demis Hassabis was present, so they all knew that if they named something insufficiently impossible, Demis would have DeepMind go and do it.)

    [...]

    We are uncertain about how much cognitive effort it takes to perform tasks, and how easy it is to cheat at them, and the first 'impressive' tasks to be accomplished will be those where we were most wrong about how much effort was required. There was a time when some people thought that a computer winning the world chess championship would require progress in the direction of AGI, and that this would count as a sign that AGI was getting closer. When Deep Blue beat Kasparov in 1997, in a Bayesian sense we did learn something about progress in AI, but we also learned something about chess being easy.

    [...]

    Was AlphaGo smoke under the door, a sign of AGI in 10 years or less? People had previously given Go as an example of What You See Before The End.

    2 votes
  4. pseudolobster
    Link

    Well that's terrifying. Says a lot about the shortcomings and blind spots of "Natural General Intelligence".

    2 votes
  5. dian
    Link

    If you are worried about AGI, please consider joining the community at People Against AGI.

    https://old.reddit.com/r/PeopleAgainstAGI/