13 votes

Why general artificial intelligence will not be realized

23 comments

  1. [9]
    Eric_the_Cerise
    (edited )
    Link

    He first goes to the trouble of clearly drawing a distinction between AGI, which he defines as human-level (but not explicitly human-like) general intelligence, and "strong AI", which he defines as human-like intelligence. He then goes on to argue that computers cannot develop AGI because they cannot learn experientially (cannot be "in the world") like humans.

    Others here have already pointed out (and, frankly, it seems obvious) that an AGI can be put in a body—physical or virtual—and allowed to "grow up" and learn like humans do, so his whole argument seems specious.

    But also, he seems to completely miss his own point, that his AGI vs "strong AI" distinction highlights the (perhaps disturbing) possibility that humans could well develop an AGI with human-level intelligence which nevertheless thinks in some way that is completely different/foreign to how humans do it. Our first contact with alien intelligence may well be in our own computers ... a la Battlestar Galactica.

    ETA: More generally ... I forget who, but some famous sci-fi author said (paraphrased) that futurists who predict humans will one day achieve "fill-in-the-blank" are almost always correct, while those who predict we will never accomplish it are almost always wrong ... which I think applies here, as well.

    11 votes
    1. onyxleopard
      Link Parent

      But also, he seems to completely miss his own point, that his AGI vs "strong AI" distinction highlights the (perhaps disturbing) possibility that humans could well develop an AGI with human-level intelligence which nevertheless thinks in some way that is completely different/foreign to how humans do it.

      We already observe non-human intelligence in other animals such as apes, birds, porpoises, and cephalopods, so I agree that it seems rather myopic to think that intelligence is necessarily limited to humans.

      3 votes
    2. [2]
      PendingKetchup
      Link Parent

      This paper doesn't quite make it, but there's a strong argument that AI research will go much more slowly after we get about as far as we can go training non-agents on the corpus of the Internet. You can hook an AI up to a robot and have it learn from that, but it learns in real time, which is much, much slower. You can hook it up to a simulation and run it faster, which has had some strong results in the field of playing video games, but there's a limit to the speed at which you can simulate, and your simulation can't include a lot of things one might want to interact with (for example, humans).

      An individual human mind is also not just the tacit knowledge acquired during the person's lifetime, but also all the tacit world knowledge learned into the genome and the body it produces. Hundreds of millions of years of trial and error are a potentially unrepeatable capital investment.
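
      A back-of-the-envelope sketch of that real-time bottleneck (the speedup and worker counts below are made-up illustrations, not measurements): a physical robot can never gather more than 24 hours of experience per wall-clock day, while a simulator can run faster than real time and be parallelized.

```python
# Illustrative arithmetic only; the speedup and worker counts are assumptions.
SECONDS_PER_DAY = 86_400

def hours_of_experience_per_day(sim_speedup, parallel_envs=1):
    """Hours of agent experience gathered per wall-clock day."""
    return SECONDS_PER_DAY * sim_speedup * parallel_envs / 3600

print(hours_of_experience_per_day(1))        # a physical robot: 24 hours, no more
print(hours_of_experience_per_day(100, 64))  # a 100x simulator on 64 workers: 153,600 hours
```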

      3 votes
      1. skybrian
        Link Parent

        It can be useful to think about this in terms of learning efficiency. Many interesting machine learning algorithms are inefficient at learning because researchers found that just making them scale is hard enough; once they scale, you can throw time and money at the problem and have a bunch of machines in a data center learn from a bigger dataset. It doesn't make sense to optimize all that much when you're trying to be the first to do something new.

        But doing research with unoptimized code doesn't tell us anything about how much better you could do if you really tried. There are researchers working on learning efficiency too. It's too soon to say how much better they will be able to do, but my guess is that there will be dramatic gains, because there is often a lot of low-hanging fruit in a young field.

        You've made a guess at a fundamental barrier, and maybe a barrier is there, but it's still only a guess. Actually showing that a barrier exists would be an interesting research result.

        2 votes
    3. [5]
      unknown user
      Link Parent

      his AGI vs "strong AI" distinction highlights the (perhaps disturbing) possibility that humans could well develop an AGI with human-level intelligence which nevertheless thinks in some way that is completely different/foreign to how humans do it

      This appears entirely impossible to me.

      @onyxleopard mentioned that there is alien intelligence on the planet. We can generally reason about what your dog is looking for, or why apes perform mating rituals the way they do, or why pack mentality comes about in wolves, but we can't perceive the world through their eyes.

      My guess? It's because we didn't design animals.

      Humans have incredible pattern recognition skills. We also live by patterns: we stick to them, we seek them out, and we make them when necessary. I don't think we can design something inhuman and still bear any semblance of understanding of it because we are inherently limited in this aspect. It's why we ascribe human intelligence and agency to concepts and things that are inherently inhuman: circumstance, evolution, human-friendly animals ("Your dog is so smart! She understands you so well!"), appliances breaking down... I don't think we can make something outside of that big-but-finite area.

      My guess is that there can never be non-human strong AI (i.e. AI that has the capacity for self-motivated reasoning) that has been designed by humans. You could also argue that, if it were possible, it would be ethically wrong: creating something we may no longer be able to control, thus flagrantly shedding our responsibility for what this new entity could do.

      1 vote
      1. [4]
        Eric_the_Cerise
        Link Parent

        This appears entirely impossible to me.

        Ironically, to me, it seems almost inevitable. We are attempting to build an artificial model of human intelligence -- something that we still do not even understand .... Seems inevitable that our model is going to be, at best, a close approximation, and quite likely, we will stumble into something quite unlike human cognition.

        2 votes
        1. [3]
          unknown user
          Link Parent

          You're making an even clearer case for why this wouldn't happen. We are not attempting to build artificial human intelligence: we're building intelligence that, from the vantage point of a human being, looks human-like to us. It's a somewhat subjective metric, and the Turing test can only go so far.

          There is research into how the human brain/mind/intelligence works – all together and separately – and there are cues from this research that engineers take to augment certain technologies, like neural networks, but that research is not, as a whole, the foundation or the motivation for AI research. An AI doesn't have to resemble human intelligence to be considered "strong" or "general": it has to be evidently, obviously capable of problem-solving at a level considered human.

          You may find a lot of contention on how exactly to test that, and how to interpret the results, but I don't think you'll find a definition that specifically requires an AI to be modelled after humans.

          My assumption is that it's going to happen regardless (see above), because that's the only measure (we think) we can objectively compare against. Other models may simply be sieved away.

          2 votes
          1. onyxleopard
            Link Parent

            FYI, there is a segment of the current ML research community that is focused on meta-learning. That is, so-called “learning to learn”. Some are convinced that deep neural networks are already capable of such and are studying architectures to try to understand how they essentially program themselves.
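
            As a toy illustration of that inner/outer-loop idea (a sketch only; the task setup, learning rates, and the first-order simplification are all assumptions, not any specific published method), here is a tiny gradient-based "learning to learn" loop in the spirit of MAML: the outer loop learns an initialization that adapts to a new task in a single inner gradient step.

```python
# Toy first-order meta-learning sketch: tasks are y = a*x for random slopes a,
# the model is y_hat = w*x, and we meta-learn an initialization for w that
# adapts well after one inner gradient step. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, slope, xs):
    """Squared error and its gradient for predicting y = slope * x with weight w."""
    err = w * xs - slope * xs
    return np.mean(err ** 2), np.mean(2 * err * xs)

w_meta = 0.0                    # the meta-learned initialization
inner_lr, outer_lr = 0.1, 0.01

for step in range(2000):
    slope = rng.uniform(1.0, 3.0)          # sample a task
    xs = rng.uniform(-1.0, 1.0, size=20)

    _, g_inner = loss_and_grad(w_meta, slope, xs)
    w_task = w_meta - inner_lr * g_inner   # inner loop: adapt to this task

    # Outer loop (first-order): nudge the initialization so the adapted
    # weight does better, ignoring second-order terms through the inner step.
    _, g_outer = loss_and_grad(w_task, slope, xs)
    w_meta -= outer_lr * g_outer

print("meta-learned initialization:", w_meta)   # ends up near the mean slope (~2)
```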

            3 votes
          2. onyxleopard
            Link Parent

            Also on a separate topic, there are other segments of the research community interested in actually coming up with objective assays for intelligence. E.g., Chollet’s On the Measure of Intelligence, but this is a fairly young program AFAIK. Subjective measures like Turing tests have all sorts of issues that have been thoroughly dissected.

            1 vote
  2. Eric_the_Cerise
    Link

    The quest for AGI has been a long series of local maxima... every decade or two, we discover Some New Thing that people believe is going to be the thing that gives us true AGI. Then we optimize the bejeezus out of it, only to eventually, reluctantly admit that it's not actually the Thing.

    Right around then is when people start promoting the idea that we cannot actually create AGI.

    Neural nets, Deep Learning, Big Data training sets, they're all pieces of the puzzle that will eventually get us there; but we are still missing pieces. Alone, these technologies are not the Thing that will give us AGI.

    7 votes
  3. mrbig
    Link

    The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines, but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence. This is known as weak AI.

    However, many AI researchers have pursued the aim of developing artificial intelligence that is in principle identical to human intelligence, called strong AI. Weak AI is less ambitious than strong AI, and therefore less controversial. However, there are important controversies related to weak AI as well.

    This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Although AGI may be classified as weak AI, it is close to strong AI because one chief characteristic of human intelligence is its generality. Although AGI is less ambitious than strong AI, there were critics almost from the very beginning.

    One of the leading critics was the philosopher Hubert Dreyfus, who argued that computers, which have no body, no childhood, and no cultural practice, could not acquire intelligence at all.

    2 votes
  4. [6]
    tesseractcat
    Link

    I may have misunderstood the article, but isn't one of the major benefits of AI the ability to self-learn... i.e. to teach oneself the 'tacit knowledge' mentioned in the article? I really doubt that anyone will make an AGI that you can just make install and have a conversation with. Much like a human baby, an AGI would teach itself many of the things that we humans also have to learn.

    Assuming there isn't some magic 'consciousness molecule' in our brain that makes us intelligent, I don't see why we couldn't stick a proto-AGI into a robot body, or some advanced VR simulation, and have it learn in much the same way a baby would learn.

    2 votes
    1. [3]
      ruspaceni
      Link Parent

      I don't see why we couldn't stick a proto-AGI into a robot body, or some advanced VR simulation, and have it learn in much the same way a baby would learn.

      I agree. Like, sure, the human body has a bunch of wacky processes going on under the hood which might be crucial in making this whole thing work. Gut microbiomes, hormones, that 'two minds' theory, and god knows what else. But there's no reason to think it can't be recreated in some form. AIs already have hidden layers and make their own abstractions, although I don't know enough to say whether that's a real analogue or not.

      Also, the more we've discovered about the animal world, the more we've realised there's nothing uniquely 'special' about humans. Other animals have been shown to exhibit self-awareness, tool use, and even basic math, iirc. So to me it's a bit fanciful to say that there's no way a machine can be intelligent, unless you're defining intelligence as "something fleshy animals have".

      4 votes
      1. [2]
        Amarok
        Link Parent

        Another way to put it might be that we're around 4.5 billion years of brutally field-tested spaghetti-code ahead of any AGI right now... and we haven't got the source, just the compiled binaries. Reverse engineering that is not going to be easy.

        Our code doesn't operate in a pristine world where one and zero are the only choices, either.

        6 votes
        1. Duke
          Link Parent

          Another way to put it might be that we're around 4.5 billion years of brutally field-tested spaghetti-code ahead of any AGI right now... and we haven't got the source, just the compiled binaries. Reverse engineering that is not going to be easy.

          That's the most beautiful description of evolution that I've ever seen.

          1 vote
    2. [2]
      skybrian
      Link Parent

      There is work on learning basic movement and manipulation that's done with robots; sometimes virtual robots, to reduce cost, with validation on real robots.

      Also, machine learning models can be copied and training can be distributed. Once a research team figures out how to do something, others can start with pre-trained models rather than starting from scratch. This is fundamentally different from how animals learn and has scary implications if it can be made to work for more interesting tasks.
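
      A hedged sketch of that "copy someone else's training" workflow (assuming PyTorch/torchvision; the 10-class task and the random batch below are stand-ins, not a real dataset): load weights pre-trained on ImageNet, freeze them, and fine-tune only a small new head.

```python
# Start from a copied, pre-trained model instead of training from scratch.
# The data here is random filler purely to make the sketch self-contained.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained copy

for p in model.parameters():       # freeze everything learned on ImageNet
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10)   # new head for 10 new classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)         # stand-in batch
labels = torch.randint(0, 10, (8,))

loss = loss_fn(model(images), labels)        # one fine-tuning step
loss.backward()
optimizer.step()
```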

      1. tesseractcat
        Link Parent

        Yeah, definitely. By make install I meant without a pre-trained model. One of the benefits of an AGI (I assume, as long as it's similar to current NN techniques) is that all the data is right there to copy, unlike a real brain, where copying is very, very difficult.

        1 vote
  5. [6]
    onyxleopard
    Link

    I don’t have a strong belief either way whether we are close to building general artificial intelligence or not, but I think it’s fascinating that I’m coming across this right after a co-worker linked to a couple pieces of text purportedly generated by the GPT-3 language model.

    GPT-3 is one of the largest language models, with 175 billion parameters, trained on trillions of English words collected from text on the web. The model can be seeded with a prompt and then generate text based on that prompt. I will not argue that the samples I linked to demonstrate general intelligence. But they do appear to be novel, coherent content. If I hadn’t been informed beforehand that these texts were artificially generated, I doubt I would have guessed as much.

    Still, GPT-3 requires a hell of a lot of compute power to train and store the model, and humans are capable of much more than GPT-3 without needing to read trillions of words. GPT-3 is more of a demonstration that, with enough data and compute power, it is possible to create a system that at least appears to be able to write short stories as well as some humans, maybe even better than many human writers can. Depending on how it is prompted, GPT-3 is supposedly capable of performing other tasks beyond writing short stories, but since it is not widely available right now, we’ll have to wait to see more of its capabilities. It’s possible GPT-3 still has difficulties similar to its precursors and that the samples we’re seeing have been carefully selected while less impressive outputs have been discarded.

    But, notably, GPT-3 does not exist in the real world like humans do. Everything it has learned, it must have learned from reading text in the giant web corpus it was trained on. In any case, I think it is worth entertaining the possibility that general artificial intelligence is possible, and moreover that it is not necessarily far off.
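
    GPT-3 itself isn’t publicly downloadable, but its smaller precursor GPT-2 is, so a rough sketch of the prompt-then-continue workflow (assuming the Hugging Face transformers library; the prompt is made up) looks something like this:

```python
# Seed a language model with a prompt and let it continue the text.
# Uses GPT-2 as a stand-in, since GPT-3 is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Once upon a time, in a data center far away,"
outputs = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```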

    2 votes
    1. [3]
      skybrian
      Link Parent

      It's pretty impressive how much a language-based learning approach can absorb about things that aren't strictly language, but I still think GPT-3 mostly just knows a lot about language. It seems like that could be overcome if it were somehow combined with other work on learning from images and video. I don't think anyone knows how to do that yet, but I see no reason why we should rule it out in principle.

      1 vote
      1. [2]
        onyxleopard
        (edited )
        Link Parent

        I think theories of intelligence like Schmidhuber’s may be the answer. Basically, the methodology currently employed for machine-learned models in the research community is limited by computer memory. If one had enough memory, one could let a model learn continuously and keep updating a persistent model indefinitely. The catch is that, since any real system has finite memory (as physics requires), the model eventually needs to lossily compress the information it is exposed to. Schmidhuber’s theory is that a model given a reward function such that it only updates itself when it can demonstrate that the updated model compresses its experience more efficiently than the previous one would be capable of continuous learning. Such a model would then be incentivized to seek out only those inputs that are sufficiently information-rich in the kind of content that would help it improve its model. This would be a sort of self-supervised, active learning. I wouldn’t be surprised if this is how human intelligence bootstraps itself as well. Though, it seems humans need more than bootstrapping, as human children usually require instruction and stimulation from other humans; otherwise their intelligence does not develop to the same level as that of children raised in stimulating environments.
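
        A toy sketch of that compression-progress idea (an illustration under simplifying assumptions, not Schmidhuber’s actual formulation): a tiny predictive model assigns code lengths to a stream of observations, and the intrinsic reward for each model update is how many bits that update saves on the data seen so far. Once the source is fully learned, the reward dries up, which is exactly the incentive to go look for richer inputs.

```python
# Intrinsic reward = compression progress: bits saved on the history by
# updating a simple frequency model of a binary stream. Illustrative only.
import math
import random

random.seed(0)

history = []
counts = [1, 1]   # Laplace-smoothed counts for symbols 0 and 1

def code_length(data, counts):
    """Bits needed to encode `data` under the frequency model `counts`."""
    total = sum(counts)
    return sum(-math.log2(counts[x] / total) for x in data)

for t in range(1, 501):
    obs = 1 if random.random() < 0.9 else 0   # a biased, learnable source
    history.append(obs)

    bits_before = code_length(history, counts)
    counts[obs] += 1                          # "update the model"
    bits_after = code_length(history, counts)

    reward = bits_before - bits_after         # compression progress, in bits
    if t % 100 == 0:
        print(f"t={t:3d}  intrinsic reward: {reward:.4f} bits saved")
```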

        2 votes
        1. tesseractcat
          Link Parent

          I definitely agree. While the number of neurons doesn't directly correlate with intelligence, I think there is definitely a connection. How do we know that, if we had a computer capable of simulating the equivalent of 8.6×10^10 neurons, we wouldn't be able to create an AGI somewhat easily? Maybe the issue is less complex than we really think, and it's just a matter of computing resources.

          (Although I bet we'll figure it out earlier, since who knows how efficient our brains are at reaching AGI level)

          1 vote
    2. [2]
      unknown user
      Link Parent

      If I was not informed beforehand that these texts were artificially generated, I doubt I would have guessed as such.

      This is what bothers me. GPT-3 texts may start off alright, but their semantic structure is incoherent. It's awkward. I could excuse the images being all wonky 'cause hey, it's art and maybe it's a reference I don't understand, but the narrative is erratic. (Butterfly, and an angel? Is that from the Upanishads or something? Gaiman's The Sandman had a lot of that, and it still made sense even if you hadn't read as much literature as Gaiman has.)

      My uncanny valley for AI-produced stuff is long and vast. GPT-3 is impressive in that it generates grammatically coherent, context-aware text, but it's oh so clearly not a human-written thing. Do people skip these narrative road bumps, in a "suspension of disbelief" way? Do they not see them? Do they notice them on the first read but then realize it's a GPT thing and excuse them retroactively?

      1 vote
      1. onyxleopard
        Link Parent

        Well, hopefully someone will set up tests where you’re given sets of 3 texts: 2 from a human and 1 from GPT-3, and you have to pick out the GPT-3 text. I think most humans would be hard pressed to tell the difference. The most impressive thing about GPT-3 isn’t really its current performance. It’s that they didn’t change the architecture from GPT-2 much at all: they just gave it more parameters and a larger training corpus, and it seems to scale up without diminishing returns. If anyone decides to throw more money at training a larger model, it should only improve. And as hardware gets cheaper, and as we come up with cleverer tricks, those will also potentially improve these kinds of language models. I think a lot of research time is going to be spent trying to get these models fine-tuned on menial tasks, and I think there’s going to be a lot of success at that.
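
        A minimal sketch of that 3-way forced-choice test (the texts are placeholders; a real study would need many raters and real samples): show two human texts and one machine text in random order and record whether the rater spots the machine one. Accuracy near 1/3, i.e. chance, would mean the texts are effectively indistinguishable.

```python
# Three-alternative forced-choice test: can a rater pick out the machine text?
import random

human_texts = ["<human-written sample 1>", "<human-written sample 2>"]
machine_text = "<model-generated sample>"

items = [(t, "human") for t in human_texts] + [(machine_text, "machine")]
random.shuffle(items)

for i, (text, _) in enumerate(items, 1):
    print(f"[{i}] {text}\n")

guess = int(input("Which one was machine-generated? (1-3): "))
print("Correct!" if items[guess - 1][1] == "machine" else "Nope, that one was human-written.")
```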

        1 vote