I don’t buy this argument that current specific/narrow AI is ‘just statistics’ and that AGI will be based on ‘physics’. I mean, I agree that statistical models are statistical, but I don’t agree that they are somehow fundamentally deficient. If we do subscribe to Gardner’s theory of multiple kinds of intelligence, there are kinds of intelligence that have nothing to do with the world outside of our internal states (consciousness, if you believe in it). For instance, language faculties: there are parts of the human neurological system devoted to interpreting the sound waves we perceive as speech, and neurons devoted to translating our verbal intentions into speech sounds by way of our vocal tract, tongue, jaw, and other points of articulation. And there are again separate systems for processing visual information and reproducing it in reading and writing. I will admit that the brain is obviously involved in those systems. But the internal part, the part behind the Markov blanket, is the part that’s interesting and much farther off in terms of our understanding. I think we have linguistic intelligence that goes beyond the physical: we understand relations between words, we can learn and interpret non-linguistic symbols, and we can even construct logical proofs or mathematical simulations in our minds that are not grounded in anything physical.
Basically, there are components of intelligence that I do admit have to do with I/O, i.e., the human body’s drivers. It’s the ‘operating system’ that I think we’re much farther from understanding and being able to implement. I’m not sure both are necessary, but it’s possible that in order to reach human-level general intelligence you’ll need the human body or an analogue. And something I haven’t seen people discussing is that maybe human-level intelligence is really an emergent phenomenon that depends on the physical world and on the human faculties for I/O. That is, maybe human intelligence is different from dolphin intelligence, which is different from rat intelligence, which is different from cuttlefish intelligence, because of our senses and our bodies and our evolved opposable thumbs, vocal folds, etc. And maybe, if you throw a pile of neuromorphic chips into datacenters and network them together, it will never achieve AGI, because its physical interactions with the world will be so different from the human experience.
I still think we’re a century off from understanding the human brain and human intelligence. There’s still way too much hand-waving in theories like Gardner’s. I’m more partial to ideas like Schmidhuber’s: give a system enough long-term memory capacity to store a model of the world, and prime it with the goal of continually updating that model by learning to compress it, rewarding the cases where compression leads to generalizations. That is, build in a curiosity reward function so that it learns to learn. But how to formally represent such a model, and how human brains manage to store and compress such models (if that theory is actually apt), is still far outside our understanding or ability to implement. And I still haven’t heard a complete and rational argument for why such models must be ‘physical’ and not statistical. I can still entertain the idea that whatever human intelligence is, it could be equivalent to a bunch of linear algebra; I haven’t seen good evidence on either side of that debate. I could entertain the idea that human intelligence is emergent from a bunch of specific intelligences. I could also entertain the idea that human intelligence is some monolithic process, though cases such as savant syndrome seem, at least naively, to indicate that the human brain has resource constraints that generally limit our intelligence (but we’d need a better definition of intelligence to explore this).
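To make the compression-progress idea a bit more concrete, here is a minimal toy sketch (my own illustration under loose assumptions, not Schmidhuber’s actual formulation or any real library’s API; the WorldModel class and its add-one frequency ‘compressor’ are placeholders I made up): the intrinsic reward for an observation is the number of bits the model saves on that observation after updating on it, so the agent is drawn toward regularities it can still learn to compress rather than toward what is already predictable or toward incompressible noise.

    import math
    import random

    class WorldModel:
        """Toy predictive 'compressor': tracks how often each observation occurs.
        (Hypothetical placeholder, not a real library class.)"""

        def __init__(self):
            self.counts = {}
            self.total = 0

        def code_length(self, obs):
            """Approximate bits needed to encode obs under the current model,
            using a simple add-one frequency estimate as a stand-in compressor."""
            p = (self.counts.get(obs, 0) + 1) / (self.total + len(self.counts) + 1)
            return -math.log2(p)

        def update(self, obs):
            self.counts[obs] = self.counts.get(obs, 0) + 1
            self.total += 1

    def curiosity_reward(model, obs):
        """Intrinsic reward = compression progress: bits saved on obs after the
        model updates on it (encoding cost before learning minus cost after)."""
        cost_before = model.code_length(obs)
        model.update(obs)
        cost_after = model.code_length(obs)
        return max(0.0, cost_before - cost_after)

    # A skewed but learnable observation stream: the intrinsic reward shrinks as
    # the regularity is learned; pure noise would never yield lasting progress.
    model = WorldModel()
    for t in range(20):
        obs = random.choice(["A", "A", "A", "B"])
        print(f"t={t:2d} obs={obs} intrinsic_reward={curiosity_reward(model, obs):.3f}")

In a full agent this intrinsic reward would be added to (or stand in for) whatever external reward the learner maximizes; the only point of the sketch is that ‘curiosity’ can be written down as compression progress, nothing more.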
I have to say, though, I got very little out of this talk. It was very scattered in its breadth, and there was too much diving into the minutiae of irrelevant history. I really hate this kind of non-technical talk and the proliferation of the format at conventions. It gives egotists a chance to paint a veneer over their lack of real insight and to peddle their personal brands as if they were authoritative. If you start your talk by hand-waving through Howard Gardner without really addressing any of that theory, you’ve already failed at what you claimed to set out to do. I’m convinced that Schmidhuber, or someone more like him (maybe equally pompous and egotistical, but also genuinely insightful, and someone who has already made major contributions), will continue to achieve in the field of AI, while I would expect someone like Morgan to fail to make any real progress in his lifetime, all while continuing to profess that we’re just ‘years away’ for many, many years to come. Morgan should take notes from the philosopher Yogi Berra: “It’s tough to make predictions, especially about the future.”
Seems like the person is Peter Morgan, who is the CEO of Turing.AI.
Thanks, fixed.