
How a Pioneer of Machine Learning Became One of Its Sharpest Critics


  1. onyxleopard

    I’ll first state that I am not familiar with Pearl’s work, but I do have some experience with Machine Learning within the field of Natural Language Processing (NLP). The interview at least made me think, but I really don’t agree with Pearl on much.

    Mathematics has not developed the asymmetric language required to capture our understanding that if x causes y that does not mean that y causes x.

    Has nobody told Pearl about propositional or modal logic? I want to be charitable and say that Pearl is simplifying things for the sake of the interview, or, less charitably, that he is so buried in his own field that he can’t be bothered to learn about work in related fields (work that is now more than 50 years old). Either way, I can’t take him seriously when he says this.
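
    For what it’s worth, the asymmetry is easy to express with tools far older than the quote suggests. Here’s a minimal truth-table sketch in plain Python (my own toy illustration, not anything from the interview) showing that material implication is not symmetric:

    ```python
    # Material implication: p -> q is false only when p is true and q is false.
    def implies(p, q):
        return (not p) or q

    # Enumerate the truth table and compare x -> y against y -> x.
    for x in (False, True):
        for y in (False, True):
            print(f"x={x!s:5} y={y!s:5}  x->y={implies(x, y)!s:5}  y->x={implies(y, x)}")

    # The two columns disagree whenever exactly one of x and y is true,
    # so the connective already captures "x implies y does not mean y implies x".
    ```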

    No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting.

    While I do agree that there is a lot of fancy curve fitting going on in the machine-learning sphere today (and maybe those efforts would be more fruitful if they were spread across more avenues of research), I don’t think anyone has shown that models built by fitting a large enough variety of functions are fundamentally insufficient for general intelligence, much less for usefulness. If we’re already surprised by how useful function fitting is, why should it be any more surprising if everything humans consider intelligence turns out to be achievable with higher-order function fitting? Pearl doesn’t give us any hard evidence that modeling the world can’t be reduced to function fitting; he simply assumes it.
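
    And just to be concrete about what “pure curve fitting” means here, a toy sketch (using numpy; the target function, noise level, and polynomial degree are all arbitrary choices of mine):

    ```python
    # Least-squares fitting of a polynomial to noisy samples of an unknown function:
    # the plainest possible instance of "curve fitting".
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 50)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)  # the "reality" we sample

    coeffs = np.polyfit(x, y, deg=5)   # choose coefficients minimizing squared error
    y_hat = np.polyval(coeffs, x)      # the fitted curve

    print(f"mean squared error: {np.mean((y - y_hat) ** 2):.4f}")
    # The fit can be arbitrarily good, yet nothing in the coefficients says
    # whether x causes y or merely correlates with it -- which is Pearl's complaint.
    ```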

    We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.

    This is what Jürgen Schmidhuber has been advocating for a long time now. I still don’t see a fundamental problem with equipping artificial agents with a large amount of memory, some fancy optimization algorithms, and a suitable cost function to motivate exploration and experimentation. That’s my personal view of intelligence: the ability to continually optimize and compress your model of reality by updating it with new information (and potentially manipulating your environment so as to maximize the likelihood of encountering new information). These are, fundamentally, optimization problems. I’m willing to admit that I may have been primed to think this way by the success of the fancy optimizers Pearl seems to have walked away from, but I feel the onus is on him (and on others who agree with him) to demonstrate that the hard problems of AI are the kinds of problems our existing tools are incapable of solving.
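
    As a toy sketch of the view I’m describing (the setup is my own invention, not Schmidhuber’s actual formulation): an agent whose entire objective is to keep its prediction error, a crude stand-in for description length, shrinking:

    ```python
    # A toy online learner: its "model of reality" is one running estimate,
    # and its objective is to reduce prediction error on a stream of observations.
    import numpy as np

    rng = np.random.default_rng(1)

    def world(t):
        """Hidden environment the agent samples from."""
        return np.sin(0.1 * t) + 0.05 * rng.standard_normal()

    estimate, lr = 0.0, 0.1
    for t in range(200):
        obs = world(t)
        error = obs - estimate   # surprise: how badly the model compressed this observation
        estimate += lr * error   # optimization step: update the model toward the data
        if t % 50 == 0:
            print(f"t={t:3d}  prediction error={error:+.3f}")

    # "Curiosity" in this framing would just be a bonus for actions expected to
    # shrink future prediction error, i.e., to improve compression.
    ```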

    We’re going to have robots with free will, absolutely. We have to understand how to program them and what we gain out of it.

    I’m not sure what Pearl’s definition of free will is, but the idea of an autonomous agent that is also programmed by someone or something else seems oxymoronic. I don’t know enough about the philosophy of free will to dispute this, but I’m having trouble understanding how an agent that truly possessed free will could be made to follow some program except by coercion (in which case I’m not sure I’d call it programming).

    It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says, “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.

    I’m not sure I buy this definition. I guess it makes sense from the perspective of normative ethics, but I think it presumes awareness of one’s actions. For instance, I don’t think a hungry child who takes food that doesn’t belong to them is evil. Moreover, if a destitute person steals a sandwich from someone who can afford to buy another, is the destitute person evil? What if the destitute person can choose between stealing from another destitute person or from a well-off person? What if they intend to pay for the meal later? I’m not convinced that evil is reducible to as simple an account as the one Pearl gives. I guess that’s my main takeaway from the whole interview: either Pearl is dumbing things down so far as to be uninteresting (and the interviewer doesn’t know enough to challenge the oversimplifications), or he is out of touch.
