I've seen quite a lot of discussion about this post online. My feeling is that the article makes some good points, and some really bad ones. It is certainly true that there is a disconnect between what the public has been led to believe is possible in the next ~5 years and what the reality of the situation is. I don't know any researchers who think deep learning is going to give us AGI, however. It is also clear that deep learning has enabled us to do things that were unthinkable only a few years ago, and we haven't come close to the limit of what it makes possible.
This post tasted like the kind of celebrity hype it was purporting to denigrate. Andrew Ng, who they call out individually, famously said “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future,” which is a far cry from hype.
It's pretty impressive that AI can (almost) match trained radiologists at reading scans.
The comments also point out that the giant upscaling of necessary compute power is on the training side: many of the models can be evaluated quite efficiently. And there's been recent work on shrinking the models even further if you're willing to give up a couple of percent of accuracy.
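To make the "shrinking the models" trade-off concrete, here is a minimal sketch of post-training 8-bit quantization, one common compression technique (not necessarily the one the cited work uses). The weight matrix and the per-tensor scaling scheme are illustrative assumptions, not taken from any specific paper or library:

```python
import numpy as np

# Hypothetical example: compress a float32 weight matrix to int8 plus a
# single scale factor. Storage drops 4x; reconstruction error stays small.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(256, 256)).astype(np.float32)

scale = np.abs(weights).max() / 127.0                      # per-tensor scale
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale                     # approximate weights

size_ratio = q.nbytes / weights.nbytes                     # int8 vs float32
rel_error = np.abs(dequant - weights).max() / np.abs(weights).max()
print(f"size ratio: {size_ratio:.2f}, max relative error: {rel_error:.4f}")
```

The quantized weights take a quarter of the memory, and the worst-case per-weight error is bounded by half a quantization step, which is why accuracy typically drops only a little: inference runs against `dequant` (or directly in int8 arithmetic on supporting hardware) instead of the original float32 weights.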
As someone working in the field, I would say both sides are wrong. The people claiming that deep learning, or more recently deep reinforcement learning, will bring us the superintelligence we are supposed to fear have probably never tried to build these kinds of solutions for real use cases. They work only in very specific settings, and most of the time we are surprised they work at all. On the other side, claiming that winter is coming because current solutions to extremely hard problems like self-driving cars or natural language understanding make silly mistakes is equally extreme.

I lived through the end of the last AI winter, and I can tell you that research and applications did not stop using the techniques developed before it. What happened was that the media hype went away, which is sometimes just a sign of an industry growing up and maturing to the point where it becomes boring to the media. That will certainly happen with deep learning, and to some extent we are already at that stage. However, I predict that during this low-noise period, narrowly applied techniques in industries nobody talks about (e.g. manufacturing, agriculture, general business processes), often using not deep learning but classic ML, will transform those industries and the lives of the people working in them.
What's "AI Winter"?
There was a period of increased interest in machine learning / AI around the 1980s, and along with it came a lot of funding. When the reality of the research being worked on became clear (i.e. it wasn't as impressive as people had expected), a lot of funding was cut. This became known as the AI winter: https://en.wikipedia.org/wiki/AI_winter