Thank you for the article - it nicely sums up why I've been neither giddy about nor apocalyptically frightened of AI development to date, and likely won't be for the remainder of my life.
What's being called AI is a mashup of expert systems (outputs from coded decision trees or neural networks using well-defined inputs), pattern recognizers, and Bayesian filters. There are use cases where you can now obtain novel results much faster than humans could derive them (e.g. protein prediction, drug discovery, military and econometric modeling, etc.). And yet these mechanistically created hypotheses still require real-world validation.
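To make the "Bayesian filter" ingredient concrete, here's a toy sketch (my own illustration, not from the article): a tiny naive Bayes spam classifier with made-up word counts, the sort of well-understood machinery that gets lumped under the "AI" label.

```python
# Toy naive Bayes "Bayesian filter" with Laplace smoothing.
# All counts are invented for illustration.
def posterior_spam(words, spam_counts, ham_counts, p_spam=0.5):
    """Return P(spam | words) for a short message."""
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    vocab = set(spam_counts) | set(ham_counts)
    p_s = p_spam        # running P(spam) * P(words | spam)
    p_h = 1 - p_spam    # running P(ham)  * P(words | ham)
    for w in words:
        p_s *= (spam_counts.get(w, 0) + 1) / (spam_total + len(vocab))
        p_h *= (ham_counts.get(w, 0) + 1) / (ham_total + len(vocab))
    return p_s / (p_s + p_h)

# Hypothetical word counts from labeled training mail.
spam = {"free": 40, "winner": 25, "meeting": 2}
ham = {"free": 5, "winner": 1, "meeting": 30}
print(posterior_spam(["free", "winner"], spam, ham))  # ~0.97, flagged as spam
```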
There's a monetizable novelty factor in "dreaming machines" that can create novel syntheses of the entire existing corpus of human-derived arts, letters, and music. I'm not dismissing the fact that livelihoods are being disrupted, and there's historical precedent for the suffering caused by automation. [My brother's professional photography business was basically destroyed by cheap, high-quality digital imaging. It's not that he couldn't produce great pictures; it's that far fewer people were willing to pay when they could take acceptably mediocre ones themselves.]
But many of the most economically valuable replacements for human talent (autonomous driving/piloting, scientific research, or police/security/military functions, for instance) depend on the ability to gather and encode training data from the physical world. The AI must then issue justifiable decisions that meet output requirements more stringent than humans themselves could satisfy before we can trust the results.
From the article:

In this essay we assemble the best arguments that we have encountered for why transformative AI is hard to achieve. To avoid lengthening an already long piece, we often refer to the original sources instead of reiterating their arguments in depth. We are far from the first to suggest these points. Our contribution is to organize a well-researched, multidisciplinary set of ideas others first advanced into a single integrated case. Here is a brief outline of our argument:

1. The transformational potential of AI is constrained by its hardest problems.
2. Despite rapid progress in some AI subfields, major technical hurdles remain.
3. Even if technical AI progress continues, social and economic hurdles may limit its impact.
Two things I don't think were mentioned:
For subfields where quality matters more than quantity (particularly creative and academic work), being nearly as good as humans is nearly worthless. If AI can create media 90% as good as human-made media, people will still disproportionately consume the human-made media first. AI has to produce content better than humans can, which it seems we're still far from; or it at least needs to be good enough that people don't notice or care (which is what the Hollywood writers are worried about, but I'm not convinced GPT has enough creativity and coherence for even a generic movie).
People are cheap. Many assembly lines still use humans even though they could be replaced by robots, because the cost of building and maintaining the robots is too high while the cost of hiring underpaid factory workers is low enough. Moreover, most people actually like working to an extent; not in factories, but commonly in creative fields. So with UBI, or even as a side job, some people will work for cheap or even for free.
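A rough break-even sketch (all figures invented for illustration) makes the tradeoff concrete: a robot only wins when its amortized cost over its service life undercuts the wages it displaces.

```python
# Back-of-envelope robot-vs-worker cost comparison.
# robot_cost, annual_maintenance, annual_wage are assumed, illustrative numbers.
def robot_breaks_even(robot_cost, annual_maintenance, annual_wage, years):
    """True if the robot's lifetime cost undercuts the worker's wages."""
    robot_total = robot_cost + annual_maintenance * years
    worker_total = annual_wage * years
    return robot_total <= worker_total

# Assumed: $250k robot, $20k/yr upkeep, 10-year horizon.
print(robot_breaks_even(250_000, 20_000, 12_000, 10))  # False: the $12k/yr worker is cheaper
print(robot_breaks_even(250_000, 20_000, 50_000, 10))  # True: at $50k/yr wages, automate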
I think your first point is pretty true for hit-driven fields like entertainment, but there is often a long tail of tasks where good enough is good enough. For example, teaching isn't a hit-driven field, I don't think? At least, not day-to-day.
For the second point, whether people are cheap or not varies greatly by location and field. Rich countries tend to invest more in automation because locally, labor is expensive.
And even so, AI is cheaper. If we're talking Mechanical Turk versus AI, I think it's going to be tough for Mechanical Turk to compete on many tasks. I wonder how far my $20/month ChatGPT subscription would go on Mechanical Turk?
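As a back-of-envelope sketch (both prices are assumptions, just for scale):

```python
# Rough per-task cost: flat-rate chatbot subscription vs. paying per HIT.
# All figures are hypothetical; MTurk fees vary by task and exclude platform cut.
SUBSCRIPTION_PER_MONTH = 20.00   # assumed flat-rate chatbot plan
TASKS_PER_MONTH = 3_000          # assumed usage: ~100 short tasks/day
MTURK_FEE_PER_TASK = 0.05        # assumed per-HIT payment

ai_cost_per_task = SUBSCRIPTION_PER_MONTH / TASKS_PER_MONTH
print(f"AI:    ${ai_cost_per_task:.4f} per task")    # ~$0.0067
print(f"MTurk: ${MTURK_FEE_PER_TASK:.4f} per task")  # $0.0500
print(f"$20 buys ~{int(20 / MTURK_FEE_PER_TASK)} MTurk tasks vs {TASKS_PER_MONTH} AI tasks")
```

Under those assumptions the subscription is nearly an order of magnitude cheaper per task, before even counting turnaround time.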
Volunteers need to either find the work fun or be motivated some other way. Certainly a lot has been done by volunteers, though.