7 votes

Will AI make us overconfident?

3 comments

  1. Sodliddesu

    I mean, "will it?" It already has. Lawyers are already submitting briefs that they ran through "AI," CEOs are ready to replace everyone who's not them with a chatbot, you get the point.

    I think, in time, we'll become less overconfident as we come to understand its limits, but we're at or near peak overconfidence right now.

    5 votes
    1. skybrian

      Yes, and a similar thing happened during the dot-com bubble. The Internet was the Future and it was going to be glorious. Since then, the results have been less glorious and more mixed, but things did change quite a lot.

      I don’t know if we’re at a peak. Perhaps it’s an eternal September thing? A lot of people will learn to be more cautious by being burned by it, ideally in a low-stakes, educational environment.

      Like gambling, astrology, and conspiracy theories, overconfidence in AI advice might be a well-known hazard that doesn’t go away, because there are always people who fall for it.

      4 votes
  2. skybrian

    From the blog post:

    As in any fairytale, accepting magical assistance comes with risks. Chatbot advice has saved me several days on a project, but if you add up bugs and mistakes it has cost me at least a day too. And it’s hard to find bugs if you’re not the person who put them in! I won’t try to draw up a final balance sheet here, because a) as tools evolve, the balance will keep changing, and b) people have developed very strong priors on the topic of “AI hallucination” and I don’t expect to persuade readers to renounce them.

    Instead, let’s agree to disagree about the ratio of “really helping” to “encouraging overconfidence.” What I want to say in this short post is just that, when it comes to education, even encouraging overconfidence can have positive effects. A lot of learning takes place when we find an accessible entry point to an actually daunting project.

    As in the story of “Stone Soup,” the ease was mostly deceptive. In fact I needed to back up and learn statistics in order to understand the numbers R was going to give me, and that took a few years more than I expected. But it’s still good for everyone that the interactive structure of the internet has created accessible entry points to difficult problems. Even if the problem is in fact still a daunting monolith … now it has a tempting front door.

    Once we get past the silly-ad phase of adjusting to language models, the effect of this technology may actually be to encourage people to try doing more things for themselves. The risk is not necessarily that AI will make people passive; in fact, it could make some of us more ambitious. Encouraging ambition has upsides and downsides. But anyway, “automation” might not be exactly the right word for the process. I would lean instead toward stories about seven-league boots and animal sidekicks who provide unreliable advice.

    I expect there will be more automation, too? It’s easier to get started on automation with AI, but again, the ease of building a demo that sometimes works is deceptive.