9 votes

Can you distinguish Daniel Dennett from a computer?

3 comments

  1. nothis

    I want to emphasize: This is not a Turing test! Had experts been given an extended opportunity to interact with GPT-3, I have no doubt they would soon have realized that they were not interacting with the real Daniel Dennett.

    I think they were pretty fucking close, though, lol. We're absolutely in AI fantasy land with GPT-3 and it's still so early. This is the kind of technology where, 5 years later, you're on an entirely different level, and the trend line here is absolutely mind-boggling. I believe Turing had quite sterile and "controlled" environments in mind when describing the test setup. For example, the evaluation would be done by a third person reading a text printout of a conversation between another person and a computer. The evaluator thus would not have had an opportunity to "interact" himself, and the person conversing with the computer would not be expecting someone whose speech patterns, life's work, and mannerisms they could easily recognize.

    Absolutely fascinating. All these answers are so good.

    3 votes
  2. imperialismus
    (edited)

    This is really interesting. I tried taking their simplified quiz, where you have to guess Dennett's response to 10 questions, each hidden among 4 AI-generated responses. I scored 7/10, which I think is pretty good, but a few years ago it would have been absurd not to score at least 9/10 with ease. I have a passing familiarity with Dennett's philosophy, but I've only read a few shorter articles by him, and that was years ago; I've never read any of his full-length books.
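    For context, here's a quick back-of-the-envelope chance baseline, assuming each question offers five candidate answers (Dennett's real one plus the four GPT-3 fakes); a minimal sketch in Python:

    ```python
    from math import comb

    # Chance baseline for the quiz: 10 questions, each with 5 candidate
    # answers (Dennett's real answer hidden among 4 GPT-3 fakes), so a
    # blind guesser is right with probability 1/5 per question.
    n, p = 10, 1 / 5

    def prob_at_least(k):
        """P(score >= k) when each of n guesses succeeds with probability p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(f"expected score by chance: {n * p:.0f}/10")               # 2/10
    print(f"P(7/10 or better by guessing): {prob_at_least(7):.5f}")  # ~0.00086
    ```

    So blind guessing averages 2/10, and 7/10 or better comes up by luck well under 0.1% of the time; the score really does reflect picking up on something Dennett-ish.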

    I think it's interesting to look at the answers I got wrong, and why they tripped me up. The following section contains spoilers for the quiz, so don't read it if you want to take the quiz yourself!


    The first one I got wrong was the first question about evolution. I think I got tripped up by the AI name-dropping Galileo and Copernicus and their importance for our understanding of our place in the cosmos. Didn't expect the AI to be able to so easily conjure up a relevant analogy. I'm happy that the correct answer was my second choice, though.

    The second answer I got wrong was the one about suffering in chimps and dogs. I picked the answer that included a long story about a personal encounter with a sad injured chimpanzee, even though I felt like there was something off about it, simply because I underestimated the AI's ability to make up a story out of whole cloth. Again, my second choice was the correct one. Another case of me trying and failing to be clever.

    The third one was the question about Jerry Fodor's work. I simply am not familiar with his work at all, so I had no idea what Dennett's opinions about it would be, which meant that I had to guess simply based on the linguistic cadence of the responses rather than any relevant knowledge about philosophy.


    Spoilers end here.

    I'm happy that the blog post acknowledges how limited this experiment is. Interactive conversation tends to expose the flaws of these ultra-sophisticated but still kind of dumb AIs quite quickly. I remember someone asking GPT-3 how many limbs a spider has, and so on; it was doing fine until it was asked how many fingers (or something) the sun has, and it happily concocted an answer that demonstrated a lack of basic understanding of the physical world. Still, I'm amazed at how far we've come.

    I do suspect that there are limits to the current approach. It seems kind of random how much common-sense "knowledge" an AI acquires. It's utterly alien to imagine that an entity could simultaneously "understand", or at least convincingly fake understanding, the impact of specific scientific discoveries on human worldviews, and yet fail to grasp the difference between a mammal and a huge ball of burning gas and plasma in space.

    3 votes
  3. patience_limited

    Dammit, there's my preternatural serendipity again. On the commute to work this morning, I was idly thinking my way through a first-draft short story about teaching an AI to emulate oneself. There are all the opportunities for humorous and horrifying divergences from the real-life model... but are they really divergent, or just possibilities the original model hasn't manifested yet?

    I'm not a Dennett expert, but I'm familiar enough with his canon to know he can be self-contradictory or just plain whimsical and self-mocking. GPT-3 gets this, but it's hard to say that a set of randomness parameters wouldn't generate the same content.

    2 votes