20 votes

Google scrambles to manually remove weird AI answers in search

10 comments

  1. [8]
    JCAPER
    Link

    Google seems to be facing the same problem that I experienced with Perplexity, although there the blunders were rarer. Count-on-one-hand rare.

    Basically the quality of the AI’s output is highly dependent on the quality of the search results, but then there’s an additional step where the AI itself has to understand what it’s reading and separate the wheat from the chaff. That’s the problem with Google’s AI.

    Even the best AIs, like Claude Opus and GPT-4, have some trouble understanding things like sarcasm and lies, but they would never suggest things like the ones that I’ll paste below. Google’s AI seems incapable of doing basic critical thinking before outputting its final result.

    Here are some blunders I’ve seen:

    • for the query “I’m feeling depressed”, it said that a Reddit user suggested jumping off the Golden Gate Bridge

    • for the query “is murder cool”, it said “Murder is an unlawful homicide with malice aforethought and is considered wrong or evil by its nature. However, some say that murder can be thrilling and titillating, and that the authenticity of murder and trauma can be appealing.”

    • suggested adding non-toxic glue to pizza sauce, based on an 11-year-old Reddit joke

    • advised that it's safe to leave dogs in hot cars

    • recommended staring at the sun for 5-30 minutes for health benefits, falsely attributing this to WebMD

    • proposed eating rocks daily, citing a satirical article from The Onion as if it were factual

    • when asked whether a given game or series has gay characters, it will often say yes, sometimes naming characters that don’t exist, its source probably being fan fiction from Wattpad
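
    To make that two-step dependence concrete, here’s a minimal, self-contained Python sketch (toy data and toy functions of my own, nothing from Google’s or Perplexity’s actual pipelines): the final answer can only be as good as what the search step retrieves, and it only gets better if a filtering step actually separates reliable sources from jokes and satire before the summary is written.

    ```python
    # Toy sketch only: stub data stands in for web search, and a boolean flag
    # stands in for the judgment call the model has to make about each source.
    from dataclasses import dataclass

    @dataclass
    class SearchResult:
        source: str              # e.g. "reddit.com", "seriouseats.com"
        text: str
        looks_like_a_joke: bool  # in reality this judgment is the hard part

    def retrieve(query: str) -> list[SearchResult]:
        # Stand-in for the web search step; answer quality starts (and can end) here.
        return [
            SearchResult("reddit.com", "Add 1/8 cup of non-toxic glue to the sauce.", True),
            SearchResult("seriouseats.com", "Simmer the sauce longer so it thickens.", False),
        ]

    def filter_results(results: list[SearchResult]) -> list[SearchResult]:
        # The "separate the wheat from the chaff" step. Skip it, or get it wrong,
        # and whatever garbage was retrieved flows straight into the answer.
        return [r for r in results if not r.looks_like_a_joke]

    def summarize(results: list[SearchResult]) -> str:
        # Stand-in for the generation step: it can only be as good as its inputs.
        return " ".join(r.text for r in results) or "No trustworthy sources found."

    query = "how do I keep cheese from sliding off my pizza"
    print(summarize(retrieve(query)))                  # unfiltered: the glue advice gets in
    print(summarize(filter_results(retrieve(query))))  # filtered: only the sane advice survives
    ```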

    28 votes
    1. [6]
      Pavouk106
      Link Parent

      And we want to call that AI? Instead of artificial intelligence, this should be artificial dumbness.

      This perfectly shows that there is nothing intelligent behind the answers; it is just a big algorithmic language model. It doesn't think, it just sorts and spits out output based on some "knowledge" (it doesn't "know", it just has some facts, whether true or not, at its disposal).

      Don't get me wrong, I'm not anti-AI. But in its current form, it's not the AI that I imagine. Once it is sentient and thinks by itself, it would be AI. Think VIKI from the I, Robot movie. Or Skynet from Terminator. Today's AI is just a really big machine that was handed data and acts on it to answer your query.

      12 votes
      1. json
        Link Parent

        Artificial Internet, since it's a generative repost machine.

        11 votes
      2. raze2012
        Link Parent

        It's intelligence, but intelligence can be wrong. I've had my share of sarcasm I took seriously. I think the big issue is that you don't seem to be able to properly "correct" an AI in the sense you correct a human. It also has a habit of lying (hallucinating) in answers where it doesn't have a properly vetted one. These definitely harm the credibility of the tech as of now.

        As for "AI", there's another level above AI that companies want to aspire to: Artificial General Intelligence (AGI). Basically a form of AI that approaches sapience in its ability to process information, learn, continue learning of its own volition, and even postulate and become opinionated.

        We're not remotely close to this with current technology, but tech sure does want to sell this to other companies as if it can fully replace human labor now. But by the time AGI is reached, we'll have much deeper dilemmas to ponder than copyright.

        6 votes
      3. JCAPER
        Link Parent

        Gemini is notably poor compared to its competitors (Claude Opus and GPT-4o). The only thing going for it is that it has the biggest context window, but it falls short in reasoning.

        I use Perplexity with Opus and GPT-4o to process the search results, and they’re far more competent than what I’ve seen from Google’s AI. They’re not perfect, mind you, but good enough that they changed how I do searches. For quick summaries or news updates they’re perfect, with no hallucinations.

        It doesn’t substitute for proper deep dives, and for more serious things it’s better to tweak the search results (so you don’t get random unsubstantiated comments from Reddit, for example), but it works well enough.
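
        As a rough illustration of what I mean by tweaking the search results (my own toy helper, not Perplexity’s actual settings or interface), you can constrain the query itself with the standard site: and -site: operators so that unvetted sources never reach the summarization step in the first place:

        ```python
        # Toy helper: compose a web-search query using standard site: / -site: operators.
        # The domains below are just examples of the kind of filtering I mean.
        from typing import Iterable, Optional

        def build_query(question: str,
                        exclude_sites: Iterable[str] = (),
                        require_site: Optional[str] = None) -> str:
            parts = [question]
            parts += [f"-site:{site}" for site in exclude_sites]
            if require_site:
                parts.append(f"site:{require_site}")
            return " ".join(parts)

        # Casual question: the defaults are fine.
        print(build_query("best pizza dough hydration"))

        # More serious question: drop Reddit chatter and prefer a vetted source.
        print(build_query("is it safe to leave a dog in a hot car",
                          exclude_sites=["reddit.com"],
                          require_site="avma.org"))
        ```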

        5 votes
      4. zipf_slaw
        Link Parent

        And we want to call that AI?

        Seems apt to me: it's not "an intelligence that is artificial", rather it's that the intelligence itself is artificial.

        4 votes
      5. JCPhoenix
        (edited )
        Link Parent

        There was a sci-fi book I once read (or started reading anyway) that referred to AI as "Artificial Idiots." Seems apt here. Don't remember the name of the book.

        If I'm remembering correctly, the society in the book eventually got to human-level artificial intelligence, but they ended up calling it something else entirely because of the association of "AI" and "artificial idiot." Maybe we'll end up taking a similar path with the term.

        3 votes
    2. DefinitelyNotAFae
      Link Parent

      The Golden Gate Bridge one was apparently faked, and reposted by somebody unaware when it went viral. So at least it didn't do that one. Probably.

      5 votes
  2. Thrabalen
    Link

    Okay, someone please reassure me that I'm not the only one who thought "Yankovich?" for just a moment thanks to the wording and font.

    Also, I'm beginning to suspect this whole AI thing isn't fully baked. Just a hunch.

    10 votes
  3. skybrian
    (edited )
    Link

    Bad results from a Google search are nothing new; it’s never been true that you get sensible results for every query. It’s still worth doing the searches because sometimes you find what you’re looking for, for whatever questions you actually have.

    The difference with AI is that the bad results are entertaining, often funny or alarming, so people reshare them and they go viral. Furthermore, because they’re funny, people are searching for weird results. Someone searching on “how many rocks should I eat” isn’t looking for an answer, they’re hoping that the result will be funny.

    The more usual behavior is that a bad query returns irrelevant search results or spam, which are boring and easy to ignore. You don’t share those with the world. You refine your query.

    Compare with Google bombing. In that case, it was pretty clear that it wasn’t a normal response, but it was funny. With AI, I think we’re still learning what normal behavior is. You can’t really tell from social media how hard it is to find a funny or alarming answer when you’re asking a normal question, in part because what counts as a normal question is pretty nebulous. People use Google very differently.

    I suppose one response would be to lean into it and make a search engine for wrong answers only. :)

    Slightly more seriously, maybe Google needs a way to say "yes, we know this result is wrong, but we're leaving it in because it's funny."

    2 votes