21 votes

Google removes some of its AI summaries after users’ health put at risk

14 comments

  1. MimicSquid
    Link
    Blocking the specific search that might cause legal liability if they left it up after being informed, and ignoring all the other searches, shows exactly how much they actually care.

    24 votes
  2. [7]
    crialpaca
    Link
    I hope I didn't trust the AI overview, but I have literally searched something along these lines (liver levels) in the last six months. Ugh. I hate what Google is doing to its own product.

    9 votes
    1. [3]
      Diff
      Link Parent
      And it's poisoned the entire well of the internet. There have always been SEO-optimized nonsense sites clogging search results, but nowadays it's nigh impossible to find any good sources of information that aren't from one of the mainstream social media sites.

      13 votes
      1. [2]
        skybrian
        Link Parent
        It seems like for general health information, there are a few good sites like the Mayo Clinic, and they tend to rank pretty high? It only goes so deep, though.

        3 votes
        1. Diff
          Link Parent
          Health is one narrow well that isn't completely poisoned. There are a few of those left, programming is another, but they're fairly narrow and still very clogged.

          11 votes
    2. [3]
      redwall_hp
      Link Parent
      I've found that the AI summary is more often than not the opposite of what the top-ranked page says. And if something isn't phrased as a question (nothing I search is), it often seems to infer a positively-phrased question and confirm it.

      It also likes to screw up attempts to convert units, which used to go more or less verbatim into the unit converter widget in the search results but now tend to trigger some LLM garbage that is wildly incorrect.

      9 votes
      1. crialpaca
        Link Parent
        I've had it contradict itself in the same sentence, which is super fun.

        10 votes
      2. Diff
        Link Parent
        In my work vent below, the internal inconsistency was actually due to exactly this: a bad unit conversion. The math was (mostly) decent, but it generated a final answer at the start of the overview, before it ever generated the math, and that answer was clearly wildly incorrect once units were converted to match the work it showed below.

        3 votes
  3. [6]
    skybrian
    Link
    From the article:

    Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being put at risk of harm by false and misleading information.

    [...]

    In one case that experts described as “dangerous” and “alarming”, Google provided bogus information about crucial liver function tests that could leave people with serious liver disease wrongly thinking they were healthy.

    [...]

    After the investigation, the company has removed AI Overviews for the search terms “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.

    [...]

    The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range”, prompted AI Overviews. That was a big worry, Hebditch said.

    [...]

    Google, which has a 91% share of the global search engine market, said it was reviewing the new examples provided to it by the Guardian.

    An optimistic take on it might be that they're cooperating to improve search results. Could be worse, I guess.

    5 votes
    1. [5]
      Diff
      Link Parent
      There's just too much potential to go wrong. It's whack-a-mole with one hammer and a million parallel moles to hit. Someone asking how tight to tighten their car's lug nuts, asking how long food stays good in the fridge, asking whether two medications interact. There are so many ways this can go wrong and cause serious harm. When it's human-authored, a human can be held accountable. These systems keep proving that they cannot be held accountable, while also proving they have massive potential for harm.

      AI usage at work vent

      And people are relying on them more and more, and they trust them despite continuing evidence that they shouldn't. Today at work there were three separate instances where my boss just trusted the AI overview answer: one where the AI's answer was internally inconsistent, one where it was just plain disconnected from reality, and one where it gave incorrect information on a niche USPS policy topic that would have cost us a good deal of money and time to get wrong on an order that's been cooking for weeks.

      It could be worse, that's certainly true. But there's just no way to do this right. It's a rotten idea from the start.

      13 votes
      1. [4]
        skybrian
        Link Parent
        It's a stopgap measure, but they do have a way to respond when the media finds some really bad ones, at least until their AI folks figure out a more general solution. Better than being antagonistic like X.

        1. MimicSquid
          Link Parent
          Them outsourcing basic medical Q&A to the media is not an acceptable baseline. Just because X is actively antagonistic doesn't make this better. It's like X is actively putting poison into people's glasses of water, while Google just has a series of factories poisoning the groundwater. When the issue is pointed out, Google will address the specific wastewater pipe out of thousands and leave the others. In both cases people are getting poisoned. That Google looks more considerate on the surface doesn't change that people are being poisoned.

          16 votes
        2. [2]
          mantrid
          Link Parent
          I think an even better way would be to detect any medical questions, then give a standard response about how it's important to consult reliable sources, with a link to such a source. They already do that below YouTube videos about climate change and vaccines.
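
          A toy sketch of the gating I have in mind (nothing here is Google's actual pipeline; the term list, the linked sources, and the function names are all made up for illustration):

              # Hypothetical sketch: gate AI overviews behind a crude
              # medical-topic check and return a fixed pointer to a
              # reliable source instead of a generated summary.
              MEDICAL_TERMS = {"liver", "lft", "blood test", "dosage", "symptom", "medication"}

              STANDARD_RESPONSE = (
                  "For medical questions, consult a reliable source such as "
                  "https://www.nhs.uk/ or https://www.mayoclinic.org/."
              )

              def overview_for(query, generate_summary):
                  # Medical-looking queries get the standard response;
                  # everything else falls through to the normal overview.
                  q = query.lower()
                  if any(term in q for term in MEDICAL_TERMS):
                      return STANDARD_RESPONSE
                  return generate_summary(query)

              # "lft" trips the gate, so no AI overview is generated:
              print(overview_for("lft reference range", lambda q: "AI overview..."))

          A real system would want a proper classifier rather than keyword matching, since the Guardian showed how easily slight rephrasings slip past exact-match blocks.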

          6 votes
          1. skybrian
            Link Parent
            Yeah, why do they think they're ready to implement medical summaries?

            I think they did some work many years ago to make sure reputable sources rank highly for medical searches. Something similar needs to happen here.

            4 votes