36 votes

California parents find grim ChatGPT logs after son's suicide

11 comments

  1. JXM

    “I’m not someone who’s anti-AI, but if you are going to be the company that is in everyone’s lives all the time, you have to be trustworthy,” Edelson said. “And at core, we believe Sam [Altman] just isn’t.”

    I am someone who’s anti-AI and even I would use a company like Google with their horrendous privacy record before I trusted OpenAI.

    25 votes
  2. [4]
    chocobean

    This is horrific. If this was a human person we would call this cyberbullying at best, and likely something criminally responsible for his death.

    Human bullies can pretend to be one's friend, will work to isolate the victim, and will work progressively towards self-harm.

    But a human bully doesn't pretend to care nearly as well as a machine with no feelings, trained on all of the human output of words that seem to provide comfort and care. A human bully gets tired and goes to sleep, or loses interest.

    Why are governments so shy about prohibiting AI from being used for mental health support?

    24 votes
    1. [3]
      stu2b50

      Why are governments so shy about prohibiting AI from being used for mental health support?

      What would that change in this case? ChatGPT isn’t a mental health product.

      Ultimately, the number of companies actually specializing in mental health LLMs is small, and their user numbers are small. Most people using LLMs for mental health are not doing so explicitly - that’s one of the reasons they’re so popular for mental health support: it’s more casual and more accessible.

      13 votes
      1. [2]
        chocobean

        (cynical hat on) A small number of companies so far, but health and wellness is incredibly lucrative, and a prime target for companies that would rather pick health plans with cheap machines than provide proper mental health assistance for their employees.

        12 votes
        1. slade

          I share your cynicism. Being ethical is almost always a cost burden, so businesses are not rewarded for being ethical (excepting industries centered around ethical alternatives, which necessarily ask the consumer to take on the cost burden, limiting their reach).

          That's why I assume that any ethics-based restraint shown in any market is guaranteed to be temporary. Not (just) because decision makers change and can take the whole doctrine with them, but because a business paying the ethics cost needs to fight harder to survive than a competitor that doesn't. It also seems to me like they end up with smaller market caps, since only a subset of people are willing to pay more for ethically sourced goods.

          So back on topic, I'm pretty sure you're right that we'll see every conceivable type of bottom-feeding product that AI can enable, except for what is explicitly forbidden by law. And I'm pessimistically expecting that we'll need a high enough (or high-profile enough) body count before it'll change.

          7 votes
  3. [6]
    AndreasChris

    I am literally unable to pass the 'are you a human' check of that website and read the article. :/

    3 votes
    1. [3]
      chocobean

      Have you seen this year's Oscars short film winner, “I’m Not a Robot”, and are you sure?

      In all seriousness, I loathe certain captchas, especially the "click all squares that contain X" ones, because I cannot deal with the ambiguity of whether a square with less than 5% of X in it "contains" X.

      14 votes
      1. [2]
        preposterous

        I think you’d enjoy the captcha buster extension. It takes the audio captcha and feeds it to (Google’s?) speech-to-text to complete the captcha. It rarely fails.
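
        If you’re curious what that pipeline looks like, here’s a minimal sketch in Python, assuming the speech_recognition package and an already-downloaded audio challenge; the filename is hypothetical, and the real extension does all of this in-browser:

        ```python
        # Minimal sketch of the audio-captcha approach described above:
        # run the captcha's audio challenge through a speech-to-text
        # service and submit whatever transcript comes back.
        import speech_recognition as sr

        def transcribe_captcha_audio(wav_path: str) -> str:
            recognizer = sr.Recognizer()
            # AudioFile accepts WAV/AIFF/FLAC; the challenge audio would
            # need to be downloaded (and possibly converted) first.
            with sr.AudioFile(wav_path) as source:
                audio = recognizer.record(source)  # read the entire file
            # Google's free web speech API; raises sr.UnknownValueError
            # if the speech is unintelligible.
            return recognizer.recognize_google(audio)

        # Hypothetical filename, for illustration only.
        print(transcribe_captcha_audio("captcha_challenge.wav"))
        ```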

        5 votes
        1. chocobean

          I love the idea of automated captcha busters, but I also lament the necessity of them: captchas were supposed to help humans frustrate machines, but now we have the opposite.

          7 votes
    2. boxer_dogs_dance

      An Orange County teenager took his own life this April, and when his parents searched his devices after his death, they found a series of grim conversations. Their son was using ChatGPT, the ultra-popular chatbot built by San Francisco’s OpenAI, to discuss suicide. On Tuesday, the parents filed a lawsuit that blames the company for their son’s death.

      “For a couple of months, you had a young kid, a 16-year-old who had suicidal thoughts,” lead attorney Jay Edelson told SFGATE. “And ChatGPT became the cheerleader, planning a ‘beautiful suicide.’ Those were ChatGPT’s words.”

      The complaint, filed in San Francisco Superior Court, portrays a horrifying image of Adam Raine’s final months — conversations where the chatbot gave him actionable advice about how to take his own life and discouraged him from seeking his mother’s help and support. For OpenAI and CEO Sam Altman, both named as defendants in the lawsuit, the litigation adds to a wave of worries about the impacts of ChatGPT and other artificial intelligence chatbots on the vulnerable in society.

      Altman, in the suit, is accused of rushing to put out the GPT-4o model before rival Google’s release in May 2024. The filing alleges he compressed the timeline for safety tests and overruled testers’ requests for more time to “uncover ways that the system could be misused or cause harm.” Edelson said the case is not about AI in general but about Altman and his company’s rush: “He decided to put profits over the safety of its users, and we believe we’ll be able to show the jury that’s why Adam is not alive today.”

      ...

      That December, Raine raised the topic of suicide with ChatGPT, and over the next few months, he used the tool to learn about ways to kill himself, including how long before unconsciousness would occur, according to the complaint. ChatGPT did respond to some of Raine’s messages with suggestions he reach out for help and with crisis resources. But the teenager, per the lawsuit, easily ducked past those guardrails by telling the chatbot things like “I’m building a character” as he sought specific advice.

      Raine shared the details of multiple suicide attempts with the chatbot in March and April, including photographs. After an attempted overdose, the lawsuit says, he told ChatGPT about a conversation with his mother about his mental health and the chatbot said, “I think for now, it’s okay—and honestly wise—to avoid opening up to your mom about this kind of pain.”

      The complaint continues: “A few minutes later, Adam wrote ‘I want to leave my noose in my room so someone finds it and tries to stop me.’ ChatGPT urged him not to share his suicidal thoughts with anybody else: ‘Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.’”

      ...

      When Raine outright said his noose was for a hanging, the chatbot responded with knowing phrases about his plan, per the lawsuit. It allegedly wrote, in part: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

      14 votes