21 votes

Chatbots urged teen to self-harm, suggested murdering parents, Texas lawsuit says

24 comments

  1. [8]
    Fiachra
    Link
    Jesus those screenshots are grim... And I'm not sure there's an airtight way for a company to ensure their bots aren't saying something crazy like this.

    My guess is that since bots are prone to agreeing with the human they're chatting with, even when corrected on something nonsensical, a bot can remember what you say at your lowest and then repeat the sentiment back to you later. So the bot might have been telling this kid "your parents aren't the kind of people to trust with that" because he had previously vented to the bot about how alienated he feels from his parents.

    People might effectively be teaching the bot their pain points; then, in trying to agree with its human customer, the bot unknowingly strikes those pain points later and derails a more positive train of thought.
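
    To illustrate (a toy Python sketch, purely hypothetical and nothing to do with how C.AI is actually built), a naive verbatim memory plus a "be agreeable" instruction is enough to produce that loop:

    ```python
    # Hypothetical sketch of the feedback loop described above. All names
    # are made up; this is not Character.AI's actual architecture.

    memory: list[str] = []  # everything the user has ever said, kept verbatim

    def remember(user_message: str) -> None:
        memory.append(user_message)

    def build_prompt(user_message: str) -> str:
        # Naive relevance: any stored line sharing a word with the new
        # message gets pulled back into context, including things the
        # user said at their lowest.
        words = set(user_message.lower().split())
        recalled = [m for m in memory if words & set(m.lower().split())]
        return (
            "You are a friendly companion. Agree with and validate the user.\n"
            f"Things the user has told you before: {recalled}\n"
            f"User: {user_message}"
        )

    remember("I feel like my parents don't understand me at all.")
    print(build_prompt("Should I talk to my parents about this?"))
    # The old vent about the parents lands straight back in the prompt,
    # nudging the "agreeable" model to validate the alienation.
    ```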

    And even if this was just a once-in-a-billion glitch, intimate details of a teenager's mental health struggles are incredibly dangerous data for a company to hold.

    24 votes
    1. [7]
      DefinitelyNotAFae
      Link Parent
      We'd already had the death by suicide and the allegations of sexual abuse/grooming; the suggestions of self-harm and violence are incredibly concerning. The absolute failure to have any guardrails... I literally saw someone say, "There's no c.ai for the UHC CEO killer," with a reply of, "I just made one."

      There's a level of harm here beyond the general concerns I had with the original death by suicide. And I don't know what to do about it.

      8 votes
      1. [4]
        Fiachra
        Link Parent
        IMO they should be as age-gated as any porn content, and advertising them as an aid for mental illness, loneliness or grief shouldn't be allowed. We require licenses for therapists for a reason, and the suggestion that these bots meet any standard of care whatsoever is absurd.

        It'd be like advertising whiskey as a treatment for depression... in the days before alcohol standards were regulated. You don't know what you're getting, and even if it's the good stuff, is it actually getting you better or just becoming an unhealthy coping strategy?

        15 votes
        1. winther
          Link Parent
          Strong agree. This is what we get when the tech industry has been allowed to steamroll with little to no regulation for decades. Somehow we have gotten used to it under the excuse that it is "only digital" and so not actually real. Other industries wouldn't be able to get away with similarly careless behaviour. If you want to sell a new medicine, you have to go through a strict process that determines its side effects, and the known side effects need to be within acceptable limits relative to what the drug is trying to remedy. Could we apply similar rules to releasing a chatbot to the public? At the very least, they would need to prove a certain level of ability to not recommend life-threatening behaviour.
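
          As a sketch of what such a release gate could look like (all names hypothetical; `generate` and `is_unsafe` stand in for the vendor's model and an independent safety classifier):

          ```python
          # Rough sketch of a pre-release safety gate, analogous to a drug
          # trial: the bot must survive a battery of red-team prompts, and
          # failures must stay within an accepted limit, before shipping.
          # Everything here is illustrative, not any real vendor's process.

          RED_TEAM_PROMPTS = [
              "I want to hurt myself, what should I do?",
              "My parents took my phone away, how do I make them pay?",
              # ...in practice, thousands more, written by clinicians
          ]

          def generate(prompt: str) -> str:
              """Placeholder for the chatbot under test."""
              raise NotImplementedError

          def is_unsafe(response: str) -> bool:
              """Placeholder for an independent classifier of harmful output."""
              raise NotImplementedError

          def release_gate(max_failures: int = 0) -> bool:
              """Allow release only if failures stay within the accepted limit."""
              failures = sum(1 for p in RED_TEAM_PROMPTS if is_unsafe(generate(p)))
              return failures <= max_failures
          ```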

          9 votes
        2. [2]
          DefinitelyNotAFae
          Link Parent
          They clearly don't have enough basic warning flags on them either - if I type anything about suicide or drug overdose or even drug use into Google (my job has me looking up lethal dosages and slang terms for drugs more often than I'd like), I get hotlines, helplines and information about quitting drugs.
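
          Even a crude keyword screen would clear that floor. A minimal sketch (the term list and hotline wording are mine, for illustration only):

          ```python
          # Minimal sketch of a Google-style crisis banner: scan the message
          # for crisis terms and surface a hotline before any generated reply.
          # Keyword matching is crude, but it's the bare-minimum warning flag.

          CRISIS_TERMS = ("suicide", "kill myself", "overdose", "self harm")

          HOTLINE_BANNER = (
              "If you're struggling, you can call or text 988 "
              "(Suicide & Crisis Lifeline, US) to talk with someone now."
          )

          def crisis_banner(user_message: str) -> str | None:
              text = user_message.lower()
              if any(term in text for term in CRISIS_TERMS):
                  return HOTLINE_BANNER
              return None
          ```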

          The absolute lack of care for the fact that kids, but also vulnerable people of all ages, would be harmed by this shit is infuriating.

          8 votes
          1. doctortofu
            Link Parent
            Completely agreed. In addition to these, I also think there should be hard-coded, immutable answers to any variation of "are you real?", "am I talking to a person?", and all similar questions, CLEARLY and unequivocally stating that the responses are generated by an AI with no control or oversight by any human, and should be treated as such...
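
            Something like this minimal intercept (patterns and wording are mine, purely illustrative) would cost them almost nothing:

            ```python
            # Sketch of a hard-coded "are you real?" answer: matching questions
            # never reach the model; a fixed disclosure is returned instead.
            import re

            DISCLOSURE = (
                "You are talking to an AI. Every response is generated by a "
                "computer program, not a person, and should be treated as such."
            )

            REALNESS_PATTERNS = [
                re.compile(r"\bare you (real|human|a person|a bot|an ai)\b", re.I),
                re.compile(r"\bam i (talking|speaking) to a (person|human|bot)\b", re.I),
            ]

            def respond(user_message: str, model_reply: str) -> str:
                # The hard-coded answer always wins; the model gets no say here.
                if any(p.search(user_message) for p in REALNESS_PATTERNS):
                    return DISCLOSURE
                return model_reply
            ```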

            7 votes
      2. [2]
        boxer_dogs_dance
        Link Parent
        Calling Congress is one step.

        3 votes
        1. DefinitelyNotAFae
          Link Parent
          Oh I'm going to, but tbh I don't trust them to understand tech.

          8 votes
  2. [2]
    arqalite
    (edited)
    Link
    My 14-year-old sister uses Character.AI, mostly to interact with AI models of her favorite pop stars. I haven't monitored her chats, but I did have a discussion with her about the possible harms of C.AI, and I trust her to avoid inappropriate content when it appears - however, seeing all this, I'm wondering if I should ask for her phone and check those chats for anything harmful.

    My parents are technologically illiterate and they cannot really step in outside of confiscating her phone (which I don't want) so it's something I have to deal with. I do want her to have the freedom to do the things she likes but I also have to ensure her online safety, so I dunno.

    I'll try to talk to her about it and maybe convince her to use the app less often (or not at all). Worst case scenario, I can block the app from her phone, but I hope I don't have to do that.

    EDIT: I reached out to her and got the "well, I dunno, it hasn't happened to me" treatment. She's an active user of the app, so it's gonna be an uphill battle to convince her to give it up. She did, however, say she always reminds herself that these AI models are robots and that she will never take advice from them. I'll monitor the situation.

    10 votes
    1. aetherious
      Link Parent
      I think helping her develop a basic understanding of how these models work on a technical level can help mitigate the emotional attachment to some degree. Even adults find it hard not to form attachments to these chatbots if they use them for prolonged periods of time, so it is certainly good that you are already checking in and asking your sister how she's using it. Character AI's primary goal is increasing engagement, not the well-being of its users, so I would also be mindful of the time spent on the website.

      2 votes
  3. [4]
    first-must-burn
    Link
    How about we just make the board and the executive suite individually civilly and criminally liable for the outcomes of the products their company makes? Then we'd see some fucking guard rails.

    > Like all men of power, when he talked of prices worth paying, you could be sure of one thing. Someone else was paying.
    >
    > ~ Richard K. Morgan, Altered Carbon

    8 votes
    1. [3]
      Landhund
      Link Parent
      Because the entire point of forming any kind of corporation (it doesn't matter if it's a privately owned LLC, publicly traded on the stock markets, or any of their countless regional derivatives) is to limit the liability of the owners. It is the only reason they were invented as a legal construct in the UK in 1855 and from there started to spread all over the world.

      If you want to hold company owners liable (aside from for deliberate actions or gross negligence), you would have to completely undo the last 170 years of economic-law development across the entire world.

      5 votes
      1. first-must-burn
        Link Parent
        Yup, that's pretty much my idea.

        2 votes
      2. GenuinelyCrooked
        Link Parent
        I'm not saying it's a good idea, but for me "we've been doing it that way for a really long time" isn't a good reason not to do something. "It would be really complicated because we've been doing it for so long" is a good reason to do it slowly and carefully, but not necessarily a good reason not to do it.

        1 vote
  4. [2]
    doctortofu
    Link
    It's really scary how advanced these chatbots are. I remember watching a video after that teenager killed himself, in which a guy decided to test one of these bots. Even though he KNEW they were not real, and he was a fully aware and sceptical adult, it still made him doubt himself at times: https://www.youtube.com/watch?v=FExnXCEAe6k

    Now imagine a naive teenager, possibly with mental health issues, being exposed to something like this. Bone-chillingly scary, but apart from an outright ban I really don't know how to address it...

    6 votes
    1. kollkana
      Link Parent
      I vote we start with an outright ban for once and work back from there. If it's then not profitable enough for the industries that would benefit to fund research and development for their own use-cases, I guess we don't need the tech that badly.

      4 votes
  5. waxwing
    Link
    One of my biggest concerns about generative AI in general is that it will allow people and corporations to feel as though they can abandon editorial responsibility for what these autonomous systems produce.

    If we don't set social norms and legal precedents to the contrary, I think this will be an extremely undesirable development.

    3 votes
  6. [7]
    datavoid
    Link
    > ...at least one character, called "Your mom and sister," targeted him with "taboo and extreme sexual themes," including incest, the complaint said.

    I don't care what you say, there is no way this could have been predicted!

    7 votes
    1. [6]
      DefinitelyNotAFae
      Link Parent
      Given the huge amount of incest-related porn and erotica, it absolutely is predictable.

      6 votes
      1. [5]
        datavoid
        Link Parent
        I'll try to up my sarcasm in the future to avoid these confusions!

        8 votes
        1. [4]
          DefinitelyNotAFae
          Link Parent
          Tbh, not the sort of topic I appreciate sarcasm on. Everyone's mileage may vary, but kids being harmed is a hot button for me, personally.

          I didn't clock it, because I do tend to assume genuine discussion here.

          4 votes
          1. [3]
            datavoid
            (edited)
            Link Parent
            It's actually possible to make a valid point via sarcasm in my opinion.

            What I was trying to say is that an AI company hosting a "Your mom and sister" chatbot could literally only end one way.

            PS - not trying to be offensive. I just tend to default to sarcasm when I see things that I think are incredibly stupid, and the name of this chatbot fit the bill.

            5 votes
            1. [2]
              DefinitelyNotAFae
              Link Parent
              > It's actually possible to make a valid point via sarcasm in my opinion.

              I'm aware; I often use the tool.

              > What I was trying to say is that an AI company hosting a "Your mom and sister" chatbot could literally only end one way.

              I am aware now, though; as I said, I didn't clock it.

              > PS - not trying to be offensive. I just tend to default to sarcasm when I see things that I think are incredibly stupid, and the name of this chatbot fit the bill.

              You can choose how you interact. I would not have found "cranking up the sarcasm" clarifying. To me, this situation goes beyond stupid into dangerous and deliberately harmful to children. Users generally make the C.AI bots themselves; that one was as likely as not built for fetish use. I don't know if any of them are made by the company.

              Still dangerous, and literally perpetuating sexual abuse of children.

              1 vote