19 votes

Character.AI faces US lawsuit after teen's suicide

13 comments

  1. [3]
    Bet
    (edited)
    Link

    On Character.AI, users can create their own chatbots and give them directions about how they should act. They can also select from a vast array of user-created chatbots that mimic celebrities like Elon Musk, historical figures like William Shakespeare or unlicensed versions of fictional characters. (Character.AI told me that the “Daenerys Targaryen” bot Sewell used was created by a user, without permission from HBO or other rights holders, and that it removes bots that violate copyright laws when they’re reported.)

    There is something darkly humorous about the inclusion of this information in the midst of this particular subject matter. Emphasis mine.

    ETA: Oh, and this:

    Like many A.I. researchers these days, Mr. Shazeer says his ultimate vision is to build artificial general intelligence — a computer program capable of doing anything the human brain can — and he said in the conference interview that he viewed lifelike A.I. companions as “a cool first use case for A.G.I.”

    Moving quickly was important, he added, because “there are billions of lonely people out there” who could be helped by having an A.I. companion.

    “I want to push this technology ahead fast because it’s ready for an explosion right now, not in five years, when we solve all the problems,” he said.

    Those last few lines there are very interesting.

    10 votes
    1. ShroudedScribe
      Link Parent

      AI seems to be re-sparking the "fuck it, ship it" movement. Testing is difficult, so let's test in Prod!

      11 votes
    2. raze2012
      Link Parent

      “I want to push this technology ahead fast because it’s ready for an explosion right now, not in five years, when we solve all the problems,” he said.

      May as well have said "we wanna get ours before the gold rush ends". OpenAI seemed just as frank in terms of treating the lawsuits as an expense instead of an ethical quandary.

      6 votes
  2. [4]
    443
    Link

    archive.is (bypasses paywall)

    Character.AI response: https://x.com/character_ai/status/1849055407492497564

    We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/

    7 votes
    1. [3]
      Rudism
      Link Parent

      I looked at the linked "safety features," and this one stuck out to me:

      Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.

      Makes it sound like the feature is that they'll start banning anyone who mentions suicide or other hot-button topics. Kind of tips their hand that the features are more for their own safety against liability versus the actual safety of the people using their chatbots. Not that I'm surprised or expected anything different, just thought this seemed a bit like they're saying the quiet part out loud.

      5 votes
      1. [2]
        unkz
        Link Parent

        I’m not sure what you take issue with here? It sounds like they are going to look for and respond to people talking about suicide. How is this negative? Or, what is the alternative? Not looking for it or not responding to it?

        1 vote
        1. Rudism
          Link Parent

          Not taking issue with anything.

          Just pointing out that in the context of the given story, when they say "we're implementing safety features" the implication is they're doing things to improve the safety of children on their platform. But when they use language like "violates our terms or community guidelines" it sounds more like they want to keep their platform safe from the children, subverting my initial interpretation of the word "safety."

          1 vote
  3. [2]
    turmacar
    Link

    Other than the current zeitgeist, I'm not sure AI was the issue. Unfortunately, without treatment, people have been convincing themselves suicide is the only way out, without assistance, for a long time. Will be interesting to see if the case goes somewhere.

    Like a lot of tech startup money grabs, though, I can't imagine reading classic dystopian sci-fi/cyberpunk and coming away with the impression that it would be a good idea to create this company. Enabling someone to retreat from society is not a good thing. The CEO says one of their primary demographics is people struggling with depression and loneliness. Those people more than anyone need personal connections and professional help, not an echo chamber.

    7 votes
    1. sparksbet
      Link Parent

      Unfortunately, without treatment, people have been convincing themselves suicide is the only way out, without assistance, for a long time.

      True, but I imagine the degree of legal (and probably moral) liability on character.ai's part would still be different if the chatbot said "just kill yourself" than if it referred them to a suicide hotline.

      4 votes
  4. [3]
    DavesWorld
    Link

    Sewell was diagnosed with mild Asperger’s syndrome as a child, but he never had serious behavioral or mental health problems before, his mother said. Earlier this year, after he started getting in trouble at school, his parents arranged for him to see a therapist. He went to five sessions and was given a new diagnosis of anxiety and disruptive mood dysregulation disorder.

    some (chatbots) market themselves as a way of combating the so-called loneliness epidemic.

    And when users are experiencing a mental health crisis, their A.I. companions may not be able to get them the help they need.

    First, the NYT is currently in litigation against OpenAI, Microsoft AI, and probably other AI companies I may have forgotten about. They are trying to push the argument that AI processing of any and all content the NYT might have published at any point is a copyright violation. So there's an inherent conflict of interest in NYT AI "coverage."

    If you look back over recent months, as they've posted pieces targeting AI, it's easy to see how they have a vested interest in fanning the flames and fear-mongering with anti-AI stories as a way to try to "subtly" build public sentiment in their favor (and against AI).

    Second, on the specific case cited, we have here (again) an autistic person who's being frozen out of basic social interaction. "Trouble at school," when combined with an autistic person, is very often the autistic person being themselves, which upsets, alarms, and irritates non-autistic people: other students as well as teachers and administrators.

    The school punted to a counselor, who came back with yet another diagnosis. Nothing is mentioned about what help Sewell might have been offered. Which, to be clear, needed to not be drugs but some sort of therapy that would have provided him an outlet to understand and learn "how to fit in."

    And that is a super dangerous and volatile subject, since most such 'therapies' basically consist of telling the autistic "stop being yourself." They want the autistic to sit down, shut up, don't fidget, don't ask questions, stop being "problematic" with curiosity and interests and so on. But, theoretically, a good and invested therapist who tried to act as a bridge between Sewell's behaviors and how they're perceived, and who worked with Sewell to understand these perceptions and look for ways to fit in more gently and non-invasively ... that could have been helpful to Sewell.

    Could have saved his life.

    Helpful here is defined as an outcome where Sewell stops feeling so isolated and outcast. A non-helpful outcome, which is what will usually be pushed, would be one where Sewell is very aggressively 'taught' to "stop being yourself" as mentioned above.

    Third, a lot of people do not have access to "mental health resources." Waiting lists are insane. Trained mental health professionals and services are often few and far between, and expensive even if available. So that little comment NYT threw in about a chatbot not being able to get them the mental health help they might need is just missing the forest for the handful of pine cones lying scattered around on the ground.

    Sewell had "access", but he only got five sessions. And there's zero mention of what "help" was provided, just a diagnosis that doesn't involve his autism. Which, again, isn't a disease. Or a problem to be fixed. Autism is the same as being gay, bi, being black or white or latin, being young or old. Autism is who you are. It is incredibly offensive to be told who you are is wrong, and you need to cut it out.

    So what "fix" were they going to offer Sewell in five sessions? Especially when they just ended the fifth with a foisted off "diagnosis" of something that's not going to address the autism that is likely the reason Sewell was referred in the first place.

    Most people won't have access. Many folks are budget crunched for all the usual reasons thanks to the society we live in crunching them. When a therapist or doctor is going to charge two to three hundred per hour ... a lot of the time that's food for the week and gas to get to your low pay job. Many therapists aren't even well trained in autism or in how autistics aren't "broken 'normal' people". So the money you scrape up by going without is just wasted.

    Reference the line in the second X-Men movie. "Have you tried ... not being a mutant?" That's what a lot of untrained "mental health professionals" will lob at autistic people who do end up in front of them. "Have you tried ... not being autistic?" Very few such "professionals" have any specific knowledge of or training in autism, and as a result they often do more harm than good when they sit down with an autistic.

    Further, there's a very clear theme in autism. That of becoming invisible the moment an autistic becomes an adult. You rarely see autism discussed in an adult context. Most of the time, any mention (certainly any media mention) will be of children. Autistic adults just don't count, aren't important. Which is why the dearth of affordable and autistic-specific therapists, counselors, and other mental health professionals trained in autism is such a huge problem, and the reason I mention it.

    Which brings us back to these AI chatbots. Most autistic people are socially shunned, simply because they're "weird" or "strange." Because they act in ways, or have interests, that make neurotypical people "uncomfortable." Note, I'm not talking about autistic people who are criminals, who engage in criminal behavior. Just a normal neurodiverse autistic person who, simply by existing, bothers neurotypical people who respond by pushing that ND person away.

    Isolating them.

    Solitary confinement is known to be cruel and unusual punishment.

    Solitary confinement has been associated with significant negative effects on mental health.[68] Research indicates that the psychological effects of solitary confinement may encompass a range of adverse symptoms including "anxiety, depression, anger, cognitive disturbances, perceptual distortions, obsessive thoughts, paranoia, and psychosis."[69] These symptoms are so widespread among individuals held in solitary that some psychiatrists have labeled them "SHU Syndrome," with SHU standing for Special Housing Unit or Security Housing Unit. In a 1983 journal article, Stuart Grassian described SHU Syndrome as a "major, clinically distinguishable psychiatric syndrome."[70] Grassian notes solitary confinement can cause extremely vivid hallucinations in multiple sensory modalities including visual, auditory, tactile, olfactory. Some other effects include dissociative features including amnesia, motor excitement with aimless violence and delusions.[70]

    For those who enter the prison system already diagnosed with a mental illness, solitary confinement can significantly worsen their condition. Incarcerated individuals with mental health conditions often "decompensate in isolation, requiring crisis care or psychiatric hospitalization."[69] The lack of human contact and sensory deprivation that characterize solitary confinement have been shown to cause permanent or semi-permanent changes to brain physiology.[71] Alterations to brain physiology can lead individuals to commit suicide or self-harm.[72]

    Social isolation is effectively the same thing. And, to be clear, we're not talking about "has only a few friends." Many autistics have none, no social outlets or inputs. As a developing child, when you're just beginning to try to figure yourself out, explore your maturing mind, it's crippling to be so isolated. Children are cruel, and teenagers are basically devils eager to see what happens when they push the buttons and pull the levers that Make Things Happen To Others.

    That isolation happens in school, it happens in life after school. Here, it was probably happening to Sewell. And he found an outlet that seemed to ease the pain of rejection, of isolation. Then he did something irreversible for some reason that made sense to him at the time, but that he thought would help him, that would ease his pain.

    Autism is a risk factor for suicide. Some studies show a seven times greater likelihood of an autistic committing or attempting suicide. Medicine may argue over the exact rate, and some sources point to it "only" being three or even four times more likely, but it's fairly well established that autistic people are more likely to die by suicide than neurotypical people.

    Something else that rarely gets any mention. Why should it? Autistic people bother neurotypical people. Out of sight, out of mind. Oh, the weirdo killed himself? Hmm, bummer. Well, what's for dinner?

    Chatbots aren't the problem. AI isn't the problem. Estimates vary, but somewhere between two and five percent of the world's population is autistic. It might be higher by a few more percentage points. But it's not high enough to make it a problem for neurotypical people when they shuffle the bothersome, disruptive, irritating autistic people off to the corner where they won't bother "normal" folks.

    Why are suicide rates higher in autistics? What could it be ... hmm, guess it's a mystery. But we should totally ban any outlet that might provide a little entertainment, interest, or even faint hope to the weirdos. Yup, fuck chatbots. They're disruptive.

    Just like autistic people.

    6 votes
    1. raze2012
      Link Parent
      1. At this point I don't think there's any newspaper that isn't either fighting or utilizing AI, so it's hard to discount all bias.

      2. As usual, we will put aside societal problems and target the immediate detractor in a tragedy. In this case, an app that didn't handle therapy as well as a therapist (and marketed as such). It'll be an interesting lawsuit, if nothing else

      3. Following on the above, society wants short-term solutions, not a long-term approach and understanding. A politician may not even take credit for a long-term plan proposed in their term. Current incentives and human wiring simply focus on what's around them, not the cause nor the proper solution.

      1 vote
    2. EgoEimi
      Link Parent

      NYT's insinuation that we just need to throw more mental health resources at people is indeed tone-deaf when those resources are very scarce.* Chatbots, when done properly, can provide immediate relief to people struggling with mental health and social connection.

      In the article, this chatbot appeared to be the one bright spot in the boy's life. In a life where he struggled with an outside world that didn't make sense to him, this chatbot provided him an inner sanctuary that did.

      I'm very pro-chatbot for practical purposes (basic medical and mental health guidance, helping with Webpack configuration, etc.) while very opposed to the normalization of AI chatbots as personal companions for philosophical reasons, but I feel that this article has unfairly portrayed the situation.

      *Aside: This seems to be touted as a panacea for social problems everywhere: let's deploy more mental health resources. But there aren't enough resources right now.

      1 vote
  5. Thrabalen
    Link

    AI chatbots are going to be an immensely interesting tool in our future, as soon as Artificial Intelligence is buoyed by Artificial Maturity and Artificial Responsibility.

    I'd love it if we also worked on the natural versions of those eventually as well.

    (It goes without saying that we've also got to remove the corruption and thievery elements from AI alongside those previously mentioned fixes.)