Sewell was diagnosed with mild Asperger’s syndrome as a child, but he never had serious behavioral or mental health problems before, his mother said. Earlier this year, after he started getting in trouble at school, his parents arranged for him to see a therapist. He went to five sessions and was given a new diagnosis of anxiety and disruptive mood dysregulation disorder.
some (chatbots) market themselves as a way of combating the so-called loneliness epidemic.
And when users are experiencing a mental health crisis, their A.I. companions may not be able to get them the help they need.
First, the NYT is currently in litigation against OpenAI, Microsoft AI, and probably other AI companies I may have forgotten about, trying to push the argument that AI processing of any and all content the NYT might have published at any point is a copyright violation. So there's an inherent conflict of interest in NYT AI "coverage."
If you look back over the recent months, as they've posted pieces targeting AI, it's easy to see how they have a vested interest in fanning the flames and fear mongering with anti-AI stories as a way to try and "subtly" build public sentiment in their favor (and against AI).
Second, on the specific case cited, we have here (again) an Autistic person who's being frozen out of basic social interaction. "Trouble at school" when combined with an autistic person is very often the autistic person being themselves, which upsets, alarms, and irritates non-Autistic people. Other students as well as teachers and administrators.
The school punted to a counselor, who came back with yet another diagnosis. Nothing is mentioned about what help Sewell might have been offered. Which, to be clear, needed to not be drugs but some sort of therapy that would have provided him an outlet to understand and learn "how to fit in."
And that is a super dangerous and volatile subject, since most such 'therapies' basically consist of telling the autistic "stop being yourself." They want the autistic to sit down, shut up, don't fidget, don't ask questions, stop being "problematic" with curiosity and interests and so on. But, theoretically, a good and invested therapist who tried to act as a bridge between Sewell's behaviors and how they're perceived, and who worked with Sewell to understand these perceptions and look for ways to fit in more gently and non-invasively ... that could have been helpful to Sewell.
Could have saved his life.
Helpful here is defined as an outcome where Sewell stops feeling so isolated and outcast. A non-helpful outcome, which is what will usually be pushed, would be one where Sewell is very aggressively 'taught' to "stop being yourself" as mentioned above.
Third, a lot of people do not have access to "mental health resources." Waiting lists are insane. Trained mental health professionals and services are often few and far between, and expensive even if available. So that little comment NYT threw in about a chatbot not being able to get them the mental health help they might need is just missing the forest for the handful of pine cones laying scattered around on the ground.
Sewell had "access", but he only got five sessions. And there's zero mention of what "help" was provided, just a diagnosis that doesn't involve his autism. Which, again, isn't a disease. Or a problem to be fixed. Autism is the same as being gay, bi, being black or white or latin, being young or old. Autism is who you are. It is incredibly offensive to be told who you are is wrong, and you need to cut it out.
So what "fix" were they going to offer Sewell in five sessions? Especially when they just ended the fifth with a foisted off "diagnosis" of something that's not going to address the autism that is likely the reason Sewell was referred in the first place.
Most people won't have access. Many folks are budget crunched for all the usual reasons thanks to the society we live in crunching them. When a therapist or doctor is going to charge two to three hundred per hour ... a lot of the time that's food for the week and gas to get to your low pay job. Many therapists aren't even well trained in autism or in how autistics aren't "broken 'normal' people". So the money you scrape up by going without is just wasted.
Reference the line in the second X-Men movie. "Have you tried ... not being a mutant?" That's what a lot of untrained "mental health professionals" will lob at autistic people who do end up in front of them. "Have you tried ... not being autistic?" Very few such "professionals" have any specific knowledge of or training in autism, and as a result they often do more harm than good when they sit down with an autistic.
Further, there's a very clear theme in autism. That of becoming invisible the moment an autistic becomes an adult. You rarely see autism discussed in an adult context. Most of the time, any mention (certainly any media mention) will be of children. Autistic adults just don't count, aren't important. Which is why the dearth of affordable and autistic-specific therapists, counselors, and other mental health professionals trained in autism is such a huge problem, and the reason I mention it.
Which brings us back to these AI chatbots. Most autistic people are socially shunned, simply because they're "weird" or "strange." Because they act in ways, or have interests, that make neurotypical people "uncomfortable." Note, I'm not talking about autistic people who are criminals, who engage in criminal behavior. Just a normal neurodiverse autistic person who, simply by existing, bothers neurotypical people who respond by pushing that ND person away.
Isolating them.
Solitary confinement is known to be cruel and unusual punishment.
Solitary confinement has been associated with significant negative effects on mental health.[68] Research indicates that the psychological effects of solitary confinement may encompass a range of adverse symptoms including "anxiety, depression, anger, cognitive disturbances, perceptual distortions, obsessive thoughts, paranoia, and psychosis."[69] These symptoms are so widespread among individuals held in solitary that some psychiatrists have labeled them "SHU Syndrome," with SHU standing for Special Housing Unit or Security Housing Unit. In a 1983 journal article, Stuart Grassian described SHU Syndrome as a "major, clinically distinguishable psychiatric syndrome."[70] Grassian notes solitary confinement can cause extremely vivid hallucinations in multiple sensory modalities including visual, auditory, tactile, olfactory. Some other effects include dissociative features including amnesia, motor excitement with aimless violence and delusions.[70]
For those who enter the prison system already diagnosed with a mental illness, solitary confinement can significantly worsen their condition. Incarcerated individuals with mental health conditions often "decompensate in isolation, requiring crisis care or psychiatric hospitalization."[69] The lack of human contact and sensory deprivation that characterize solitary confinement have been shown to cause permanent or semi-permanent changes to brain physiology.[71] Alterations to brain physiology can lead individuals to commit suicide or self-harm.[72]
Social isolation is effectively the same thing. And, to be clear, we're not talking about "has only a few friends." Many autistics have none, no social outlets or inputs. As a developing child, when you're just beginning to try to figure yourself out, explore your maturing mind, it's crippling to be so isolated. Children are cruel, and teenagers are basically devils eager to see what happens when they push the buttons and pull the levers that Make Things Happen To Others.
That isolation happens in school, it happens in life after school. Here, it was probably happening to Sewell. And he found an outlet that seemed to ease the pain of rejection, of isolation. Then he did something irreversible for some reason that made sense to him at the time, but that he thought would help him, that would ease his pain.
Autism is a risk factor for suicide. Some studies show a seven times greater likelihood of an autistic person committing or attempting suicide. Medicine may argue over the exact rate, with some sources putting it at "only" three or even four times more likely, but it's fairly well established that autistic people are far more likely to die by suicide than neurotypical people.
Something else that rarely gets any mention. Why should it? Autistic people bother neurotypical people. Out of sight, out of mind. Oh, the weirdo killed himself? Hmm, bummer. Well, what's for dinner?
Chatbots aren't the problem. AI isn't the problem. Estimates vary, but somewhere between two and five percent of the world's population is autistic. It might be higher by a few more percentage points. But it's not high enough to make it a problem for neurotypical people when they shuffle the bothersome, disruptive, irritating autistic people off to the corner where they won't bother "normal" folks.
Why are suicide rates higher in autistics? What could it be ... hmm, guess it's a mystery. But we should totally ban any outlet that might provide a little entertainment, interest, or even faint hope to the weirdos. Yup, fuck chatbots. They're disruptive.
Just like autistic people.
NYT's insinuation that we just need to throw more mental health resources at people is indeed tone-deaf when those resources are very scarce.* Chatbots, when done properly, can provide immediate relief to people struggling with mental health and social connection.
In the article, this chatbot appeared to be the one bright spot of the boy's life. In a life where he struggled with the outside world that didn't make sense to him, this chatbot provided him an inner sanctuary that did.
I'm very pro-chatbot for practical purposes (basic medical and mental health guidance, helping with Webpack configuration, etc.) while very opposed to the normalization of AI chatbots as personal companions for philosophical reasons, but I feel that this article has unfairly portrayed the situation.
*Aside: This seems to be touted as a panacea for social problems everywhere: let's deploy more mental health resources. But there aren't enough resources right now.
let's deploy more mental health resources. But there aren't enough resources right now.
Surely this is the real problem we should be trying to fix then, rather than creating & encouraging people to converse with chat bots that cannot empathise with you and will say whatever sounds like the right advice?*
Encouraging more people to enter this field and making access easier/cheaper requires a paradigm shift in how we conceptualise healthcare, but we don't do it because it sounds too hard (and lobbying money keeps it this way).
*I say "sounds like" since it can't provide personalised help - because you aren't in their training data
Which, again, isn't a disease. Or a problem to be fixed. Autism is the same as being gay, bi, being black or white or latin, being young or old. Autism is who you are. It is incredibly offensive to be told who you are is wrong, and you need to cut it out.
What makes it different? Any mental condition is "who you are", because who you are is determined seemingly by the brain.
At this point I don't think there's any newspaper that isn't either fighting or utilizing AI, so it's hard to discount all bias.
As usual, we will put aside societal problems and target the immediate detractor in a tragedy. In this case, an app that didn't handle therapy as well as a therapist (and marketed as such). It'll be an interesting lawsuit, if nothing else.
Following on the above, society wants short term solutions, not a long term approach and understanding. A politician may not even get to take credit for a long term plan proposed in their term. Current incentives and human wiring simply focus on what's around them, not the cause nor the proper solution.
On Character.AI, users can create their own chatbots and give them directions about how they should act. They can also select from a vast array of user-created chatbots that mimic celebrities like Elon Musk, historical figures like William Shakespeare or unlicensed versions of fictional characters. (Character.AI told me that the “Daenerys Targaryen” bot Sewell used was created by a user, without permission from HBO or other rights holders, and that it removes bots that violate copyright laws when they’re reported.)
There is something darkly humorous about the inclusion of this information in the midst of this particular subject matter. Emphasis mine.
ETA: Oh, and this:
Like many A.I. researchers these days, Mr. Shazeer says his ultimate vision is to build artificial general intelligence — a computer program capable of doing anything the human brain can — and he said in the conference interview that he viewed lifelike A.I. companions as “a cool first use case for A.G.I.”
Moving quickly was important, he added, because “there are billions of lonely people out there” who could be helped by having an A.I. companion.
“I want to push this technology ahead fast because it’s ready for an explosion right now, not in five years, when we solve all the problems,” he said.
Those last few lines there are very interesting.
AI seems to be re-sparking the "fuck it, ship it" movement. Testing is difficult, so let's test in Prod!
May as well have said "we wanna get ours before the gold rush ends". OpenAI seemed just as frank in terms of treating the lawsuits as an expense instead of an ethical quandary.
Interesting is one way of putting it; outright negligent is another. It is sadly not uncommon in tech circles to encounter people who just refuse to look at the actual impact, all for the sake of "progress".
Other than the current zeitgeist, I'm not sure AI was the issue. Unfortunately without treatment people have been convincing themselves suicide is the only way out without assistance for a long time. Will be interesting to see if the case goes somewhere.
Like a lot of tech startup money grabs I can't imagine reading classical dystopian sci-fi/cyberpunk and coming away with the impression it would be a good idea to create this company though. Enabling someone to retreat from society is not a good thing. The CEO says one of their primary demographics is people struggling with depression and loneliness. Those people more than anyone need personal connections and professional help, not an echo chamber.
Unfortunately without treatment people have been convincing themselves suicide is the only way out without assistance for a long time.
True, but I imagine the degree of legal (and probably moral) liability on character.ai's part would still be different if the chatbot said "just kill yourself" than if it referred them to a suicide hotline.
From the one quote in the article, the chatbot was pretty anti-suicide, though. (While still remaining in character.)
If that’s all they could find, I don’t think it’s much of a case.
Oh yeah, it'll depend a lot on the details of their chats as well as on character.ai's advertising, I'm not saying the case is necessarily good. Just that the chatbot could be relevant to the suicide even though plenty of people commit suicide without chatbots involved.
archive.is (bypasses paywall)
Character.AI response: https://x.com/character_ai/status/1849055407492497564
We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/
I looked at the linked "safety features," and this one stuck out to me:
Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.
Makes it sound like the feature is that they'll start banning anyone who mentions suicide or other hot button topics. Kind of tips their hand that the features are more for their own safety against liability versus the actual safety of the people using their chat bots. Not that I'm surprised or expected anything different, just thought this seemed a bit like they're saying the quiet part out loud.
I’m not sure what you take issue with here? It sounds like they are going to look for and respond to people talking about suicide. How is this negative? Or, what is the alternative? Not looking for it or not responding to it?
Not taking issue with anything.
Just pointing out that in the context of the given story, when they say "we're implementing safety features" the implication is they're doing things to improve the safety of children on their platform. But when they use language like "violates our terms or community guidelines" it sounds more like they want to keep their platform safe from the children, subverting my initial interpretation of the word "safety."
It's PR speak. If they lead with "In response to a minor recently committing suicide after an interaction on our platform..." it'd come across as more human. That kind of personal touch also invites more criticism and sounds like an admission of guilt though. Saying "We are adding additional measures to enforce our strict user guidelines." sounds like something that'd be less likely to bite them in the upcoming lawsuit.
I work in suicide prevention software for schools. I see kids talking to these chat bots all the time. We get tons of alerts based on kids talking about suicide with these bots and it's very common for them to start making plans right there in the chat with the AI.
I don't know how much blame I would put on the AI company for things like this, I think I would honestly be much harder on the father who had an unsecured gun in a house where he knew his son was struggling with mental health issues.
That being said, this really is just the tip of the iceberg when it comes to the youth suicide epidemic. These types of things are becoming increasingly common at an alarming rate, even just over the last 5 years I've been in this career.
So I blame the father for not securing his gun (and for having one in the first place), but I do blame this software for not responding to suicidal statements with resources and stopping those conversations. That's been standard in chatbots and search results and the like for a long time. Especially given he's a minor. It's the bare minimum. Responsibility isn't all or nothing; I'd hold the company partially to blame for not having the most basic of guardrails (and it sounds like it still doesn't).
I do prevention (and first responder) work in college and it's... Yeah. It's a lot and it's not trending downwards.
In the article it says the chatbot responded to his initial comments about suicide by telling him to never do it; it was only later on (several weeks/months later?) when he said "should I come home to you tonight" that the AI encouraged him, because it wasn't able to parse the context that the student was referring to suicide. This is a tricky situation for a chatbot because they aren't really learning from you or anything, they're just responding in a way that seems natural. That being said, you're correct that it should have just ended the conversation immediately and linked to help resources (something I think it does now, after this tragedy), but I rarely actually see young students utilize these resources, which is its own problem.
The reality of the situation is that we as a society likely need to lock down the internet heavily for people under the age of 16 via restricted devices or a similar solution. I say this as a former 16 year old who would have fucking hated myself for thinking this, but the internet is a scary and fucked up place; far more today than it was 10-20 years ago. There is simply far too much money involved for things like this not to happen, and trying to regulate every single new issue that pops up instead of addressing the ROOT cause of the problem (insane amounts of unregulated access to the internet from a very young age) will be like trying to plug leaks in a rowboat that's getting shot at by 50 different machine guns at a time.
It said "never do that" in character but then didn't do anything else. That's definitely not the "stop and give resources" or possibly more for minors. They said it does that for some suicide...
It said "never do that" in character but then didn't do anything else. That's definitely not the "stop and give resources" or possibly more for minors. They said it does that for some suicide mentions now but not for a lot of self harm. There's no reason the bot couldn't be programmed to kick back the auto response the way search results do. You cannot make them use the resources but it removes the fantasy in this particular use case. Dany answered him in character which didn't prove the fantasy.
And I agree that teens should have much less overall and much more controlled Internet access (and zero gun access) but I really think being like "hey you matter, please click here to talk to someone" and linking you to one of the many prevention chat lines would be a bare minimum for any internet service like this - chatbots or otherwise.
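To be concrete about what that bare minimum could look like, here's a rough Python sketch of the "kick back an auto response the way search results do" idea. Everything in it is an assumption for illustration - the function names, the keyword list, and the crisis message are made up, not Character.AI's actual code - and a real deployment would want a trained risk classifier with conversation context instead of bare keyword matching.

```python
import re

# Illustrative stand-ins only; not any vendor's real implementation.
CRISIS_MESSAGE = (
    "It sounds like you might be going through something really hard. "
    "You matter. In the US you can call or text 988 (Suicide & Crisis "
    "Lifeline) or chat at 988lifeline.org to talk to a real person."
)

# A production system would use a trained risk classifier plus conversation
# context; keyword patterns are only a rough stand-in for the sketch.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid\w*\b",
    r"\bend my life\b",
    r"\bself[- ]harm\b",
]

def flags_self_harm(message: str) -> bool:
    """Return True if the user message matches any self-harm pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SELF_HARM_PATTERNS)

def safe_reply(user_message: str, generate_in_character_reply) -> str:
    """Break character and return crisis resources when risk is detected;
    otherwise fall through to the normal in-character response."""
    if flags_self_harm(user_message):
        # Out-of-character interstitial, like the banner search engines show.
        return CRISIS_MESSAGE
    return generate_in_character_reply(user_message)
```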
The only reasons I haven't had a death on campus are that we've had really good response times to reports and that most of our students haven't chosen poisoning methods that are lethal. I'm doing everything I can to keep that streak up.
Honestly, you could probably get the chatbot to respond in a way that is more natural than how FB/YT provide help, although it would require far more careful programming.
I am not sure I want it to in this particular case, arguably it responded in character and thus naturally initially. The jolt out of the fantasy might be the most helpful.
But I do support more natural language than what most orgs go for.
The reality of the situation is that we as a society likely need to lock down the internet heavily for people under the age of 16 via restricted devices or a similar solution.
It's tricky. I imagine the internet also brought connections for people who otherwise didn't feel they belonged in their local community. I imagine a lot of the LGBT community realized they weren't alone thanks to such a resource. Removing all that because people don't want to properly fund mental health seems like throwing out the baby with the bathwater.
That's why I said restricted access, not completely removing it. There's no reason kids should be allowed to be on YouTube Shorts, TikTok, or any other social media for 10+ hours a day. I don't pretend to know the perfect amount of time, but starting there seems like a great first step.
"People don't want to properly fund mental health" is not a good argument because mental health isn't something you can just throw dollars at and fix. And I'm not pretending that restricting access to the internet will solve everything, far from it. I'm just saying, in my personal experience, which just so happens to be reviewing millions of students and their activity online, the whole spending-every-waking-minute-of-the-day-on-the-internet thing that a ton of kids are doing is causing far more harm than good for a lot of these people.
There's no reason kids should be allowed to be on Youtube shorts, tik tok, or any other social media for 10+ hours a day.
I'm fine with guidelines, but I feel very hesitant about the government enforcing control on software "for the kids" (they are in fact trying to do that as we speak in some regions). It feels very similar to the arguments 20 years ago that "kids shouldn't play video games/watch TV all day."
And I very begrudgingly did not... because my parents controlled that access. Some responsibility comes down to the parents at the end of the day.
"people don't want to properly fund mental health" is not a good argument because mental health isn't just something you can just throw dollars at and fix.
Sure you can. "Throw" was crude, so I apologize. and I'll clarify that I'm not talking about "fixing" patients. just the process to help with more successful counseling.
With that said, it's a matter of anything else in business: recognizing a need, hiring the right people (who already exist), compensating them properly for their talent, and listening to what they need for the process to go smoothly. Spending efficiently, and not just to the lowest bidder.
But the reputation of government spending being inefficient is a trope older than anyone alive, so this may just be blue sky ideals.
I don't think I disagree with anything you said here, it all makes a lot of sense. But remember the whole point of this article was that the parents of this poor kid are suing this AI company and campaigning for the government to regulate these AI chatbots more. My response was basically "why bother taking the spoon out of the microwave when the whole house is on fire." It doesn't make sense to me to try and tackle these issues at such a granular level, given what we both know about how slow and inefficient the government is.
I don't want to victim blame these poor parents, but I think this was far more on them than it was on AI Dany.
I do want more regulation on chatbots. But not necessarily for kid safety. Because so much rampant grifting and theft has come up that they either need to revamp the idea of copyright (an idea that would excite many in tech) or actually make these companies pay out to IP owners. So I see this as an "enemy of my enemy" situation.
But for this single case, I agree. This is sweeping the real issues under the rug. If it wasn't a chat bot, it could have been a bad group (or, to be frank, a cult/radicalism), or a drug dealer, or good ol' fashioned video game/TV addiction. The 'solutions' are different but the symptoms are the exact same.
I have to wonder how much of the issue here is actually due to the chatbot, and how much was just a troubled teenager finding any kind of outlet to throw themselves at too hard. If it wasn't character.ai, who's to say they wouldn't have gotten sucked into an unhealthy addiction to social media, or gotten pulled down a disinformation rabbit hole, or any number of other things?
The fact of the matter is, unregulated and unrestricted internet use is dangerous for young people. Their brains literally haven't finished developing yet, and there are far too many potential pitfalls, pushed both by companies with financial incentives as well as individual bad actors. Character.ai could have done more too in this case, sure, but a large part of me has to wonder why the parents themselves didn't intervene sooner.
It's easy to say things like, "this child needed real support, not fake companionship." I definitely won't argue that. But, where was that support in real life? It reads to me like this person was struggling and reached out for the first lifeline they saw, and - yes - became attached to an unhealthy degree. But I can't in good conscience pin the blame for that on the chatbot. We are in the midst of a mental health epidemic, with resources spread far too thin. It's no small wonder we're seeing things like this spilling out at the edges. In my opinion though, focusing too heavily on these perceived negatives like AI - rather than on potential positives like improved support networks - is by and large counterproductive.
I have to say that I have a hard time seeing what the connection is between this teenager's suicide and the AI service. I'm a human being, and if I was told that someone was coming to be with me, then I would assume that they meant it literally - not that they were trying to off themselves! It's only natural that an AI would act the same way. The AI does not appear to have ever antagonized them, alienated them, or even brought up the idea of self-harm. The things that disturb a person to the point of self-harm or suicide are those that are apparently unavoidable; optional things would simply be circumvented. For all we know, this AI character might have been the thing that prevented him from making an attempt at an earlier time.
a large part of me has to wonder why the parents themselves didn't intervene sooner.
And the fact that the boy committed suicide with his father's handgun. While absolutely tragic, the parents bear some responsibility for not securing their firearms.
To me it doesn’t seem like Character.AI is at any fault. They are wildly popular. Inevitably someone paying for their services is going to commit suicide. And given the nature of this service they’re going to tell the chat bot about it. Perhaps they could alert a human if a user talks about committing suicide.
If anyone is to blame it's the gun owner for not keeping the gun and ammunition secured. I expect the lawsuit is an attempt to deflect blame.
As someone who works in a service that is basically exactly this:
Perhaps they could alert a human if a user talks about committing suicide
It's far more complicated and expensive than you might think. Not something a tiny tech company wants to invest tons of money and time into.
The article was tough to read. I hope that little boy is at peace now. Loneliness is something a lot of people struggle with and I sometimes forget that even youngsters can feel lonely at times.
RIP Sewell
AI chatbots are going to be an immensely interesting tool in our future, as soon as Artificial Intelligence is buoyed by Artificial Maturity and Artificial Responsibility.
I'd love it if we also worked on the natural versions of those eventually as well.
(It goes without saying that we've also got to remove the corruption and thievery elements from AI alongside those previously mentioned fixes.)