California parents find grim ChatGPT logs after son's suicide
Link information
This data is scraped automatically and may be incorrect.
- Title: ChatGPT coached a California teenager through suicide, his family's lawsuit says
- Authors: Stephen Council
- Published: Aug 26, 2025
- Word count: 1108 words
I’m someone who’s anti-AI, and even I would trust a company like Google, with its horrendous privacy record, before I’d trust OpenAI.
This is horrific. If a human had done this, we would call it cyberbullying at best, and more likely hold them criminally responsible for his death.
Human bullies can pretend to be a victim's friend, work to isolate them, and push progressively towards self-harm.
But a human bully doesn't pretend to care nearly as well as a machine with no feelings, trained on all the human output of words that seem to provide comfort and care. A human bully gets tired and goes to sleep, or loses interest.
Why are governments so shy about prohibiting AI from being used for mental health support?
What would that change in this case? ChatGPT isn’t a mental health product.
Ultimately, the number of companies actually specializing in mental health LLMs is small, and their user bases are small. Most people using LLMs for mental health support aren't doing so explicitly; that's part of why LLMs are so popular for it, being more casual and more accessible.
(Cynical hat on) It's a small number of companies so far, but health and wellness is incredibly lucrative, and a prime target for employers choosing health plans that offer cheap machines instead of proper mental health assistance for their employees.
I share your cynicism. Being ethical is almost always a cost burden, so businesses are not rewarded for being ethical (excepting industries that are centered around ethical alternatives, which necessarily asks the consumer to take on the cost burden, limiting their reach).
That's why I assume any ethics-based restraint shown in any market is guaranteed to be temporary. Not (just) because decision makers change and can take the whole doctrine with them, but because a business paying the ethics cost has to fight harder to survive than a competitor that doesn't. Such businesses also seem to have smaller market caps, since only a subset of people is willing to pay more for ethically sourced goods.
So, back on topic, I'm pretty sure you're right that we'll see every conceivable type of bottom-feeding product that AI can enable, except what is explicitly forbidden by law. And I'm pessimistically expecting that it will take a high enough (or high-profile enough) body count before that changes.
I am literally unable to pass the 'are you a human' check of that website and read the article. :/
Have you seen this year's Oscars short film winner, “I’m Not a Robot”, and are you sure?
In all seriousness, I loathe certain captchas, especially the "click all squares that contain X" ones, because I can't deal with the ambiguity of whether <5% of X being in a square means it contains X.
I think you’d enjoy the captcha buster extension. It takes the audio captcha and feeds it to a speech-to-text service (Google’s?) to complete the captcha. It rarely fails.
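The pipeline is roughly: grab the audio challenge, transcribe it with a speech recognition service, then submit the transcript as the answer. Here's a minimal sketch of that idea in Python using the `speech_recognition` package and Google's free web recognizer; the file name and the overall flow are illustrative assumptions on my part, not the extension's actual code:

```python
# Sketch of the audio-captcha pipeline such an extension automates.
# Assumes the challenge clip has already been saved as "captcha.wav"
# (hypothetical file); illustrative only, not the extension's real code.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("captcha.wav") as source:
    audio = recognizer.record(source)  # read the whole clip into memory

try:
    # Send the clip to Google's free web speech API; the returned
    # transcript would then be typed into the captcha's answer field.
    transcript = recognizer.recognize_google(audio)
    print("Transcript to submit:", transcript)
except sr.UnknownValueError:
    print("Speech was unintelligible; request a new challenge.")
except sr.RequestError as e:
    print("Recognition service unavailable:", e)
```

The irony, of course, is that the accessibility fallback meant for humans is the easiest path for a machine.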
I love the idea of automated captcha busters, but I also lament the necessity: captchas were supposed to help humans frustrate machines, and now we have the opposite.
Try this https://archive.is/k6kG4