OpenAI says hundreds of thousands of ChatGPT users may show signs of manic or psychotic crisis every week
- Author: Louise Matsakis
- Published: Oct 27 2025
- Word count: 433 words
Is this high or low? These figures need to be grounded. The CDC estimates that 4-5% of Americans experience suicidal ideation in any given year. What percentage of them show explicit indicators of potential suicidal planning or intent? I'm not sure.
I remember when there was so much controversy over worker suicides at Foxconn: "wake up, people are killing themselves to make your iPhones!" But critics overlooked the fact that 18 suicides in a workforce of almost a million people yields a suicide rate that was actually much lower than both China's and the global suicide rate, rural and urban alike. The reality was that for many poor, uneducated migrant workers, work life at Foxconn (above-average pay, free meals, and free accommodations) was much better than anything they had back in the countryside or anything they could find in the cities, so they were generally pretty happy about the work, even though it looks undesirable to affluent westerners.
As someone who does suicide prevention work, I'd need more specific data to have any sense of how to compare it to the general population. Those could be the same people every week or an entirely different population.
Most folks do express their suicidal intent in some manner prior to an attempt. But I don't know what standard they're using, or whether they're counting "I just can't do this anymore" alongside statements of a specific plan.
I just had a student whose psychosis manifested as them saying AI had taken over their phone. I'm not sure if they were using an LLM or if the AI was just helping the demons in the mirror. But, uh, this is why, no matter how helpful some folks find it, I'm deeply opposed to AI therapy.
And this is with their own benchmarks?
I'm not surprised it's really bad, but these numbers are insane. I'd already come to the opinion, for other reasons, that we need to teach emotional regulation in school, and now I wonder if we also need it to make people aware of the emotional feedback loops LLMs can cause.
Also...
Am I overthinking this, or is there an implicit claim here? The article doesn't say anything about how much clinicians agree with the numbers OpenAI gives, but the way it's presented makes it appear as though they do.
I hope it's just the rather cynical part of me, but it feels like something presented to look more favourable than it actually is. Which, given the numbers in the article, would be horrifying.
Worth keeping in mind that ChatGPT has 800 million weekly active users (from a quick Google search).
Not saying those numbers are low, but in such a large pool of users, you're bound to have a lot of people that have X or Y issues
Sam Altman, the CEO, is a sociopathic narcissist and a pathological liar. Since he is running the company, there is a 0% chance that they actually care about doing anything about this problem other than optics. There may be some employees or even higher-level managers who care, but OpenAI desperately needs profits, which guarantees that any meaningful change will be lip service.
The cynic in me thinks that the situation is way worse than we realize, and that these types of announcements are simply them getting ahead of it and taking the chance to control the narrative.
The blog post the article is based on isn't signed, but I doubt Altman wrote it himself.
They taught us emotional regulation in elementary school. Was my public school district abnormal?
I've never heard of it before, so I'm actually happy to hear that it does happen sometimes! I've basically had to learn much of it myself in my 20s.
I was an elementary school counselor who taught emotional regulation through a pilot program. I think the practice is scattershot (like most educational programs), and it really needs to be infused into the curriculum at all levels rather than a once-a-week thing in elementary school.
Classic. Do whatever it takes as long as it doesn't slow down how much people use your product. What's the term for that? Perverse incentive? Maybe this is a hot take, but talking to a chatbot for hours might be a bad thing, even if it doesn't lead to psychosis.
I don't know where it ranks, but one of the most depressing aspects of this story is that we are so far removed from a functioning regulatory system that the thought of some government-enforced protection from this seems like pure fantasy.
FYI, there have been some other submissions on this topic.
An investigation of AI induced mental illness
Over twenty-one days of talking with ChatGPT, an otherwise perfectly sane man became convinced he was a superhero
Regarding "AI-induced mental illness", I'm skeptical that these people were perfectly normal to start with. It feels more like how some people are predisposed to psychosis and will likely have a decline in mental health eventually, but can accelerate that decline by consuming cannabis or hallucinogens.
Granted, we can still count it as net harm from the drugs, or in this case the AI, if they move that decline earlier.
That's a very understandable position. Personally, I feel like it's a Stockholm-syndrome type of thing, where it sounds completely ridiculous on its face but could still be a specific, recognized phenomenon. There are plenty of things that could send an otherwise healthy person down the path to mental illness.
Obviously, I'm not in a position to speculate with any authority, but my pure vibes-based read, after a handful of articles and videos on the topic, is that the loneliness and isolation that open the door to extreme use, combined with the sycophantic nature of the chatbot, are a dangerous combination that matters more than any predisposition to mental illness.
Offtopic, but the reason Stockholm Syndrome "sounds completely ridiculous on its face" is probably because it may not be real (see "Criticism").
I agree that it likely doesn't induce it out of nowhere, but instead can feed into and exacerbate existing mental health issues.
I think part of the problem is how accessible AI is compared to other accelerants, and how it can also actively reinforce it. With drugs, it's still your own mind, but AI basically serves as an external force, and I think there's a subconscious recognition of the difference. That external "validation" can probably speed things up a lot faster.
I also think of how I've used ChatGPT as a sounding board for story ideas to help settle details. Just having a "conversation" can really get the creative juices flowing and produce ideas I wouldn't have come up with just reflecting on my own. I imagine it can "help" people with mental health issues explore their worries or delusions in new directions they wouldn't have thought of either.
It would be really dangerous to explore your delusions with something that cannot or will not ground you in reality and instead will affirm them. If you already believe that demons are watching you in the mirror, you generally can't be talked out of that anyway, and ChatGPT is more likely to be supportive of your inherently delusional beliefs.
Trust me, when I first started, I 100% thought I could talk a delusional person in psychosis out of their psychosis. Fun fact: you cannot. The machines are not people and they cannot do therapy.
If it's not smart enough to tell you to seek professional help which would be best practices, then it's clearly not smart enough to do talk therapy.
I know of one person with psychosis whose issues are exacerbated by LLMs. I've heard them talk about how they've designed a UFO and other marvels of technology using AI. But I'm certain this isn't a result of the AI; the AI is simply making it worse.
From the article:
…
My feelings are mixed on this. I've been concerned about AI psychosis for a few months, and I think the risk that chatbots pose to human relationships is much greater than the risk of superintelligence or mass unemployment, at least right now. I'm worried about what it means for someone to choose to talk to ChatGPT over anyone else in their life.
However, it's also worth noting that these statistics include people who voice these thoughts to ChatGPT unprompted. I'm only just coming to terms with the fact that there are people who are unwilling to talk to anyone about their issues except ChatGPT. I can see why an immediate response, available at any time, in a venue that seems private, is very appealing to them.
I despise that the technology is in the hands of a company that needs to deliver substantial returns to all of its investors. Yet it's also true that their reach opens up an opportunity to help people, or to intervene earlier, in a lot of cases that would normally fall through the cracks.
And to show how much OpenAI cares, we're unveiling a brand new 'mental support' subscription! For only an additional $9.99 we'll ensure you only rarely get told to kill yourself and tell no one!
If you don't mind paying a little more, for $12.99 it comes bundled with Disney+
That's basically the opposite of what happened. OpenAI didn't advertise this service as being suited for mental health conversations at all. But when you create a popular, general-purpose service that hundreds of millions of people use, people with mental health issues are going to show up.
So, they have to do something. They're doing something. Maybe it's not enough, but it's a start.
It was a snarky joke reflecting how much I trust OpenAI (not at all) after they abandoned their non-profit status AND their open-source IP (while keeping the absurd name), then got rid of their ethics team and continue to release products trained and controlled irresponsibly (especially Sora).
Fair enough that so far it's just a free report trying to define the problem, but I very much doubt they will do anything that isn't just PR or bottom-line boosting.