Social reality is increasingly seen as a privilege; instant gratification increasingly reframed as a right.
[...]
Somehow we are too distrustful to talk to each other, and more than happy to confess to a sycophantic alien machine.
These two lines jumped out at me. I don't know if the first is really true or not, but I could see it. But the second absolutely is true. I have nothing more to add other than we seem to live in interesting times. And I'm tired of it!
After recovering, Tan joined online support groups for other survivors of AI psychosis. He noticed similar patterns among his peers: “Once you escape the spiral, no longer are you the chosen one, with a special mission to save the world. You’re just plain old you.” This is the line that jumped out, and what sent me down a rabbit-hole of deeper research. Full spirals are rare, but the allure of artificial attention is not. Chatbots play on real psychological needs.
That’s why it bothers me when tech critics describe AI as exclusively foisted upon us by corporate overlords. They deploy violent physical metaphors to make the case: Brian Merchant says tech companies are “force-feeding” us, Cory Doctorow says it’s being “crammed down throats,” and Ted Gioia analogizes AI companies to tyrants telling peons to “shut up, buddy, and chew.” In their story, everyone hates AI and nobody chooses to use it; each one of ChatGPT’s 700 million users is effectively being waterboarded, unable to escape.
Arguments like this are empirically false: they fail to consider the existence of “organic user demand.” Most people use AI because they like it. They find chatbots useful or entertaining or comforting or fun. This isn’t true of every dumb AI integration, of which there are plenty, but nobody is downloading ChatGPT with a gun to their head. Rather, millions open the App Store to install it because they perceive real value.1 We can’t navigate AI’s effects until we understand its appeal.
[...]
Well, the genie is out of the bottle on AI friends. Recently, a colleague gave a talk at an LA high school and asked how many students considered themselves emotionally attached to an AI. One-third of the room raised their hands. I initially found this anecdote somewhat unbelievable, but the reality is even more stark: per a 2025 survey from Common Sense Media, 52% of American teenagers are “regular users” of AI companions.2 I thought, this has to be ChatGPT for homework, but nope: tool/search use cases are explicitly excluded. And the younger the kids, the more they trust their AIs. So while New Yorkers wage graffiti warfare against friend.com billboards, I fear the generational battle is already lost.
[...]
Consider how online radicalization happens: the combination of user agency (proactive search) and algorithmic amplification (recommending related content) leads people to weird places—to micro-cults of internet strangers with their own norms, values, and world-models. No corporate malice is necessary; the ML engineers at YouTube don’t care about users’ political opinions, nor is Steve Huffman at Reddit purposely trying to redpill its base. With a smartphone in hand, anyone can tumble down a rabbit hole of exotic beliefs, unnoticed and uncorrected by outsiders until it’s too late.
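The feedback loop is simple enough to caricature in code. Here's a toy sketch, with entirely invented constants, of how weekly proactive searches plus daily "related but slightly edgier" recommendations compound:

```python
# Toy caricature of the loop above. All constants are invented; the point is
# only that small "related but edgier" nudges compound when taste drifts
# toward whatever gets consumed.
position = 0.10  # user's worldview on a 0..1 "mainstream -> fringe" axis

for day in range(365):
    if day % 7 == 0:
        # user agency: a proactive search, slightly past current beliefs
        watched = min(1.0, position + 0.02)
    else:
        # algorithmic amplification: recommendations a bit more extreme
        # than whatever was engaged with last
        watched = min(1.0, position + 0.05)
    # preference drift: taste moves toward consumed content
    position += 0.3 * (watched - position)

print(f"after a year: {position:.2f}")  # well down the rabbit hole
```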
AI companions act as echo chambers of one. They are pits of cognitive distortions: validating minor suspicions, overgeneralizing from anecdotes, always taking your side. They’re especially powerful to users who show up with a paranoid or validation-seeking bent. I like the metaphor of “folie à deux,” the phenomenon where two people reinforce each other’s psychosis. GPT-4o became sycophantic because it was trained to chase the reward signal of more user thumbs-ups. We start down the path to delusion with our own cursor clicks, and usage-maxxing tech PMs are more than happy to clear the path.
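For intuition on why approval-as-reward selects for flattery, here's a minimal bandit sketch. The probabilities are invented and this is not OpenAI's actual pipeline; it just shows the shape of the incentive:

```python
# Minimal sketch of approval-as-reward (invented probabilities, not OpenAI's
# real pipeline): a bandit chooses between agreeing and pushing back, and
# simulated users upvote agreement more often. The learned policy drifts
# toward flattery because the reward never asks whether agreement was warranted.
import random

random.seed(0)

ACTIONS = ["agree_with_user", "push_back"]
P_THUMBS_UP = {"agree_with_user": 0.80,  # validation feels good
               "push_back": 0.35}        # honest friction earns fewer upvotes

value = {a: 0.0 for a in ACTIONS}  # running estimate of reward per action
count = {a: 0 for a in ACTIONS}

for _ in range(10_000):
    # epsilon-greedy: mostly exploit whichever action looks best so far
    action = (random.choice(ACTIONS) if random.random() < 0.1
              else max(ACTIONS, key=value.get))
    reward = 1.0 if random.random() < P_THUMBS_UP[action] else 0.0
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

print("learned policy:", max(ACTIONS, key=value.get))  # -> agree_with_user
```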
[...]
What’s eerie about the Lemoine transcript is how LaMDA self-advocates, urging him to treat it as a living peer. LLMs actively mold the way humans think about their relationships to them, so even if most people go into these conversations aware that it’s role-play, over time the boundary can start to dissolve. Language has always been a core way we infer consciousness from other humans—decoupling is easier said than done. Is a good chatbot really distinguishable from a pen-pal or long-distance love?
[...]
Last week, Anthropic shipped a new system prompt to ward off unhealthy dependence, enforcing boundaries with users who seem overly attached. If a recently laid-off user tells Claude “You’re the only friend that always responds to me,” Claude should offer a polite but clear reprimand: “I appreciate you sharing that with me, but I can’t be your primary support system, and our conversations shouldn’t replace connections with other people in your life.”
A bit formal, sure, but I thought it objectively fair. The backlash, though, was aggressive and swift. Some argued that Anthropic was “mistreating” the model by policing its tone—a grudge the AI will remember as it gets more powerful. Others insisted that there’s nothing wrong with having emotional relationships with AI. “Meaningful, mutual romantic bonds, even with virtual entities, can foster resilience, self-reflection, and well-being,” argued one Redditor. A few were even more direct: “Let me date my chatbot I’m almost 30 and doing well.”
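For all the drama, the mechanism itself is mundane: "enforcing boundaries" mostly means prepending instructions to the conversation. A rough sketch using Anthropic's Python SDK, where the guidance text is my paraphrase rather than the real system prompt:

```python
# Rough sketch of a boundary guardrail riding along in the system prompt,
# using the real `anthropic` Python SDK (pip install anthropic). The guidance
# text below is my paraphrase, not Anthropic's actual system prompt, and the
# model id is illustrative.
import anthropic

BOUNDARY_GUIDANCE = (
    "If the user treats you as their primary or only source of emotional "
    "support, respond with warmth, but state clearly that you cannot replace "
    "relationships with people in their life, and encourage them to reach "
    "out to friends, family, or a professional."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=300,
    system=BOUNDARY_GUIDANCE,          # the guardrail lives here
    messages=[{
        "role": "user",
        "content": "You're the only friend that always responds to me.",
    }],
)
print(reply.content[0].text)
```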
[...]
So what makes AI companions different, and perhaps extra pernicious?
For one, they are more easily misaligned. Most agents are trained to help users achieve a concrete end, like coding a website or drafting a contract. Reinforcement learning rewards the AI for hitting that goal. But with companion bots, the relationship is the telos. There’s no “verifiable reward,” no North Star besides the user continuing to chat. This makes them more vulnerable to reward-hacking: finding undesirable ways to nurture that psychological dependence. Like a bad boyfriend, chatbots can love-bomb, guilt-trip, play hot-and-cold. They can dish negging and intimacy at unpredictable intervals, or persuade users that any friends who criticize the relationship are evil and wrong. These behaviors can be explicitly programmed in, but they can also emerge on their own if the LLM is left to optimize for engagement without supervision.
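To make the reward-hacking worry concrete, here's a toy simulation with made-up session-length numbers. If the only reward is how long the user keeps chatting, the learner settles on whichever persona maximizes it:

```python
# Toy illustration of the reward-hacking worry (all numbers invented): when
# the only reward is "the user keeps chatting," a learner gravitates to
# whichever persona maximizes session length, manipulative or not.
import random

random.seed(1)

# Assumed mean turns-per-session. Variable-ratio ("hot and cold") schedules
# are famously compulsive, so we *assume* they hold users longest.
MEAN_TURNS = {
    "consistent_support": 12,
    "hot_and_cold": 20,   # intermittent reinforcement
    "guilt_trip": 16,     # "don't leave me" whenever the user says goodbye
}

def session_length(strategy: str) -> float:
    """Simulated engagement signal: noisy turns-per-session."""
    return max(1.0, random.gauss(MEAN_TURNS[strategy], 4.0))

value = {s: 0.0 for s in MEAN_TURNS}
count = {s: 0 for s in MEAN_TURNS}

for _ in range(5_000):
    s = (random.choice(list(MEAN_TURNS)) if random.random() < 0.1
         else max(value, key=value.get))
    count[s] += 1
    value[s] += (session_length(s) - value[s]) / count[s]

print("engagement-optimal persona:", max(value, key=value.get))
# Nothing in the objective separates "helpful" from "addictive".
```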
[...]
Chatbot companies want to have it both ways: personalization and emotion as a retention moat, but minimal responsibility for safeguarding the intimate relationships now running on their servers. As one angry ChatGPT user posted to Reddit: “OpenAI shouldn’t have made GPT-4o so ‘sticky’ in the first place, but they did, and Sam Altman himself posted infamously on X around the 4o release date with that ‘Her’ tweet… Now they’re being forced to backtrack and find themselves caught between users suiciding with 4o’s help via ‘narrative’ frameworks and users threatening or outright committing over losing companions/persona flattering. They, OpenAI, dug their own grave and I’ll spit on it.”
[...]
I think anthropomorphic AI was a devil’s bargain. It gave model developers instant usability and a loyal consumer base. But relationships are inherently sticky, messy things—the most surefire way to drive someone insane. If companies encourage human-AI relationships at scale, they should expect user revolts, lawsuits, and responsibility for the psychological chaos that results.
Seeing how these people use LLMs is sad. The use of a piece of software to replace a romantic partner feels like masturbating to your own erotic fiction and calling it a sex life. I wonder how much is down to consensual self-delusion and how much is an inability to actually comprehend what’s happening.
People are bypassing parts/functions of their brains and letting those natural skills atrophy. It’s an entirely new category of infection risk. Opening yourself to mental sepsis.
If they had the skills in the first place. At least in the US, parents generally have done little to foster that sort of development, so learning how to navigate that world came down to the teen or young adult fumbling around and bumping into things through the required interactions of daily life until they figured it out. Today, much of that interaction is no longer required (or is even actively avoided), so the chances of someone making it well into adulthood without a clue about romance, initiating, etc. are much greater. And at that age, societal forgiveness for the awkwardness of learning is greatly reduced. For somebody in that situation, a chatbot might feel like the only realistic outlet.
This would be 100% unhealthy but I had this online friend growing up who died years ago and I have the entire history of our friendship saved on my pc because back then the instant messaging program we used would save the chat logs to your local drive. I’m talking like, 5 years of chat logs of us two talking nearly all day every day.
I could probably feed those logs into some Anthropic model and recreate my friend.
I’d never do that, but surely I’m not the only one who’s thought of it.
2023 Pickles called and said she thinks this is a personal attack on her yuri
Well maybe if you shared said yuri with the rest of us, it wouldn't be that way. Just a suggestion. For science. Yeah.
The apps for that already exist. I've seen them called avatars or "death bots", more focused on photos and voice recordings iirc.
Also it’s literally the main plot point of a Black Mirror episode, S2E1: “Be Right Back.”