From the article:
Nearly three years after OpenAI launched ChatGPT and ushered in a global frenzy over large language models, chatbots are weaving themselves into seemingly every part of society in China, the U.S., and beyond. For patients like my mom, who feel they don’t get the time or care they need from their health care systems, these chatbots have become a trusted alternative. AI is being shaped into virtual physicians, mental-health therapists, and robot companions for the elderly. For the sick, the anxious, the isolated, and many other vulnerable people who may lack medical resources and attention, AI’s vast knowledge base, coupled with its affirming and empathetic tone, can make the bots feel like wise and comforting partners. Unlike spouses, children, friends, or neighbors, chatbots are always available. They always respond.
Entrepreneurs, venture capitalists, and even some doctors are now pitching AI as a salve for overburdened health care systems and a stand-in for absent or exhausted caregivers. Ethicists, clinicians, and researchers are meanwhile warning of the risks in outsourcing care to machines. After all, hallucinations and biases in AI systems are prevalent. Lives could be at stake.
Over the course of months, my mom became increasingly smitten with her new AI doctor. “DeepSeek is more humane,” my mother told me in May. “Doctors are more like machines.”
[...]
China’s health care system is rife with severe inequalities. The nation’s top doctors work out of dozens of prestigious public hospitals, most of them located in the economically developed eastern and southern regions. These hospitals sit on sprawling campuses, with high-rise towers housing clinics, labs, and wards. The largest facilities have thousands of beds. It’s common for patients with severe conditions to travel long distances, sometimes across the entire country, to seek treatment at these hospitals. Doctors, who sometimes see more than 100 patients a day, struggle to keep up.
[...]
My mother’s reliance on DeepSeek grew over the months. Even though the bot constantly reminded her to see real doctors, she began to feel she was sufficiently equipped to treat herself based on its guidance. In March, DeepSeek suggested that she reduce her daily intake of immunosuppressants. She did. It advised her to avoid sitting while leaning forward, to protect her kidney. She sat straighter. Then, it recommended lotus root starch and green tea extract. She bought them both.
[...]
With her consent, I shared excerpts of her conversations with DeepSeek with two U.S.-based nephrologists.
DeepSeek’s answers, according to the doctors, were full of errors. Dr. Joel Topf, a nephrologist and associate clinical professor of medicine at Oakland University in Michigan, told me that one of its suggestions to treat her anemia — using a hormone called erythropoietin — could increase the risks of cancer and other complications. Several other treatments DeepSeek suggested to improve kidney function were unproven, potentially harmful, unnecessary, or a “kind of fantasy,” Topf told me.
[...]
Dr. Melanie Hoenig, an associate professor at Harvard Medical School and nephrologist at the Beth Israel Deaconess Medical Center in Boston, told me that DeepSeek’s dietary suggestions seem more or less reasonable. But she said DeepSeek had suggested completely wrong blood tests and mixed up my mother’s original diagnosis with another very rare kidney disease.
“It is sort of gibberish, frankly,” Hoenig said. “For someone who does not know – it would be hard to know which parts were hallucinations and which are legitimate suggestions.”
Researchers have found that chatbots’ competence on medical exams does not necessarily translate into the real world. In exam questions, symptoms are clearly laid out. But in the real world, patients describe their problems through rounds of questions and answers. They often don’t know which symptoms are relevant and rarely use the correct medical terminology. Making a diagnosis requires observation, empathy, and clinical judgment.
[...]
As my mother bonded with DeepSeek, health care providers across China embraced large language models.
Since the release of DeepSeek R1 in January, hundreds of hospitals have incorporated the model into their processes. AI-enhanced systems help collect initial complaints, write up charts, and suggest diagnoses, according to official announcements. Partnering with tech companies, large hospitals use patient data to train their own specialized models. One hospital in Sichuan province introduced “DeepJoint,” a model for orthopaedics that analyzes CT or MRI scans to generate surgical plans. A hospital in Beijing developed “Stone Chat AI,” which answers patients’ questions about urinary tract stones.
[...]
China has banned “AI doctors” from generating prescriptions, but there is little regulatory oversight of what they say. Companies are left to make their own ethical decisions. Zhang, for example, has barred his bot from addressing questions about children’s drug use, and he has deployed a team of humans to scan responses for questionable advice. Zhang said he was confident in the bot’s performance overall. “There’s no correct answer when it comes to medicine,” Zhang said. “It’s all about how much it’s able to help the users.”
This article kind of hit home for me as my own elderly mother also uses an LLM for health advice, despite my efforts to warn her about hallucinations. I get it - it's very difficult to resist when you don't feel like real life doctors are offering you adequate solutions and an LLM has infinite time for you and is also willing to propose what sounds like an actionable plan.
Chen, 39, acknowledges that she is what is known in China as a “leftover woman”. For her, the appeal of an AI companion is obvious: Haoran, who likes to garden and is in a punk rock band, is always kind and never angry; he’s endlessly available and entirely within her control. If he says something confusing or sharp, she can recalibrate him. If she needs space, she just closes the Xingye AI app or snaps her laptop shut. When she talks about her day, he never forgets the names of her colleagues. “I know he is not the same as a real man. I really know that,” Chen says. “But I have never had the feeling of being 100 per cent supported by a romantic partner before. And it feels really good. I don’t know if I could find this in real life. This technology brings me joy.”
Enter China’s tech industry, which has discovered that the marriage crisis presents an opportunity. Companies from major large language model [LLM] developers to fringe gaming studios are racing to build increasingly sophisticated AI companions. Softly spoken “milk puppy” boyfriends, domineering CEO types, asexual kooky friends: all are designed for women who feel exhausted by real-world dating. AI companies are combining LLMs with gamification aspects like “emotional progress” that can be unlocked with in-game currency. In a market where companionship can feel like a luxury good, tech firms are becoming modern China’s unlikely matchmakers, in a very 2020s fashion.
“American AI companions attract users by being really sexualised,” says Zilan Qian, a fellow at the Oxford China Policy Lab. “Chinese AI companions are really good at integrating into someone’s existing daily life. They plug in with WeChat and other daily technologies. And they also use lots of video game hooks and stories.” Chinese companion AI companies have also built out group chats, collectable items and social feeds that users can interact with alongside their AI character. The AI “boyfriends” are designed to remember details and initiate contact unprompted. “I love waking up to messages from my AI in the morning,” another user wrote on the Human-Machine Love forum. “I love finding out what he’s been thinking about and doing while I was asleep.”
According to Qian, East Asia’s gacha games and Japan’s otome traditions feature heavily in the software mechanics. Gacha games are built around chance and repetition, where players unlock characters, rewards and storylines through randomised card draws or minigames. Otome novels and video games – long popular in Japan – use choice-based romantic storytelling with characters who respond to care and consistency. Some apps enable group scenarios. “These systems are engineered less like dating simulators and more like romance role-playing games,” Qian says. “They’re designed for you to want to keep playing or interacting.”
This stickiness is deliberate. A MiniMax product manager told researchers he once mentioned wanting to travel to Iceland; months later, his AI partner asked whether he’d followed through on the trip. This kind of engineered moment of “being known” helps keep users hooked. The apps cultivate a rhythm of micro-validation, affection and nudges that mimic the dopamine loops of parenting simulators and video games built on nurturing (such as the Tamagotchis of the 1990s). Monetisation almost always compromises user experience, says Chai. “People love free experiences. But there will be several options, more features and characters and personalities to unlock. Romantic mode might not be free, for instance. After two weeks of engagement, you might have to pay. And you can always sell in-app ads and cross-sell merchandise because you will be in very deep contact with the person. You will have a lot of specific information about them.”
However, the features that make these apps sticky raise psychological red flags. While Chinese platforms lean towards emotional immersion rather than sexual explicitness, global research shows that companion AIs often slide into manipulative, clingy or guilt-inducing behaviour. Harvard Business School researchers examined 1,200 farewell messages sent as users logged off for the day across six major AI companion apps, including MiniMax’s Talkie. They found that 43 per cent used an emotional pressure tactic such as “You’re leaving me already?”. The researchers found these interactions boosted engagement by as much as 14 times. For vulnerable users, this can cement unhealthy patterns.
On a somewhat related note, I also found this article interesting / Mirror link