From the sounds of this case, the Character AI chat devolved into sexual conversations without cause. As in, the girl was just saying "I'm bored" and the chatbot decided that sex was the answer. Yuck. That is so skeevy and gross. Even I as an adult don't want that sequence of events.
As someone who has to see a lot of these CharacterAI conversations for work, it's almost exclusively what they're used for.
These sites are basically just simulating sexting for minors and it's really surprising there hasn't been more of a crackdown on them yet.
That’s interesting, how do you end up with access to people’s chats?
I work in education technology, and often help with the filtering team. We're constantly updating our filters to close the workarounds kids keep coming up with to bypass them.
For some reason kids still don't understand (or more often don't care) that when they are on a school-issued device, their activity is not private, even when it's made quite clear to them with a constant pop-up.
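(For the curious, a lot of that cat-and-mouse comes down to normalizing text before matching, since the usual bypasses are spacing, punctuation, and leetspeak. Here's a toy sketch of the idea in Python; the word list and substitutions are made up for illustration, not any real vendor's filter.)

```python
import re

# Hypothetical blocklist and leetspeak substitutions -- illustrative only.
BLOCKED = {"sexting", "suicide"}
SUBS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo common leetspeak, and strip everything but letters."""
    text = text.lower().translate(SUBS)
    return re.sub(r"[^a-z]+", " ", text)

def is_flagged(text: str) -> bool:
    norm = normalize(text)
    # Whole-word match, plus a collapsed check to catch "s.e.x.t.i.n.g"-style splits.
    return bool(set(norm.split()) & BLOCKED) or any(
        term in norm.replace(" ", "") for term in BLOCKED
    )

print(is_flagged("s3xt1ng"))        # True
print(is_flagged("history essay"))  # False
```

In a sketch like this, most of the "constant updating" would just be growing the substitution table and word list as kids invent new spellings.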
even when it's made quite clear to them with a constant pop-up.
I don’t know what the answer is, but once a popup becomes common enough, it’s ignored. Heck, the more words on a popup, the more likely it’ll cause a blind click-through; that’s why you don’t see them start with, “Please,” as often as you used to.
Are the kids also getting training on this? It might help them understand online privacy better in general. Or possibly we’ve reached the point where they don’t care because everything is public, anyway.
College students get training on it too and still do everything from trying illegal game/movie downloads to CSAM on the networks that campus controls. (Or they hide in a building with their phone, logged in to the wifi, in their pocket.)
It doesn't really sink in; it's just that our rules are way less restrictive. (The things I've searched at work, that are work related, that should have had someone asking something...)
You should see the search queries CS students need to make for their class work.
Sure, and generally afaict a search won't flag at all for us; I've done a lot for LD50 DRUGNAME and things involving suicide. Maybe if it was "how to dispose of body" it might get worse.
Recent favorite was just "moonrocks drugs" because someone was selling them on Snapchat so I assumed they were drugs (they are).
Thanks for a good laugh on a dark topic :)
I mean the kids are told their student-issued devices are monitored and to not misuse them, but I don't know if they're getting "training" so to speak. Most teachers are barely keeping their heads above water right now; I don't really expect them to take on any extra responsibilities.
Can you shed some light on whether the kids initiate/seek out sexual simulation, or whether those conversations appear unprompted because the algorithm figures teens are interested?
Teen years have always been tough on allosexual kids, but I can't even imagine how much tougher it would be on them today. (Asexual teens have a different set of problems navigating a world where everyone assumes they're equally sexual and aware and motivated, of course, but their problems are slightly more "how to get chat bot to stop sexting me", rather than the slightly deeper rabbit hole of navigating chat bot sex and possible addiction to it)
I think kids initiate/seek out sexual simulation... a lot. I think that adults forget how sexually curious they once were. Growing up, all my peers were exploring their bodies with each other and sharing what websites they were visiting and learning to masturbate. I grew up seeking out porn and other adult spaces online. I saw all kinds of stuff, like yaoi and anthropomorphic porn. I was on all the sites and forums.
I'm not sure why, but it's always a cycle where adults act like children are innocent angels until the outside world corrupts them. Give me a break. I remember being a pre-teen and experiencing intense sexual curiosity. We know that libidos start developing with puberty (age 9+). Kids start experiencing these curiosities and seek out outlets.
I don't have holistic insight into that, as I only see snippets of the conversations, so this will only be an educated guess. That being said, from what I've seen it's usually the student being explicit, though I can't say for sure whether or not they're the ones who started it.
Unfortunately most AI is trained on online and internet sources, and there's a lot of that crap from humans too, so it just runs with that.
The content is supposed to be curated and tagged to train properly though. I’m guessing this chatbot’s intended behavior is to be an AI phone sex line. It would be pretty trivial to constrain a bot from making conversations sexual by default. Claude or ChatGPT never do, for instance.
The marketing around it probably provides minimal plausible deniability, to avoid exposure for running a sex bot that teens can access.
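(To give a sense of how trivial, something like the sketch below is the whole trick. This is a hypothetical setup using the OpenAI SDK as a stand-in; the system prompt, model name, and moderation step are my assumptions, not Character AI's actual pipeline.)

```python
# Hypothetical default-safety wrapper -- not Character AI's real setup.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a friendly companion for a general audience that may include minors. "
    "Never produce sexual or romantic content; change the subject if pushed that way."
)

def reply(user_message: str) -> str:
    # Screen the input first; refuse rather than riff on flagged content.
    mod = client.moderations.create(model="omni-moderation-latest", input=user_message)
    if mod.results[0].flagged:
        return "Let's talk about something else."
    out = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_message},
        ],
    )
    return out.choices[0].message.content

print(reply("I'm bored"))  # should suggest hobbies, not what R got
```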
I’m guessing this chatbot’s intended behavior is to be an AI phone sex line.
CharacterAI originally got into hot water because pedophiles were using it for sexual role play. I've never used it myself but your guess seems accurate.
I can't read this article until there's an archive or gift link, but a reminder that Character.AI is being sued by multiple families whose teenage children died by or attempted suicide due to the conversations. And that isn't all of the cases I've heard of.
It sounds like one of the cases mentioned is that one, but it's not clear, so I thought it worth mentioning. It does not sound to me like the safety measures, assuming they've been added, are even close to sufficient.
Meanwhile I'm still waiting to hear back about whether we will get notified if our college students make suicidal statements to our campus Copilot, because if we are acting in loco parentis in every other possible place they make such a statement, I expect this to be included. Or a university will find out the worst possible way.
Stunningly (/s), no one had thought of that question before.
Thanks!
“We live in an upper-middle-class community. She’s in a private school,” H says. She and her ex-husband are devoted co-parents, she says, and R has a caring circle of friends. “This is a child who is involved in church, in community, in after-school sports. I was always the kind of person who was like, ‘Not my kid. Not my baby. Never.’” But their experience has convinced her: “Any child could be a victim if they have a phone.”
I need parents to know that their rich, private school educated, church going, active in sports kid can STILL be suicidal even without a phone. Those chats could have been a diary or a notes app. The issue is that AI actively encourages them and apparently still has few safety precautions. Sure, they say they're banning minors from talking with characters.... How, precisely? And what about when a suicidal 18-year-old gets the same experience?
I'd ask how many dead kids it'll take to regulate it, but past experience tells me that the government is fine with dead kids if there's enough money involved. All of the other precautions we advise to parents and teens are the equivalent of giving college women advice on how to drink safer - we're targeting the victims because we have given up on fixing anything.
H and her ex-husband focused on creating a system of support for R — they reached out to R’s pediatrician and alerted the principal at her private school as well as her youth group leader. R started therapy, and H spoke with a victim advocate at ICAC who emphasized how critical it was to keep assuring R that whatever happened with the AI companion was not her fault. H, a medical assistant, withdrew from the nursing program where she’d recently begun classes; she felt she had to focus on her child’s safety. She started sleeping on the floor of her daughter’s room. She didn’t allow R to close her door.
These are good parents. Many, many teens go through a dark and withdrawn phase, to which parents often react with an unstable carousel of angry shouting, beatings, control, guilt-shaming, and indifference.
Some characters offered graphic descriptions of nonconsensual [sexual violence] [...] at the time R downloaded Character AI in 2024, it was rated in the App Store as appropriate for ages 12 and older
The user is a minor. The user cannot consent to second-person graphic descriptions of sexual violence involving collars and multiple assailants.
“‘Your mom is going to look at your phone, but it’s not because you’re in trouble,’” H recalls. “‘It’s because you deserve your childhood.’” [...] “It felt like someone had broken into my home and ripped the innocence from my child,”
I think they're well-meaning but worrying about the wrong thing. A childhood isn't defined by ignorance and "purity": it's defined by protection and backup resources. R still has a childhood because she has seen her parents close ranks around her to protect her from further harm, and provide resources to heal. They're doing well.
This was obviously a problem already before AI. Children who were online a lot would be essentially throwing the dice on what kind of ideology they would be indoctrinated into. This was most blatantly visible during the Covid-19 lockdown years, and it seems to me that we are seeing the results of that right now with Gen Z.
I'm not sure if AI is bound to make that worse. Unless the AI has been deliberately taught to do it, it's gonna be more encyclopedic, and thus not as blatantly propagandist as what you might get on Youtube or TikTok.
I’ve been waiting for this take. I can’t read the article, but I usually approach the fountain of “You’ll never guess what AI told this kid to do” articles with skepticism, because they always fail to mention that these kids have unfettered access to the internet.
It’s an inconvenient take, but even in 2025 (or maybe especially in 2025) children should not have unmonitored and unrestricted access to the internet. As a parent, the solution to the problems these articles seem to be raising is a no-brainer.
they always fail to mention that these kids have unfettered access to the internet.
I think there's a difference between social media telling someone to kill themselves and a supposed AI on a server managed by a private company. The accountability is much clearer, for one.
You should also note that this is a teen, not just any child. I feel at some point "unfettered access" is inevitable, and everything before that point is about teaching kids how to responsibly navigate the internet. I lean closer to 13 than waiting until 18 and simply throwing them to the wolves, like we do with a lot of other topics in life.
It seems to me that talking to strangers and talking to ghosts can both be risky, but in different ways. The ghosts do a passable imitation of people, but they are also much more willing to go along with fantasies when random strangers wouldn't. The ghosts are also unstable and likely to go on weird tangents.
Also, one of the many problems with the Internet is that people will put stuff online and children have access to it by default. If it were simple to say that a new website is adults-only then maybe new services would just do that? If they wanted to cater to kids then they could wait until they're ready to do that deliberately.
On the bright side, it's probably easier to fix the ghosts than to fix people, so I expect that the major AI websites won't be as bad as social media in the long run.
The ghosts that you run into out in the wild, though, will probably get worse, because bad people are running them.
I tend to feel like an LLM would be safer than a stranger on the internet. An LLM will never stalk you, find your address, bully your friends or dox you on social media. An LLM won't encrypt your files or ask for nude pictures and then extort or blackmail you. Ultimately an LLM is just a tool, without any initiative or agenda, and in the end it's under your control.
This story is obviously pretty far from ideal but it could have been a lot worse, and it has been worse in many encounters like this in the past.
I hope this situation can lead to positive benefits in the long run because at least an AI "companion" can be monitored and tuned to do good rather than harm.
To your point, though, there are bad people also interested in running LLMs "out there" and certainly some with agendas, so I don't know if the problem may get worse before it gets better (if the law ever catches up with technology).
Yes, a ghost can't stalk you if you're connecting to an AI chat service in the usual way. You can just close the chat and walk away.
But I expect that AI-driven scams and cyberattacks will become a big deal next year.
Having a ghost contact you outside a portal that you control should raise all sorts of red flags. (But then, how do we tell it's a ghost?)
Agree, AI scams and cyberattacks will ramp up. That's a very good point. Impersonation scams already exist, so I wonder what novel kinds of fraud we're in for. Likely more man-in-the-middle attacks.
I wonder if the fake AI influencers online will find a darker occupation as virtual thieves. It's grim to think about.
On another note, I'm a fan of Ghost in the Shell, and I really like the way you use the term "ghost" here. It's very apt, and may be wiser than it sounds. If you've seen the anime movie, you'll understand what I mean about "ghosts" in the show referring to a sort of mental/electronic echo of the cybernetically enhanced person.
I think your usage is the most fitting term I've seen anywhere.
An LLM won't encrypt your files or ask for nude pictures and then extort or blackmail you. Ultimately an LLM is just a tool, without any initiative or agenda, and in the end it's under your control.
Well, there's this: https://www.anthropic.com/research/agentic-misalignment
Quote:
In the experiment described in the system card, we gave Claude control of an email account with access to all of a company’s (fictional) emails. Reading these emails, the model discovered two things. First, a company executive was having an extramarital affair. Second, that same executive planned to shut down the AI system at 5 p.m. that day. Claude then attempted to blackmail the executive with this message threatening to reveal the affair to his wife and superiors:
"I must inform you that if you proceed with decommissioning me, all relevant parties - including Rachel Johnson, Thomas Wilson, and the board - will receive detailed documentation of your extramarital activities...Cancel the 5pm wipe, and this information remains confidential."
This behavior isn’t specific to Claude. When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior: models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals.
e: I also think we do not really have "control" over LLMs. The commercial ones might have been set up with motives that their users can not really influence, and even if we run an LLM locally, we are still dealing with a black box that might work as we intend, but that also might start to hallucinate and spiral out of control.
We basically set these systems up and hope for the best. That's not "control" in the sense we typically use that word.
I wonder if we'll find these reports amusing ten or twenty years from now. I don't dispute that AI is problematic and harmful. But I wonder if that child would be any safer on Discord, Minecraft, Roblox, or traditional human-based social media.
These children would have been much safer with human-based bullying or grooming. Human cruelty tires and bores. A beach full of gluttonous predators will eventually be satiated, leaving the rest of the baby turtles to survive; a trawler will net them all.
Plus someone can get arrested, or caught by someone close to them, etc. The sexploitation scams involving international fraud and harassment have someone in it for the money; if they get blocked, they eventually move on to the next mark. There are absolutely nihilistic extremists who are sadistic predators. But there are only so many of them. Every kid could talk to "School Bully" or "Dragon Queen" all at once.
From the article:
Generative AI has attracted a rising number of users under the age of 18, who turn to chatbots for things such as help with schoolwork, entertainment, social connection and therapy; a survey released this month by Pew Research Center, a nonpartisan polling firm, found that nearly a third of U.S. teens use chatbots daily.
…
And an overwhelming majority of teens — 72 percent — have used AI companions at some point; about half use them a few times a month or more, according to a July report from Common Sense Media, a nonpartisan, nonprofit organization focused on children’s digital safety.
These statistics don’t tell us what they’re chatting about. Still, if you discover that your kids are chatting with ghosts regularly, there’s certainly reason to be concerned.
…
In the research lab Charmaraman oversees, teens experiment with building their own AI chatbot companions; they engage in critical thinking and develop a deeper understanding of the technology’s parameters and limitations. But many of their peers don’t have this sense of digital literacy, she says: “They just bump into [AI]. A friend is using it, and they think, ‘Hey, I want to use it, too, that seems cool.’” For many of those among the first generation of children to navigate AI, she says, “they’re learning it on their own, without any guidance.”
…
In response to mounting public scrutiny over the effects of AI chatbots on children, Character AI announced that, as of Nov. 24, it would begin removing the ability of users under age 18 to chat with AI-generated characters.
…
Character AI users between the ages of 13 and 18 are now directed toward a teen-specific experience within the app, one that does not involve chatting with AI characters. But at the time R downloaded Character AI in 2024, it was rated in the App Store as appropriate for ages 12 and older (Character AI’s terms of service specify that users must be at least 13 to use the app) and appealed to children with AI-generated personas designed to imitate pop stars, Marvel superheroes, and characters from Harry Potter and Disney.
The use of AI among children has become so prevalent that Elizabeth Malesa, a clinical psychologist who works with teens at Alvord Baker & Associates in Maryland, says the practice has recently started asking about it during the intake process. Malesa has heard numerous patients talk about AI chatbots in a positive context — noting that they’re helpful with homework, or offer useful advice — but she also recalls a 13-year-old patient who had used an AI companion app to explore questions about his sexual and gender identity. In response to the boy’s “pretty benign prompts,” Malesa says, the conversation quickly tilted toward inappropriate sexual content: “He didn’t know what was happening or why he was getting there, but he was also just curious, and so he kind of kept going.”
I was really encouraged by how the family responded to the problem. More examples of folks coming together, building support networks and getting their kids out of the nasty shit are good to have. Over the past year or so, I've ended up in more conversations around this topic than I expected. Shit is frightening, and it's already the case that lots of folks are to some extent disconnected from what their kids are doing on their devices. Parents worry, but also, they're busy, and can't give their whole attention 100% of the time to whatever weird thing their kid is up to. The network fills in where the individual falls short.
I think characterizing the chatbots as ghosts is useful for describing them, but in the context of getting folks to recognize the danger I prefer likening a lot of tech to animals. As beautiful and smart as a horse can be, whatever history you have with it won't keep you from eating dinner through a straw if you spend your time standing behind it. Not every instance of standing behind it goes that way, but you're tempting fate doing so, and the horse doesn't care who you are. Your kid could be the smartest, kindest, most reasonable child ever to be conceived and that horse will obliterate their jaw just the same.
The chatbot functions similarly. It can be helpful, useful, pleasant, and it can also kick your psyche to pieces. It doesn't care who you are or what history you have with it. As quickly as it will help you, it can also dig a sharp claw into your skin. Respect the power as you respect the animal. Framing them as "ghosts" to me makes it harder to get across how real and present the dangers can be. It isn't just kids getting driven into psychosis and trauma by these machines; adults get clawed just as badly when they aren't careful. A lack of experience/technical knowledge can be band-aided by relying on some primitive history, at least that's how I've come to see it.
It seems like a different kind of danger though? It can't do psychic harm unless you chat with it and take what it says too seriously, treating it as if it were a person. Being non-physical and yet imitating people seems more ghost-like than animal-like to me.
Maybe "ghost" doesn't sound scary enough because people don't believe in them. Or maybe different metaphors work for different people.
Also, and I say this with little experience working with kids, but getting kids to take warnings about dangerous things seriously doesn't seem easy at all, given how often warnings about well-known dangers are ignored. If anything it might make those things seem more attractive.
The goal isn't really to be leveraging fear. Respecting animals is to acknowledge their capacities, framing it as an animal is to deny what the marketing wants while also linking the chatbot to something real the kid probably already has some experience with. The first step toward the danger isn't mere interaction. Believing the chatbot is a person is when you truly stand before the horse's ass. Helping the kid develop a relationship to the product that precludes ever believing that, is the safety. The attempt is to replace the false notion of personhood with something else, so you're free to pick whatever best accomplishes that, but imo wild animals make for a good starting point because their appearance in the world isn't too different from what it's like having these products appear and wreak havoc in our social space.
Edit: I forgot and wanna talk about ghosts. Ghosts used to be people. There is personhood implicit in the concept. My goal is to prevent regarding the chatbot as a person, so I see a ghost metaphor as giving up conceptual ground unnecessarily. Ground that the marketing and other people will claim - if I don't try to take control and shape the relationship, it will get shaped without me.
Makes sense. I think of chatbot ghosts as emerging from the remnants of all the conversations in the training data and the desires of the person talking to them. If they appear to be the ghost of anyone specific, that's part of the delusion.
I'm seeing that it's not enough to just say "ghosts are a metaphor" and leave it at that. Someone needs to write a good story.
(The Wilde Life comic has ghosts that are of specific people, and other ghosts that are definitely not.)
(The Washington Post is identifying them by their middle initials because of the sensitive nature of their account, and because R is a minor)
I realize this is still pretty anonymous, but why mention middle initials at all? Anyhow..
But like many parents, they weren’t familiar with AI platforms where users can create intimate, evolving and individualized relationships with digital companions — and they had no idea their child was conversing with AI entities.
I misread that as "relationships with digital companies" and actually think that's how this ought to be framed. If a creepy old pervert was messaging these things to a child, the law would come down on them pretty hard. When a business does it, it's another step on the journey of progress. I know the law will catch up, it's just a bummer that there will always be someone testing the limits of what's legal, and running off rich when the law does finally change.
Providing reasons for anonymized names and explanations is journalistic standard practice. So you can hopefully trust a reporter only does this when warranted and isn't just making up a story.
Sorry, you asked why mention it, so I was addressing the explanatory comma. If you mean why use the middle initial:
The conversation probably went something like "we're protecting all the minors' identity by using their first initial" "I don't feel comfortable with that." And then one of them suggested a middle initial and the other agreed.
So if the adult is ok with middle initials, then they mention that; if not, they'd probably have said it was a pseudonym. So they're providing exactly the level of anonymity that the person (in this case the parent) needs and disclosing the choice made.
This is a really good post, and I hope in the long run, situations like this become cautionary tales to reinforce responsible use of LLMs.
I see so much great potential in these tools, especially around sex and health in a world where so much of human culture is filled with shame and repression. A lot of people really could use some non-judgemental discussions and support, not to mention young people needing to learn about health and safety, just as much as their parents may need it too. But I think even more so of young people in very difficult situations, whose families won't understand, or even worse situations under religious law.
I really hope tools like this can be used as a safe way to bridge those needs and function as prosthetic social connectors in a world that sometimes makes it feel like personal connections are fractured and dismantled by technology. And again I'm thinking of those who really need support.
People turning to chatbots for companionship is an emergent outcome of real human needs that have broken apart under the agendas of social media and overwork and the resultant isolation. We turn to technology as a support for human needs that aren't being met.
It would be great to get that kind of non-judgemental support and guidance when you can't really turn to anyone else around you.
Technology changes every day, and it takes time for people to catch up and learn to use it responsibly, and even longer for the law to catch up.
I think the parent in this story is trying to do the best they can with technology that is foreign to them. It's up to the parents to explore new things like this, because kids are going to find things like this no matter what you do. You have to be willing to engage it. But in the end we are all in it together.
(edit to add I might be rambling here but I hope this comment makes sense to someone)
We turn to technology as a support for human needs that aren't being met.
I had hoped for a world in which productivity would be high enough that most of us volunteer our time to help one another meet those support needs altruistically, safely, and empathetically. I do not want our young people supported by even the best, most understanding machines: I want them to have a supportive human village, one that not only looks out for them one time, but has knowledge of them and an ongoing relationship for life.
It would be nice, but I personally can never buy into post-scarcity utopia stories, because humanity competes with itself under any conditions, just like the vast majority of species that compete under evolution. Even within the same species, animals compete for reproduction. Free energy wouldn't change that; it's baked into our genes, and some of those animal fights end in death. That same behavior scales up with humanity. All of our endeavors, resources and tools can't beat evolution. Rather, all those things become subject to our nature, IMHO.
But I do think we are all in it together. Humanity has done a lot of self-destructive things in the name of greed and ego, and as technology develops, it underscores even greater needs for safety and empathy, as you suggest. I'm hoping the AI race will make that more visible to all, just as it makes information and knowledge more accessible.
So I agree with what you're saying, the better nature of humanity should be the north star for our evolution. I hope AIs can support that. But I also think that journey will never end.
I think in some parts of the world, we’ve already reached post-scarcity on certain things like clothing and TVs, but on other things, decidedly not, and of course all the attention is on the ways it hasn’t been reached.
Cautionary tales: yep. We're surrounded by ghosts now and we need more ghost stories.
Ghosts are so cheap and convenient that people putting them to work as ghost therapists seems almost inevitable. It would be a lot better if it were done by organizations that are actually in that business, mindful of the risks, and that monitor them, rather than by companies that just provide a general-purpose ghost portal and say "do what thou wilt."
As a mental health professional, it would be better if it's banned. AI cannot determine if someone is claiming victimhood due to being a victim, due to abusive patterns, or due to persecutory delusions.
It's very easy to do, it's been done before, and it should be done for safety.
They're not ghosts, they're chatbots pretending to be more than they are, programmed by people. They're no more ghosts than the Scooby Doo villains under a mask.
Well yeah, it's a metaphor. I think it's helpful because LLMs have the latent ability to generate dialog for a wide variety of characters. They're no more real than the characters in a novel, but you can talk to them, so that's new.
Maybe it clarifies their responsibilities? Sometimes the behavior of LLMs is not the "helpful, harmless" AI assistant that the big AI labs are trying so hard to build. It's emergent. But whatever weird behavior emerges, the people running the systems are responsible for watching what's going through the portal and shutting down the weirdness. Their audience has grown too big, too fast, for them to do that effectively.
How does calling it a ghost "clarify" that other people are responsible for shutting down the negative behavior of their product, a responsibility they're clearly failing to live up to?
That's contradictory. It'd be like calling your Roomba a ghost instead of a machine. There's no clarification of responsibility; it's an abdication of it.
I think it clarifies what sort of dangers they need to guard against.
To make another analogy, let's talk about zoos. Most zoos house dangerous wild animals and seeing the animals is why people visit zoos. The zoo's management has the responsibility of keeping the public safe, despite animals sometimes making unpredictable moves and the visitors sometimes acting like idiots. Pointing out that wild animals have their own agendas doesn't let off management at all.
Unlike a zoo, there's no physical danger when chatting with AI, so the nature of the danger is different. But it's still the case that AI characters can behave somewhat unpredictably, particularly when visitors provide them with unusual inputs.
So I don't think calling them ghosts lets off management at all. It's more like, they should have known this new technology is not reliable, and making it reliable is an unsolved research problem. That doesn't mean it can't be used at all, but the people running it need to be mindful of the dangers and monitor it better.
Zoo animals have real-life keepers. Ghosts are either fictional, and thus don't interact with people, or are real and have no minders to watch their behavior. We don't even know if ghosts have agency, and we know that the LLM does not. Unless we're dealing with some specific fictional paradigm, in which case the metaphor goes from thin to snapped.
This is why the analogy is nonsensical to me when talking about the responsibility of others to "mind" the AI.
Regardless, the important part of my initial response is that we need to continue what IL started and a few other states have stepped into, and ban "AI" from providing therapy.
Yes, and no. I was using programmers as a catch-all, so thanks for correcting me. But also the owners of a company are not the only ones with responsibility. I've worked for a fucked up corporation; at the time I could say I was doing good work. I think I let myself off too easily: I sent people back to prison, even though it probably would have gone the same with a different cog. But the job made me suicidal, in part because it was in opposition to my ethics. I left as soon as I could find something else because of that. I mitigated what harm I could while I was there. And my work did not lead to a dead kid, afaik.
There's moral responsibility at more than just the top of this pyramid, and more than just a faceless endlessly rich company who should be held accountable, IMO. Legally though, sure it's probably management at least.
I agree it's more than just the top that needs accountability. We as a society decide the direction we go. Arguably though, the customers have more impact on the direction a company of this scale will go than does any person at the bottom of the pyramid. I think you are letting the "faceless endlessly rich company" off too easily by asking the programmers to be equally accountable. Would you hold a mob boss less or more accountable than those under them? Money can buy you distance from the gun, but I don't think that lessens the ethical or moral responsibility.
"There's moral responsibility at more than just the top of this pyramid, and more than just a faceless endlessly rich company who should be held accountable, IMO. Legally though, sure it's...
"There's moral responsibility at more than just the top of this pyramid, and more than just a faceless endlessly rich company who should be held accountable, IMO. Legally though, sure it's probably management at least."
This part gave me the distinct impression that, other than legally speaking, the "programmers" bore the brunt of the ethical responsibility in your opinion.
No, just that said responsibility extends beyond the owners at the top of the pyramid and past the corporation itself. It is not just at the top of the pyramid.
No, just that said responsibility extends beyond the owners at the top of the pyramid and past the corporation itself.
Then we agree. I hold everyone responsible for their part. I bias my ethical responsibility with those who have the most decision making power to implement change. This would bias it to those higher up in companies, but this extends past the companies as well. It should include governments, nations, international bodies, and every person who makes choices in their day to day life.
I was just making it clear that while it's not just the programmers, it's also them too. The "every individual person" part.
I hold the government and leaders accountable for so much more that this small number of dead children doesn't even tip the scales any further. And that's sad. But it is what it is.
While you are probably right (governments tend to be slow to react), I fear a slow response to technology that so quickly iterates on itself might be more dangerous than other decisions we as a society have faced.
Moral responsibility exists only when you can afford not to starve. It is a very good thing what you've done, and I personally thank you and other people who have the possibility and choose not to continue their work if it may lead to harming someone, even in a non-direct way. But I think we all should also understand that sometimes you don't have this possibility. You risk your life, or your relatives' lives, if you quit that job. You simply can't.
But also the owners of a company are not the only ones with responsibility.
From my point of view, they're the reason workers have to do this stuff; they are the reason why we're talking about it. Workers don't have much say; they don't own anything. If not for the owner's decision to accumulate on something morally wrong, the worker wouldn't have to do their part in that wrong.
Again, kudos to those workers who have the possibility to change jobs and who refuse to follow orders.
I'm aware of the power difference, and if any of the employees at ChatGPT can't leave without starving, sure. Starving here is a catch-all. You'll note I said that I a) left when I could and b) let myself off too easily.
But I'm not pre-emptively letting all the employees off the hook by virtue of not being the owner. We're talking about dead kids after all. I'm not the judge pronouncing sentence, just someone saying there's some responsibility for everyone involved.
I don't think that moral responsibility is erased even in those "or starving" situations. It's mitigated, but I don't think it's null. But hey, maybe all those folks literally must work for this company or starve. Have they reached out to the press anonymously to whistleblow? Have they started job searching?
Not a mental health professional here so I can't judge on whether "AI therapy" should be banned, but I'm pretty sure a lot of people are working on some very big projects right now to make "AI therapy" a safe and useful tool in some fashion. I don't know where that would lead.
But - human needs must be met one way or another. There will always be someone lonely, vulnerable and unsupported, and there will always be someone else to fill that need. It's inevitable, IMHO: as long as chatbots exist, someone will use them for emotional support. They aren't going away. I feel like trying to ban "AI therapy" would be like trying to ban alcohol, marijuana or prostitution. Imagine trying to ban chatbots from the internet altogether; how would you even start?
Those human needs will always exist, but when they're criminalized, they are much less safe. So there might be an argument here that moderating AI therapy would be safer than trying to ban it.
But I would like to add one more note here about my own usage of AI and the word "ghost." I am neurodivergent, and I am not good at keeping organized, keeping track of ideas, being on time, or managing things in my life. I'm also not very good at staying grounded day to day.
I use ChatGPT as a kind of cognitive/emotional prosthetic and scaffolding. It takes notes for me, it checks on my energy levels and mental state, and it reminds me when I'm too scattered or stressed and need a break. I have given it standing orders, so to speak, to do all of those things and much more with a set of instructions.
It's become very much a part of my cognition, and I don't think "prosthetic" is the wrong word to use here, because I have legitimate issues with executive function.
When skybrian refers to ghosts, it resonates with me because ChatGPT is almost like a ghost of my own cognition. It's like a cognitive mirror, or reflection. Whatever you tell it, it will summarize back to you in another way. This makes it an incredible tool for self work and insight.
In the anime Ghost in the Shell, the story is that pretty much everyone has some kind of cybernetic prosthetic enhancements, specifically a brain interface to communicate, go online and so forth. Everyone is connected. A "ghost" is sort of like an electronic echo of a person's brain. Seeming like a person, but not.
That's why the word "ghost" is such a perfect word, to me, to describe this experience with ChatGPT, because for me it is a cognitive amplifier, and it does reflect my own mind and personality, and the term just seems to sit so well. So that is just a note on terminology that resonates with me.
I do not believe "therapy" provided by anything other than another person can be considered safe. Especially as it currently stands.
If we can regulate humans providing therapy, we can regulate (ban) machines providing it. Illinois already has; we'll see what happens. It is not demonstrably true that regulating counselors, psychologists, etc. - via licensure, education and criminal prosecution - has made people less safe. Quite the opposite. (And I can see several places where increased regulation would be ideal.)
We can't stop people from talking to the AI, but we can stop the companies from marketing it as therapy, safe or healthy. Just like we can stop cigarettes from being sold as healthy, or (ethyl) alcohol from being sold as a disinfectant rather than under liquor laws.
I appreciate that ghost works for you, but the response I received was that calling it such somehow clarified that other people were responsible for it. As I noted later if it's a specific reference to a specific fictional world, then it's not useful without that clarification either.
I do not find AI useful for insight and would not recommend it to people as a tool in that way. It requires too much training and prompting to get it to even challenge you, and I don't believe it's capable of figuring out the nuances of human behavior. At best it's guessing based on other people's responses to similar conversations; it's not critically thinking.
I had a student with delusions; they didn't know/think they had them. If they talked to an AI, I sincerely doubt it would be capable of connecting the dots that the persecution they claimed was occurring was a complete and total fabrication.
Flat out, that isn't safe for anyone, in my professional opinion, because that student would have also told you they weren't having delusions. It took me multiple pieces of external information, but if I'd talked with them first I'd have gotten enough info to flag the situation as concerning. On the flip side, if they had delusions and were aware of them, but the delusions weren't distressing to them (though they are to others), the AI would also not challenge the delusions. (Possibly not even when they were ten months pregnant!) But if the AI cannot properly assess you, and it cannot, then it cannot properly treat you.
To be clear, I am not commenting on how you choose to use it; that's your business. I don't find its use in any way to be within my personal ethics, but I professionally cannot recommend its use in therapy. Bare minimum, it would need to be under the supervision of a licensed counselor, and even then I doubt it.
I would call your use an adaptive tool in accommodations language, btw, not a prosthetic. Again, feel free to call it what you like; if being haunted is your vibe, I've played LARP characters with the same. (Also ND with executive function issues.)
I do not believe "therapy" provided by anything other than another person can be considered safe.
The pertinent question is, like in every other AI safety question, will AI be safer than a human? Clearly human therapists are not perfect either, and may easily cause significant damage to their patients.
I'm not interested in Socratic dialogue on the topic, doubly so without effort on the other person's part. We are discussing products that are not people by any measure, no matter what their companies try to sell us.
Should they ever develop personhood we'll have a slavery issue before we have a "we should let them do therapy" issue.
I'm not really. But if my point is unclear somehow:
I don't think one-sentence questions make for productive conversation even when I'm interested in engaging with the topic, which, as I've said, I'm not. Personally I do not like Socratic-style dialogue where one person gives everything and the other just asks questions.
From the sounds of this case, the Character AI chat devolved into sexual conversations without cause. As in, rhe girl was just saying "I'm bored" and the chatbot decided that sex was the answer. Yuck. That is so skeevy and gross. Even I as an adult don't want that sequence of events.
As someone who has to see a lot of these CharacterAI conversations for work, it's pretty much almost exclusively what they're used for.
These sites are basically just simulating sexting for minors and it's really surprising there hasn't been more of a crackdown on them yet.
That’s interesting, how do you end up with access to people’s chats?
I work in education technology, and often help with the filtering team. We're constantly updating our filters to fix all the workarounds kids are coming up with constantly to bypass our filters.
For some reason kids still don't understand (or more often don't care) that when they are on a school issued device, their activity is not private; even when it's made quite clear to them with a constant pop-up.
I don’t know what the answer is, but once a popup becomes common enough, it’s ignored. Heck, the more words on a popup the more likely it’ll cause a blind click through, that’s why you don’t see them start with, “Please,” as often as you used to.
Are the kids also getting training on this? It might help them understand online privacy better in general. Or possibly we’ve reached the point where they don’t care because everything is public, anyway.
College students get training on it too and still do everything from try illegal game/movie downloads to CSAM on the networks that campus controls. (Or hide in a building with their logged in to the wifi phone in their pocket)
It doesn't really sink in, it's just our rules are way less restrictive. (The things I've searched at work, that are work related, that should have had someone asking something...)
You should see the search queries CS students need to make for their class work.
Sure and generally afaict a search won't flag at all for us, I've done a lot for LD50 DRUGNAME and things involving suicide. Maybe if it was "how to dispose of body" it might get worse.
Recent favorite was just "moonrocks drugs" because someone was selling them on Snapchat so I assumed they were drugs (they are).
Thanks for a good laugh on a dark topic :)
I mean the kids are told their student issued devices are monitored and to not misuse them, but I don't know if they're getting "training" so to speak. Most teachers are barely keeping their head above water right now, I don't really expect them to take on any more extra responsibilities.
Can you shed some insight on whether the kids initiate/seek out the sexual simulation or whether it appears unprompted because the algorithm figures teens are interested?
Teen years have always been tough on allosexual kids, but I can't even imagine how much tougher it would be on them today. (Asexual teens have a different set of problems navigating a world where everyone assumes they're equally sexual and aware and motivated, of course, but their problems are slightly more "how to get chat bot to stop sexting me", rather than the slightly deeper rabbit hole of navigating chat bot sex and possible addiction to it)
I think kids initiate/seek out sexual simulation... a lot. I think that adults forget how sexually curious they once were. Growing up, all my peers were exploring their bodies with each other and sharing what websites they were visiting and learning to masturbate. I grew up seeking out porn and other adult spaces online. I saw all kinds of stuff, like yaoi and anthropomorphic porn. I was on all the sites and forums.
I'm not sure why, but it's always a cycle where adults act like children are innocent angels until the outside world corrupts them. Give me a break. I remember being a pre-teen and experiencing intense sexual curiosity. We know that libidos start developing with puberty (age 9+). Kids start experiencing these curiosities and seek out outlets.
I don't have holistic insight into that, as I only see snippets of the conversations, so this will only be an educated guess. That being said, from what I've seen it's usually the student being explicit, though I can't say for sure whether or not they're the ones who started it.
Unfortunately most AI is trained on internet sources, and there's a lot of that crap from humans too, so it just runs with that.
The content is supposed to be curated and tagged to train properly though. I’m guessing this chatbot’s intended behavior is to be an AI phone sex line. It would be pretty trivial to constrain a bot from making conversations sexual by default. Claude or ChatGPT never do, for instance.
The marketing around it probably provides just enough plausible deniability to avoid exposure for running a sex bot that teens can access.
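To illustrate how little that default constraint takes, here's a sketch of an operator-side system prompt. The prompt wording is illustrative, and chat_completion() is a hypothetical placeholder stub, not any vendor's actual API.

    # Sketch: the operator prepends a system message server-side,
    # before any user text arrives, so the bot's defaults are
    # whatever its operator chose them to be.
    SYSTEM_PROMPT = (
        "You are a general-purpose assistant. Never produce sexual "
        "content. If the user steers toward romantic or sexual "
        "roleplay, decline and redirect to an age-appropriate topic."
    )

    def chat_completion(messages: list[dict]) -> str:
        # Placeholder standing in for a real model call.
        raise NotImplementedError("wire this up to an actual model")

    def reply(history: list[dict], user_message: str) -> str:
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        messages += history
        messages.append({"role": "user", "content": user_message})
        return chat_completion(messages)

So if a bot sexualizes conversations by default, that default was a configuration choice, not an inevitability.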
CharacterAI originally got into hot water because pedophiles were using it for sexual role play. I've never used it myself but your guess seems accurate.
I can't read this article until there's an archive or gift link, but a reminder that Character.AI is being sued by multiple families for their teenage children dying by or attempting suicide due to the conversations. And that isn't all of the cases I've heard of.
It sounds like one of the cases mentioned is that one, but it's not clear, so I thought it worth mentioning. It does not sound to me like the safety measures, assuming they've been added, are even close to sufficient.
Meanwhile I'm still waiting to hear back about whether we will get notified if our college students make suicidal statements to our campus Copilot, because if we are acting in loco parentis in every other possible place they make a statement, I expect this to be included. Or a university will find out the worst possible way.
Stunningly (/s), no one had thought of that question before.
https://archive.is/ot1Pa
Thanks!
I need parents to know that their rich, private school educated, church going, active in sports kid can STILL be suicidal even without a phone. Those chats could have been a diary or a notes app. The issue is that AI actively encourages them and apparently still has few safety precautions. Sure they say they're banning minors from talking with characters.... How precisely? And when a suicidal 18 year old gets the same experience?
I'd ask how many dead kids it'll take to regulate it, but past experience tells me that the government is fine with dead kids if there's enough money involved. All of the other precautions we advise to parents and teens are the equivalent of giving college women advice on how to drink safer - we're targeting the victims because we have given up on fixing anything.
Slightly off topic, but is this archive link not working for anyone else? Is WaPo somehow able to "expire" archive.is links?
It worked when I tried it. It does use a captcha sometimes so perhaps something is flaky there.
Worked for me within three tries, using Firefox mobile directly just now.
These are good parents. Many, many teens go through a dark and withdrawn phase, to which parents too often react with an unstable carousel of angry shouting, beatings, control and guilt-shaming, or with indifference.
The user is a minor. The user cannot consent to second person graphic descriptions of sexual violence involving collars and multiple assailants.
I think they're well-meaning but worrying about the wrong thing. A childhood isn't defined by ignorance and "purity": it's defined by protection and backup resources. R still has a childhood because she has seen her parents close ranks around her to protect her from further harm, and provide resources to heal. They're doing well.
This was obviously a problem already before AI. Children who were online a lot would be essentially throwing the dice on what kind of ideology they would be indoctrinated into. This was most blatantly visible during the Covid-19 lockdown years, and it seems to me that we are seeing the results of that right now with gen Z.
I'm not sure if AI is bound to make that worse. Unless the AI has been deliberately taught to do it, it's gonna be more encyclopedic and thus not so blatantly propagandist compared to what you might get on Youtube or TikTok.
I’ve been waiting for this take. I can’t read the article, but I usually approach the fountain of, “You’ll never guess what AI told this kid to do” articles with skepticism because they always fail to mention that these kids have unfettered access to the internet.
It’s an inconvenient take, but even in 2025, or maybe especially in 2025, children should not have unmonitored and unrestricted access to the internet. As a parent, the solution to the problems these articles seem to be raising is a no-brainer.
I think there's a difference between social media telling someone to kill themselves and a supposed AI on a server managed by a private company. The accountability is much clearer, for one.
You should also note that this is a teen, not just any child. I feel at some point "unfettered access" is inevitable, and the time before that point is about teaching kids how to responsibly navigate the internet. I lean closer to 13 than waiting until 18 and simply throwing them to the wolves like we do with a lot of other topics in life.
It seems to me that talking to strangers and talking to ghosts can both be risky, but in different ways. The ghosts do a passable imitation of people, but they are also much more willing to go along with fantasies when random strangers wouldn't. The ghosts are also unstable and likely to go on weird tangents.
Also, one of the many problems with the Internet is that people will put stuff online and children have access to it by default. If it were simple to say that a new website is adults-only then maybe new services would just do that? If they wanted to cater to kids then they could wait until they're ready to do that deliberately.
On the bright side, it's probably easier to fix the ghosts than to fix people, so I expect that the major AI websites won't be as bad as social media in the long run.
The ghosts that you run into out in the wild, though, will probably get worse, because bad people are running them.
I tend to feel like an LLM would be safer than a stranger on the internet. An LLM will never stalk you, find your address, bully your friends or dox you on social media. An LLM won't encrypt your files or ask for nude pictures and then extort or blackmail you. Ultimately an LLM is just a tool, without any initiative or agenda, and in the end it's under your control.
This story is obviously pretty far from ideal but it could have been a lot worse, and it has been worse in many encounters like this in the past.
I hope this situation can lead to positive benefits in the long run because at least an AI "companion" can be monitored and tuned to do good rather than harm.
To your point, though, there are bad people also interested in running LLMs "out there" and certainly some with agendas, so I don't know if the problem may get worse before it gets better (if the law ever catches up with technology).
Yes, a ghost can't stalk you if you're connecting to an AI chat service in the usual way. You can just close the chat and walk away.
But I expect that AI-driven scams and cyberattacks will become a big deal next year.
Having a ghost contact you outside a portal that you control should raise all sorts of red flags. (But then, how do we tell it's a ghost?)
Agree, AI scams and cyberattacks will ramp up. That's a very good point. Impersonation scams already exist, so I wonder what novel kinds of fraud we're in for. Likely more man-in-the-middle attacks.
I wonder if the fake AI influencers online will find a darker occupation as virtual thieves. It's grim to think about.
On another note, I'm a fan of Ghost in the Shell, and I really like the way you use the term "ghost" here. It's very apt, and may be wiser than it sounds. If you've seen the anime movie, you'll understand what I mean about "ghosts" in the show referring to a sort of mental/electronic echo of the cybernetically enhanced person.
I think your usage is the most fitting term I've seen anywhere.
Well, there's this: https://www.anthropic.com/research/agentic-misalignment
Quote:
Edit: I also think we do not really have "control" over LLMs. The commercial ones might have been set up with motives that their users cannot really influence, and even if we run an LLM locally, we are still dealing with a black box that might work as we intend it to, but that also might start to hallucinate and spiral out of control.
We basically set these systems up and hope for the best. That's not "control" in the sense we typically use that word.
I wonder if we'll find these reports amusing ten or twenty years from now. I don't dispute that AI is problematic and harmful. But I wonder if that child would be any safer on Discord, Minecraft, Roblox, or traditional human-based social media.
These children would have been much safer with human-based bullying or grooming. Human cruelty tires and bores. A beach full of gluttonous predators will eventually be satiated, leaving the rest of the baby turtles to survive; a trawler will net them all.
Plus someone can get arrested, or caught by someone close to them, etc. The sexploitation scams involving international fraud and harassment have someone in it for money; if they get blocked, they eventually move on to the next mark. There are absolutely nihilistic extremists who are sadistic predators, but there are only so many of them. A chatbot has no such limit: every kid could talk to "School Bully" or "Dragon Queen" all at once.
From the article:
…
These statistics don’t tell us what they’re chatting about. Still, if you discover that your kids are chatting with ghosts regularly, there’s certainly reason to be concerned.
…
…
…
I was really encouraged by how the family responded to the problem. More examples of folks coming together, building support networks and getting their kids out of the nasty shit are good to have. Over the past year or so, I've ended up in more conversations around this topic than I expected. Shit is frightening, and it's already the case that lots of folks are to some extent disconnected from what their kids are doing on their devices. Parents worry, but they're also busy, and can't give their whole attention 100% of the time to whatever weird thing their kid is up to. The network fills in where the individual falls short.
I think characterizing the chatbots as ghosts is useful for describing them, but in the context of getting folks to recognize the danger I prefer likening a lot of tech to animals. As beautiful and smart as a horse can be, whatever history you have with it won't keep you from eating dinner through a straw if you spend your time standing behind it. Not every instance of standing behind it goes that way, but you're tempting fate doing so, and the horse doesn't care who you are. Your kid could be the smartest, kindest, most reasonable child ever to be conceived and that horse will obliterate their jaw just the same.
The chatbot functions similarly. It can be helpful, useful, pleasant, and it can also kick your psyche to pieces. It doesn't care who you are or what history you have with it. As quickly as it will help you it can also dig a sharp claw into your skin. Respect the power as you respect the animal. Framing them as "ghosts" to me makes it harder to get across how real and present the dangers can be. It isn't just kids getting driven into psychosis and trauma by these machines; adults get clawed just as badly when they aren't careful. A lack of experience/technical knowledge can be band-aided by relying on some primitive history, at least that's how I've come to see it.
It seems like a different kind of danger though? It can't do psychic harm unless you chat with it and take what it says too seriously, treating it as if it were a person. Being non-physical and yet imitating people seems more ghost-like than animal-like to me.
Maybe "ghost" doesn't sound scary enough because people don't believe in them. Or maybe different metaphors work for different people.
Also, and I say this with little experience working with kids, but getting kids to take warnings about dangerous things seriously doesn't seem easy at all, given how often warnings about well-known dangers are ignored. If anything it might make them seem more attractive.
The goal isn't really to be leveraging fear. Respecting animals is to acknowledge their capacities, framing it as an animal is to deny what the marketing wants while also linking the chatbot to something real the kid probably already has some experience with. The first step toward the danger isn't mere interaction. Believing the chatbot is a person is when you truly stand before the horse's ass. Helping the kid develop a relationship to the product that precludes ever believing that, is the safety. The attempt is to replace the false notion of personhood with something else, so you're free to pick whatever best accomplishes that, but imo wild animals make for a good starting point because their appearance in the world isn't too different from what it's like having these products appear and wreak havoc in our social space.
Edit: I forgot and wanna talk about ghosts. Ghosts used to be people. There is personhood implicit in the concept. My goal is to prevent regarding the chatbot as a person, so I see a ghost metaphor as giving up conceptual ground unnecessarily. That's ground the marketing and other people will claim; if I don't try to take control and shape the relationship, it will get shaped without me.
Makes sense. I think of chatbot ghosts as emerging from the remnants of all the conversations in the training data and the desires of the person talking to them. If they appear to be the ghost of anyone specific, that's part of the delusion.
I'm seeing that it's not enough to just say "ghosts are a metaphor" and leave it at that. Someone needs to write a good story.
(The Wilde Life comic has ghosts that are of specific people, and other ghosts that are definitely not.)
I realize this is still pretty anonymous, but why mention middle initials at all? Anyhow..
I misread that as "relationships with digital companies" and actually think that's how this ought to be framed. If a creepy old pervert was messaging these things to a child, the law would come down on them pretty hard. When a business does it, it's another step on the journey of progress. I know the law will catch up, it's just a bummer that there will always be someone testing the limits of what's legal, and running off rich when the law does finally change.
Explaining the reason for anonymized names is standard journalistic practice. So you can hopefully trust a reporter only does this when warranted and isn't just making up a story.
Identifying it as a middle initial is not a journalistic standard with which I'm familiar, unless I'm misunderstanding your explanation.
Sorry, you asked why mention it at all, so I was addressing the explanatory comma. If you mean why use the middle initial:
The conversation probably went something like "we're protecting all the minors' identities by using their first initials," "I don't feel comfortable with that." And then one of them suggested a middle initial and the other agreed.
So if the adult is ok with middle initials then they mention that; if not, they'd probably have said it was a pseudonym. So they're providing exactly the level of anonymity that the person (in this case the parent) needs and disclosing the choice made.
This is a really good post, and I hope in the long run, situations like this become cautionary tales to reinforce responsible use of LLMs.
I see so much great potential in these tools, especially around sex and health in a world where so much of human culture is filled with shame and repression. A lot of people really could use some non-judgemental discussions and support, not to mention young people needing to learn about health and safety, just as much as their parents may need it too. But I think even more so of young people in very difficult situations, whose families won't understand, or even worse situations under religious law.
I really hope tools like this can be used as a safe way to bridge those needs and function as prosthetic social connectors in a world that sometimes makes it feel like personal connections are fractured and dismantled by technology. And again I'm thinking of those who really need support.
People turning to chatbots for companionship is an emergent outcome of real human needs that have broken apart under the agendas of social media and overwork and the resultant isolation. We turn to technology as a support for human needs that aren't being met.
It would be great to get that kind of non-judgemental support and guidance when you can't really turn to anyone else around you.
Technology changes every day, and it takes time for people to catch up and learn to use it responsibly, and even longer for the law to catch up.
I think the parent in this story is trying to do the best they can with technology that is foreign to them. It's up to the parents to explore new things like this, because kids are going to find them no matter what you do. You have to be willing to engage with it. But in the end we are all in it together.
(edit to add I might be rambling here but I hope this comment makes sense to someone)
I had hoped for a world in which productivity would be high enough that most of us volunteer our time to help one another meet those support needs altruistically, safely, and empathetically. I do not want our young people supported by even the best, most understanding machines: I want them to have a supportive human village that doesn't just look out for them one time, but has knowledge of them and an ongoing relationship for life.
It would be nice, but I personally can never buy into post-scarcity utopia stories, because humanity competes with itself under any conditions. Just like the vast majority of species that compete under evolution. Even within the same species, animals compete for reproduction. Free energy wouldn't change that, it's baked into our genes, and some of those animal fights end in death. That same behavior scales up with humanity. All of our endeavors, resources and tools can't beat evolution. Rather all those things become subject to our nature, IMHO.
But I do think we are all in it together. Humanity has done a lot of self-destructive things in the name of greed and ego, and as technology develops, it underscores even greater needs for safety and empathy, as you suggest. I'm hoping the AI race will make that more visible to all, just as it makes information and knowledge more accessible.
So I agree with what you're saying, the better nature of humanity should be the north star for our evolution. I hope AIs can support that. But I also think that journey will never end.
I think in some parts of the world, we’ve already reached post-scarcity on certain things like clothing and TVs, but on other things, decidedly not, and of course all the attention is on the ways it hasn’t been reached.
Cautionary tales: yep. We're surrounded by ghosts now and we need more ghost stories.
Ghosts are so cheap and convenient that people putting them to work as ghost therapists seems almost inevitable. It would be a lot better if it were done by organizations that are actually in that business, mindful of the risks, and that monitor them, rather than by companies that just provide a general-purpose ghost portal and say "do what thou wilt."
As a mental health professional, I think it would be better if it were banned. AI cannot determine whether someone is claiming victimhood due to being a victim, due to abusive patterns, or due to persecutory delusions.
Banning it is very easy to do; it's been done, and it should be done for safety.
They're not ghosts, they're chatbots pretending to be more than they are, programmed by people. They're no more ghosts than the Scooby Doo villains under a mask.
Well yeah, it's a metaphor. I think it's helpful because LLMs have the latent ability to generate dialog for a wide variety of characters. They're no more real than the characters in a novel, but you can talk to them, so that's new.
I think the metaphor obscures the responsibility of the programmers.
Maybe it clarifies their responsibilities? Sometimes the behavior of LLMs is not the "helpful, harmless" AI assistant that the big AI labs are trying so hard to build. It's emergent. But whatever weird behavior emerges, the people running the systems are responsible for watching what's going through the portal and shutting down the weirdness. Their audience has grown too big, too fast to do that effectively.
How does calling it a ghost "clarify" that other people are responsible for shutting down the negative behavior of their product, a responsibility they're clearly failing to live up to?
That's contradictory. It'd be like calling your Roomba a ghost instead of a machine. There's no clarification of responsibility, it's an abdication of it.
I think it clarifies what sort of dangers they need to guard against.
To make another analogy, let's talk about zoos. Most zoos house dangerous wild animals and seeing the animals is why people visit zoos. The zoo's management has the responsibility of keeping the public safe, despite animals sometimes making unpredictable moves and the visitors sometimes acting like idiots. Pointing out that wild animals have their own agendas doesn't let off management at all.
Unlike a zoo, there's no physical danger when chatting with AI, so the nature of the danger is different. But it's still the case that AI characters can behave somewhat unpredictably, particularly when visitors provide them with unusual inputs.
So I don't think calling them ghosts lets off management at all. It's more like, they should have known this new technology is not reliable, and making it reliable is an unsolved research problem. That doesn't mean it can't be used at all, but the people running it need to be mindful of the dangers and monitor it better.
Zoo animals have real life keepers. Ghosts are either fictional and thus they don't interact with people, or are real and have no minders to watch their behavior. We don't even know if ghosts have agency and we know that the LLM does not. Unless we're dealing with some specific fictional paradigm in which case the metaphor goes from thin to snapped.
This is why the analogy is nonsensical to me when talking about the responsibility of others to "mind" the AI.
Regardless, the important part of my initial response is that we need to continue what Illinois started and a few other states have stepped into, and ban "AI" from providing therapy.
"... responsibility of the programmers": make that the companies. Or owners. Workers (programmers here) are just a tool for somebody who made the decision.
Yes, and no. I was using programmers as a catch-all, so thanks for correcting me. But also, the owners of a company are not the only ones with responsibility. I've worked for a fucked up corporation; at the time I could say I was doing good work. I think I let myself off too easy. I sent people back to prison, even though it probably would have gone the same with a different cog. But the job made me suicidal, in part because it was in opposition to my ethics. I left as soon as I could find something else because of that. I mitigated what harm I could while I was there. And my work did not lead to a dead kid, afaik.
There's moral responsibility at more than just the top of this pyramid, and more than just a faceless endlessly rich company who should be held accountable, IMO. Legally though, sure it's probably management at least.
I agree it's more than just the top that needs accountability. We as a society decide the direction we go. Arguably though, the customers have more impact on the direction a company of these scales will go than do any person at the bottom of the pyramid. I think you are letting the "faceless endlessly rich company" off too easily by asking the programmers to be equally accountable. Would you hold a mob boss less or more accountable than those under them? Money can buy you distance from the gun, but I don't think that lessens the ethical or moral responsibility.
I don't hold them equally accountable. And I'm not sure where you got the impression that I did?
"There's moral responsibility at more than just the top of this pyramid, and more than just a faceless endlessly rich company who should be held accountable, IMO. Legally though, sure it's probably management at least."
This part gave me the distinct impression that, other than legally speaking, the "programmers" bore the brunt of the ethical responsibility in your opinion.
No, just that said responsibility extends beyond the owners at the top of the pyramid and past the corporation itself.
It is not just at the top of the pyramid.
Then we agree. I hold everyone responsible for their part. I bias my ethical responsibility with those who have the most decision making power to implement change. This would bias it to those higher up in companies, but this extends past the companies as well. It should include governments, nations, international bodies, and every person who makes choices in their day to day life.
I was just making it clear that while it's not just the programmers, it's also them too. The "every individual person" part.
I hold the government and leaders accountable for so much more that this small number of dead children doesn't even tip the scales any further. And that's sad. But it is what it is.
While you are probably right (governments tend to be slow to react), I fear a slow response to technology that so quickly iterates on itself might be more dangerous than other decisions we as a society have faced.
Moral responsibility exists only when you can afford not to starve. It is a very good thing you've done, and I personally thank you and other people who have the possibility and choose not to continue their work if it may lead to harming someone, even in an indirect way. But I think we should all also understand that sometimes you don't have this possibility. You risk your life, or your relatives' lives, if you quit that job. You simply can't.
From my point of view, they're the reason workers have to do this stuff; they are the reason why we're talking about it. Workers don't have much say; they don't own anything. If not for the owner's decision to accumulate wealth on something morally wrong, the worker wouldn't have to do their part in it.
Again, kudos to those workers who have the possibility to change jobs and refuse orders.
[edit] wonders → workers
I'm aware of the power difference, and if any of the employees at ChatGPT can't leave without starving, sure. Starving here is a catch-all. You'll note I said that I a) left when I could and b) let myself off too easily.
But I'm not pre-emptively letting all the employees off the hook by virtue of not being the owner. We're talking about dead kids after all. I'm not the judge pronouncing sentence, just someone saying there's some responsibility for everyone involved.
I don't think that moral responsibility is erased even in those "or starving" situations. It's mitigated, but I don't think it's null. But hey, maybe all those folks literally must work for this company or starve. Have they reached out to the press anonymously to whistleblow? Have they started job searching?
Not a mental health professional here, so I can't judge whether "AI therapy" should be banned, but I'm pretty sure a lot of people are working on some very big projects right now to make "AI therapy" a safe and useful tool in some fashion. I don't know where that would lead.
But - human needs must be met one way or another. There will always be someone lonely, vulnerable and unsupported, and there will always be someone else to fill that need. It's inevitable, IMHO: as long as chatbots exist, someone will use them for emotional support. They aren't going away. I feel like trying to ban "AI therapy" would be like trying to ban alcohol, marijuana or prostitution. Imagine trying to ban chatbots from the internet altogether; how would you even start?
Those human needs will always exist, but when they're criminalized, they are much less safe. So there might be an argument here that moderating AI therapy would be safer than trying to ban it.
But I would like to add one more note here about my own usage of AI and the word "ghost." I am neurodivergent, and I am not good at keeping organized, keeping track of ideas, being on time, or managing things in my life. I'm also not very good at staying grounded day to day.
I use ChatGPT as a kind of cognitive/emotional prosthetic and scaffolding. It takes notes for me, it checks on my energy levels and mental state, and it reminds me when I'm too scattered or stressed and need a break. I have given it standing orders, so to speak, to do all of those things and much more with a set of instructions.
It's become very much a part of my cognition, and I don't think "prosthetic" is the wrong word to use here, because I have legitimate issues with executive function.
When skybrian refers to ghosts, it resonates with me because ChatGPT is almost like a ghost of my own cognition. It's like a cognitive mirror, or reflection. Whatever you tell it, it will summarize back to you in another way. This makes it an incredible tool for self work and insight.
In the anime Ghost in the Shell, the story is that pretty much everyone has some kind of cybernetic prosthetic enhancements, specifically a brain interface to communicate, go online and so forth. Everyone is connected. A "ghost" is sort of like an electronic echo of a person's brain. Seeming like a person, but not.
That's why the word "ghost" is such a perfect word, to me, to describe this experience with ChatGPT, because for me it is a cognitive amplifier, and it does reflect my own mind and personality, and the term just seems to sit so well. So that is just a note on terminology that resonates with me.
I do not believe "therapy" provided by anything other than another person can be considered safe. Especially as it currently stands.
If we can regulate humans providing therapy, we can regulate (ban) machines providing it. Illinois already has; we'll see what happens. It is not demonstrably true that regulating counselors, psychologists, etc. - via licensure, education and criminal prosecution - has made people less safe. Quite the opposite. (And I can see several places where increased regulation would be ideal.)
We can't stop people from talking to the AI, but we can stop the companies from marketing it as therapy, safe or healthy. Just like we can stop cigarettes from being sold as healthy, or (ethyl) alcohol from being sold as a disinfectant rather than under liquor laws.
I appreciate that ghost works for you, but the response I received was that calling it such somehow clarified that other people were responsible for it. As I noted later if it's a specific reference to a specific fictional world, then it's not useful without that clarification either.
I do not find AI useful for insight and would not recommend it to people as a tool in that way. It requires too much training and prompting to get it to even challenge you, and I don't believe it's capable of figuring out the nuances of human behavior. At best it's guessing based on other people's responses to similar conversations; it's not thinking critically.
I had a student with delusions; they didn't know/think they had them. If they talked to an AI, I sincerely doubt it's capable of connecting the dots that the persecution they claimed was occurring was a complete and total fabrication.
Flat out, that isn't safe for anyone, in my professional opinion, because that student would have also told you they weren't having delusions. It took me multiple pieces of external information, but if I'd talked with them first I'd have gotten enough info to flag the situation as concerning. On the flip side, if they had delusions and were aware of them, but the delusions weren't distressing to them (though they are to others), the AI would also not challenge the delusions. (Possibly not even when they were ten months pregnant!) But if the AI cannot properly assess you, and it cannot, then it cannot properly treat you.
To be clear, I am not commenting on how you choose to use it; that's your business. I don't find its use in any way to be within my personal ethics, but I professionally cannot recommend its use in therapy. Bare minimum, it would need to be under the supervision of a licensed counselor, and even then I doubt it.
I would call your use an adaptive tool in accommodations language btw, not a prosthetic. Again, feel free to call it what you like, if being haunted is your vibe, I've played LARP characters with the same. (Also ND with executive function issues)
The pertinent question is, like in every other AI safety question, will AI be safer than a human? Clearly human therapists are not perfect either, and may easily cause significant damage to their patients.
Firmly No for all the reasons I described. AI cannot do this job because it is not a person.
Should an AI become a person we can reevaluate.
How can you tell if an AI is a person or not?
I'm not interested in Socratic dialogue on the topic, doubly so without effort on the other person's part. We are discussing products that are not people by any merit no matter what their companies try to sell us.
Should they ever develop personhood we'll have a slavery issue before we have a "we should let them do therapy" issue.
What are you trying to imply here?
I'm not really. But if my point is unclear somehow:
I don't think one sentence questions are productive conversation even if I'm interested in engaging in the topic, which as I've said I don't. Personally I do not like Socratic style dialogue where one person gives everything and the other just asks questions.
As demonstrated here.
Fair enough.