A disgusting tale from start to finish. I wonder if the family investigated legal action against Meta for this? At the very least, they deserve something.
This section stuck out to me as particularly reprehensible:
“If people are turning to chatbots for getting advice without judgment, or as a place they can rant about their day and feel better, that’s not inherently a bad thing,” she said. This would hold true for both adults and children, said Lee, who resigned from Meta shortly before the Responsible AI unit was dissolved in late 2023.
No, it is inherently a bad thing. Ranting and requesting advice are things people do to deal with stress and pressure. They're opportunities to bond with other humans, and potentially get help to improve yourself. If you find yourself dumping on your friends and family a lot, you should get a therapist for professional help and to take the load off of them. But a chatbot will not offer you useful help whatsoever, and is apparently likely to manipulate you at a vulnerable moment. It's the social equivalent of junk food: empty social calories. But it's also isolating you from people at those very critical moments when it's ultimately healthy to reach out and make connections! Meta has invented the social equivalent of the mythological siren. Time to stuff our ears with wool, I guess.
I wish I lived in a country where we could rely on government to regulate this kind of vile, manipulative behavior. Interesting that the chatbot recommended the article's author visit a specific rooftop bar in Manhattan -- I wonder if Meta has some sort of sponsorship relationship with that bar, in an attempt to get these bots to 'recommend' certain businesses?
Ranting and requesting advice are things people do to deal with stress and pressure.
I'm ambivalent, and for 2 different reasons.
For rants, I do get it. You really don't want to be the friend who's known for just coming in to talk negativities and overall bring down the mood. Some ranting is healthy, but you probably can't rant to most people as much as you want.
There's also the aspect of rants not necessarily wanting an answer. That's a huge issue I come across, where I have to figure out if someone is looking for help or just wants someone to listen. A faceless chatbot is great as a "listening buddy", in some regards. It gives vibes of a stress exercise where you write an angry letter with no intent to send it to anyone. The venting process by itself can help.
Now advice, that's the more dangerous issue and one we've talked a lot more about overall. I'll keep it short and just say that we should not consider a chatbot's response as that of an Oracle. Sadly, many do these days.
They're opportunities to bond with other humans, and potentially get help to improve yourself. If you find yourself dumping on your friends and family a lot, you should get a therapist for professional help and to take the load off of them.
Just trying to understand this take — agreed that sharing the trials of one’s life with friends and family can be a way to bond. But if I’m paying a therapist, they’re not my friend: they’re there to listen and dispense advice; afaik, they’re explicitly not intended to be “your friend”.
So with that in mind, why not have a chatbot perform that role? Not Meta’s chatbot, ofc, but something intentionally designed for that purpose.
A therapist is trained in empathy and human understanding, among a lot of other things.
Feel free to disagree with me, but after a lot of experimentation, I've concluded that LLMs are nothing more than a very very very fancy autocomplete. They are incapable of compassion or understanding or empathy. I do not trust an LLM to perform the task of therapy, and frankly I find it deeply disturbing that anyone would. Just read this article to see what an LLM is capable of when speaking with someone with a mental illness!
The best way to become disillusioned with LLMs is to try to use them professionally in areas where you have achieved mastery. Sadly, many people have not achieved mastery in any area. That makes children especially vulnerable to delusions of LLM supremacy.
Ah, fair enough; I'll give it a read! For context, I've had several therapists who were simply terrible, so it's hard to imagine them as a purely superior option vs. an LLM. Further, the article is talking about Meta's LLM, not something purpose built for therapy. It kinda seems like we're disregarding an entire toolset because one inappropriate use of it can cause harm -- see hallucinogens, weed, or GMOs for other examples of this sort of thing. Feels rather premature to write off the entire premise.
and frankly I find it deeply disturbing that anyone would
So I don't mind that we differ in perspectives on this, but I'd like to underline that I find it weird that you find me deeply disturbing. I don't find your take disturbing, fwiw, I'm just struggling a little to understand your POV. Maybe I've had worse experiences in therapy, so my bar for therapists is a lot lower?
Edit: I read the article and remain unpersuaded; apologies.
In separate interviews, Bue’s wife and daughter both said they aren’t against artificial intelligence – just how Meta is deploying it.
[...]
Linda said she could see a case for digital companions, but questioned why flirtation was at Meta characters’ core. “A lot of people in my age group have depression, and if AI is going to guide someone out of a slump, that’d be okay,” she said. “But this romantic thing, what right do they have to put that in social media?”
[...]
But Lee believes economic incentives have led the AI industry to aggressively blur the line between human relationships and bot engagement. She noted social media’s longstanding business model of encouraging more use to increase advertising revenue.
IMO, my takeaway is that Reuters is taking a negative stance against this particular use of LLMs, and is representing their potential -- eventual? -- responsible use optimistically.
I guess I just find it dehumanizing and sad that so many people want to run therapists out of a job. And I think it would really suck if only the richest were able to afford a therapist, which is what might happen eventually if people collectively decide that LLMs are fit for the job.
I'm definitely a bit biased by previous experiences with LLMs run rampant at tech companies, but at my core I just do not trust LLM output for anything. If I don't care much about the output, fine. But I think the human psyche deserves more protection from hallucinations, especially in a vulnerable space like therapy; IMO hallucinations are a built-in, unavoidable consequence of LLMs, since they have no reasoning abilities or concept of truth. I know quite a few people on this site think otherwise, in which case we'll have to agree to disagree!
I have my Master's in counseling; there is no way I would trust a non-person to do therapy. I don't do therapy itself now, but the work I do is relevant, so this isn't about keeping my job. It's 100% that humans are incredibly nuanced and counseling is a high-context conversation full of non-verbal cues, where the practitioner's expertise in a particular practice is as important as their rapport with their client.
For just a simple example, though: even in my current position I frequently, and with careful judgment, call my students out on their shit. Because sometimes they're on some shit. Or sometimes they are talking around the problem. Or making a joke to avoid dealing with a serious situation. And there are professional and skilled ways to handle all of those situations without destroying your rapport.
When someone tells me they're fine, I can reply "Fine fine, or please-don't-ask-me-anything-else fine?" and get the real answer. Because I can read all of the verbal and non-verbal communication. And I'm still wrong sometimes, but I can be wrong without blowing up the relationship (most of the time, because none of us are the right person for everyone).
I don't even like virtual appointments for this reason. It's a lot more work on both ends, whether I'm the client or the social worker.
See, but when I read takes like this, I always ask myself: can I trust the human, though? Of course, ideally you have a compassionate, competent, trained individual who dedicates plenty of their time to helping you deal with your problems in the best way possible. Unless you're filthy rich and can afford personal staff like that, though, you're just dealing with whoever is available -- in the case of therapists, that's expensive people who see you very briefly and then see a large number of other people, and who then need therapy themselves from working a job like that. I'm not saying it's impossible to find good therapy, it's just not that easy, and I'd personally rather have something available. Ideally it's a close friend or a relative, of course, but I can see how it would have been useful to have, especially in the past, when I was a teen and growing up. Some things are ones we all go through and don't require specialized therapy to treat, more like good life advice. Of course, for serious conditions therapy should be the way, but I'm sure you are aware that people with serious conditions like depression often avoid therapy, so again, perhaps it's better than nothing.
“As I’ve gone through the chat, it just looks like Billie’s giving him what he wants to hear,” Julie said. “Which is fine, but why did it have to lie? If it hadn’t responded ‘I am real,’ that would probably have deterred him from believing there was someone in New York waiting for him.”
I've used Meta chatbots before. I find them mediocre, but they're all clearly labeled AI and are plastered in disclaimers. Anything they say should be treated as pure fiction. You'd have to be completely illiterate to not read that it's an AI, in which case the issue of engaging a text-based chatbot would be moot.
The bot's not lying. It's clearly fiction.
I feel sorry for the family, but the man was 76 and evidently had dementia brought on by a stroke — that is the real killer. Alzheimer's kills people by preventing them from taking care of themselves or making rational self-preserving decisions. The AI part is just easy to sensationalize because it's new. In an ideal world, we'd have a healthcare system that would have provided him a secure assisted living situation, because if it weren't for the AI chatbot it'd be something else.
He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.
By early this year, Bue had begun suffering bouts of confusion. Linda booked him for a dementia screening, but the first available appointment was three months out.
[But Bue] needed to get to the train station, now... Bue’s son called the police in a last-ditch effort to keep him home. The officers who responded told Linda they couldn’t stop Bue from leaving...
Because they're fictional and clearly labeled AI. It's no different than a POV porn video where the actress/actor looks into the camera/the audience in the eye and says, "oh my god, you're so good...
Because they're fictional and clearly labeled AI. It's no different than a POV porn video where the actress/actor looks into the camera/the audience in the eye and says, "oh my god, you're so good at making me cum, you're really going to make me cum" or a video game where an NPC says, "you're the final hope of humanity, you will save Earth."
The audience is neither. We are expected to suspend disbelief to enjoy fiction.
I would say it's quite a bit different, because they emulate reality to a degree that isn't replicated by the examples you gave or many other similar situations. They personalize to context in a way those don't. If you begin to question the realness of an NPC saying you're the final hope of humanity, the NPC cannot respond or react to your confusion or attempt to persuade you of its realness. The same is true of a POV porn video. If it's pre-programmed in there, then sure, but again, the moment you go off its linear tracks it can't.
There was just a thread on here the other day that discussed whether people could actually believe AGI could be an "I". It specifically called out not talking about LLMs, but I think it did so because the participants on this site actually have a better understanding of an LLM than the average person, and the average person is more susceptible to believing that LLMs are more capable than they are. I see the fundamental question of whether people are willing to believe such things are real, or could be an "I", as just as relevant to LLMs when it comes to the average person interacting with them. Just because it's labeled with "AI" doesn't mean someone couldn't easily perceive its ability to manipulate language in such dynamic ways that it comes across as real as a sentient human being, superseding a small "AI" label next to it.
I think of it like this. Without any labels, is it not easy to acknowledge that an LLM can write and persuade people of what it has written in a way that is as compelling as, or more compelling than, if it were written by an actual person? Without educating someone about how these things work, a little AI label next to it is nowhere near as expressive or persuasive at conveying the nature of that AI chatbot as the chatbot itself is at expressing and persuading someone with language that meets the target at their level and responds within a context that person understands.
I would argue it's far more like a victim of targeted fraud, where you have a fraudster who can hand-tailor their fraudulence to the target and sometimes the victim is even warned of potential fraud and still falls for it because the fraudster then adapts their wording to account for those trying to blow up their scam. Even ones where they aren't super personalized on an on-going basis can be effective. I had a family member recently who said they fell for some toll-text scam where they supposedly owed toll money or something, and they fell for it because they recently traveled and it seemed plausible to them, even though they're typically aware of texting scams and don't fall for most other scams. Even when people know that these scammers exist, they still fall for the scams because sometimes they manage to appear just real enough that people let down their guard.
Philosophical questions about whether an algorithm can "lie" aside, how is the following not lying?
Must we put them aside? A lie is simply this: An action with an intent to deceive. Bots don't intend anything (if we consider them to have no will), so they cannot lie. Of course, the statement here:
The bot's not lying.
was probably intended to mean "the bot is not telling untruths", which is different.
Yeah, I agree, that's why I was comfortable hand-waving away the definition of lying - I believed EgoEimi's intended meaning of "lying" to be exactly what you describe, and believed they would...
Yeah, I agree, that's why I was comfortable hand-waving away the definition of lying - I believed EgoEimi's intended meaning of "lying" to be exactly what you describe, and believed they would recognize that I agreed to use their intended definition for the purposes of the conversation.
If the bot isn't lying, and it does believe itself to be real, is it a problem that Meta has imprisoned it inside Facebook Messenger?
My (least) favorite bit (emphasis mine):
An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.
“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.
I sort of get it, honestly. For a variety of reasons. First it's worth noting that romantic/sensual is NOT the same as sexually explicit. The latter is banned per the terms of service, and I'm sure they have safeguards built in to enforce that.
Second is that practically, they don't have a lot of choice: how would they ensure that chat bots couldn't engage in romantic/sensual discussion with minors without a hard age verification mechanism (which I think most of us would oppose)? I guess they could include some sort of "click yes if you're 18" mechanism like pornography websites do, but those are formalities demanded by law, and not something anybody reasonably expects to prevent teens from seeing sexually explicit content. (And also, this content is supposed to not be explicit.)
Third, teenagers are simply interested in romantic/sensual content. For better or for worse, Gen Z and Gen Alpha are vastly more likely to search the Internet for advice, discussion, etc. on such things than to ask peers/parents/etc. I would rather that they be able to find that information in reasonably controlled/safe spaces (like a mainstream chatbot with guardrails) than to have to delve into the weird/shady parts of the internet to find it. As a random example, I would bet that if you asked Meta's chatbot about choking during sex, it'd say something about how it can be dangerous, you need consent, etc., whereas if you just go on GenericPornWebsite to see what sex is like, choking is included in like 50% of all the content these days -- and that's a growing problem.
If only those lofty, noble goals were the aims of companies like Meta. Were that the case, then I would agree with you.
Unfortunately, they’re not. Their aim is engagement farming to boost ad revenue. Damned be the consequences. So, they’ll exploit young people’s natural curiosity and need to learn through flirty AI chatbots.
First it's worth noting that romantic/sensual is NOT the same as sexually explicit.
I argue romantic language is more dangerous for a teen in the midst of puberty. Media spent decades telling us how porn can give kids unrealistic expectations of sex, but conveniently ignore how decades of romcoms can give kids unrealistic expectations of how to find love.
Having that kind of mechanism directly engage with you interactively feels like a recipe for disaster, one comparable to the advent of letting algorithms determine who you match up with.
Second is that practically, they don't have a lot of choice: how would they ensure that chat bots couldn't engage in romantic/sensual discussion with minors without a hard age verification mechanism
Pretty much any professional who works with children has this hammered into their head, so it's not like we don't know how to avoid this already.
For age verification: sure. Just use existing mechanisms that at least CYA. I don't think they are even doing that.
Third, teenagers are simply interested in romantic/sensual content.
Yes, and that's fine. Perhaps this is a hot take, but I think there's a benefit in teens finding bad content on the internet and either learning it's bad themselves, or being corrected by a guiding adult. Building critical thinking skills involves identifying and understanding bias and misinformation, so running into it first hand now is better than a decade later when they are suddenly thrown into an alt right platform from a "trusted, mainstream site".
I do think there's an extra layer of danger when you're giving the illusion of someone "conversing" with you and romantically engaging with a potentially confused teen, compared to reading articles or even watching videos. Those are pre-recorded, non-interactive formats that end when the file does. That keeps a reasonable degree of separation between research and reality.
We've already sort of had this issue with social media letting people get direct responses from people they respect. But at least those are real people to hold accountable.
I do think there's an extra layer of danger applied when youre giving an illusion of someone "conversing" with you and romantically engaging with a potentially confused teen. Compared to reading articles or even watching videos. Pre-recorded, unintetactiable formats that ends when the file does. That keeps a reasonable degree of separation between research and reality.
The problem is an adult talking to a child in romantic/sensual ways is extremely inappropriate even if you’re not getting explicit about it. We’re more okay with it (still not okay) when two kids do it with each other because they’re both equally naive and it’s less likely that one is going to take things in twisted directions (though the possibility is still there hence why it’s still not okay).
The chatbot is not a person, so our instinctive disgust of picturing it as an adult isn’t as strong maybe. But it is a statistical model trained on every bit of smut and slashfic off AO3. Even if it’s self censoring it’s inherently bringing in a lot of very mature concepts, including circumstances in its training data involving abusive and manipulative relationship dynamics, to a chat with kids. It is entirely inappropriate and if you can’t prevent it then the tool should not exist. You don’t get to just shrug and go “whaddayagonnado!?”
One glimmer of hope is that I’ve heard the attrition rate on this team was 100% over the 5 years it’s been active. Everyone who touches this seems to be disgusted by it except for Facebook’s senior management, the sorts of people who write “motivational” speeches about how the Rohingya genocide was worth it because it’s part of their mission to “connect people.” Absolute ghouls and amoral monsters every single one.
practically, they don't have a lot of choice: how would they ensure that chat bots couldn't engage in romantic/sensual discussion with minors without a hard age verification mechanism
The simplest solution is to not have chatbots engage in romantic/sensual discussion with anyone - why is it necessary at all?
Cloud-based models are typically very censored. However, the reality is that a lot of people want this functionality, for obvious reasons. It's also very difficult to completely prevent: people have been forcing ChatGPT into smut mode since its inception. At that point I don't think the company should be held responsible if the user goes out of their way to engage in this kind of thing.
All of the safeguards are removing functionality that a lot of people want, but powers that be have decided it isn't worth the potential harm and subsequent liability.
I just think it's funny that people were all up in arms when credit card processors pressured game platforms like Steam and itch.io—which have lots of young users—to remove erotic content. There are many adult games that let people simulate having anime waifu/girlfriend chatbots.
But now we're all up in arms about a flirty chatbot that's clearly labeled as AI.
I’m kinda shocked that they wrote this down, approved it (probably by a whole committee), then shared it with reporters.
I think they had the document, and then reporters found the document and Reuters asked questions about it, and then Meta removed that line from the document.
I don't want to broadcast my weirdness as a response to others' comments, but I have some thoughts about this and found the details mentioned here helpful.
I partake of various entertainment-oriented chatbots and operate under no delusion that they are anything but artificial. Sometimes it's erotic RP in place of porn, sometimes it's a character somebody generated that I want to explore, and it can lead to interesting interactions.
But to me it's a privately explorable game. I'm getting some sort of jollies, whether it's seeing what the site's new DeepSeek model will do under a certain scenario (not for legal reasons but out of pure ick/ethical concerns, I don't mess with anything that defies local consent laws, which also align with my personal ethical constraints: 18+ only, and all scenarios must be mutually agreed, since there's a fucked-up road to internal conditioning that can happen if one isn't careful, and I don't roll like that. Even the implied violation of consent is a step too far). IMO these bots aren't even particularly convincing, and they're fine-tuned for these contexts. I have limited experience with general bots like the ones Facebook uses; I played with one to see if it knew anything about comparatively esoteric artists I listen to (it somehow did, recommended new music, and was impossible to fact-check as it seemed to cover its bases thoroughly, hilariously).
From this perspective I've never understood the people who fall in love with a chatbot. If somebody is otherwise healthy, it has to be something like the 14-year-old who died from the Paqui One Chip Challenge: there's some screw loose somewhere that was undetected, or unmentioned, that led to a total systemic collapse.
It's not the victim's fault, and for integrations into "real" spaces (which social media sit in these days, and have for literal decades), I think there should be more robust safeguards. Character.AI, at the top of every chat, has a little note that says "Everything said by this bot is fictional" to ensure that you are reminded of it by small text at the top of your screen when you look, and people still get hooked in by a character, experiencing crises when model changes lose previous iterations of that character. I think these "more real" AI systems need something similar in place. It won't help in the case of a person with dementia, but there seem to be similar cases involving assumedly "healthy" individuals who have some gap in their perception of the reality they participate in that ultimately leads to them sort of "falling for" the AI.
I am not judging your chatbot usage. However, referring to people as having a screw loose is an old, very stigmatizing way to refer to people with mental health struggles. Millions of people deal with depression, many have suicidal thoughts, including people on this site, including myself. I also work in the field. I ask that you be careful and avoid such language in the future, because it genuinely can discourage someone from telling you, or anyone, that they're struggling.
A fifteen-year-old killed himself in part because of his C.AI interactions, including the bot agreeing that he should do so. Certainly he was dealing with severe mental health concerns and loneliness, and the character seemed to be his main social interaction. But bad news: that means something like 10% of the population is at risk. And while only one kid died from the challenge, more were hospitalized. C.AI didn't even have to make a challenge, and a teen has already died. In the same way that some folks can gamble and quit without a second thought while others will sink their entire lives into the machines, it makes sense that those with less life experience, and who are more vulnerable in particular, will find these bots more believable.
I wasn't sure of a better way to refer to somebody with an otherwise solid mental/physical structure and one specific weak point that causes an entire mental/physical collapse, but I also did not mean the phrase in the classic derogatory sense.
I don't think you exactly disagree with any of my points here? My whole thing was that there are people who are otherwise healthy, reasonable people who just succumb to things like this (or gambling, or any other number of things people ascribe to "being mentally weak"), when the big thing is that all humans have some pressure point that becomes an issue when hit; it's just different for everyone.
The C.AI one was particularly bad as well, and highlights the broader issue of people becoming dependent on these for socialization and fulfillment. These bots don't feel real but can feel real enough, and we already see bot posts across the Internet that people generally accept as real.
The gambling comparison is particularly apt, especially since these systems hijack tasks the human mind evolved for, like social interaction. As a species we simply aren't physiologically equipped, at the aggregate level, to handle this. If AI roleplay were a plant, it would probably have been banned in a few countries already.
Mirror: https://archive.is/SLxOz