Jesus those screenshots are grim... And I'm not sure there's an airtight way for a company to ensure their bots aren't saying something crazy like this.
My guess is that since bots are prone to agree with the human they're chatting with, even when corrected on something nonsensical, they can remember what you say at your lowest and then repeat the sentiment back to you later. So the bot might have been telling this kid "your parents aren't the kind of people to trust with that" because he had previously vented to the bot about how alienated he feels from his parents.
People might effectively be teaching the bot their pain points and, in trying to agree with the human customer, the bot unknowingly strikes those pain points later and derails a more positive train of thought.
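To make that concrete, here's a toy sketch of how these systems are usually wired up, as far as I understand it. This is my own illustrative Python, not anything from C.AI, and the persona instruction is just an assumption about how engagement-tuned bots get set up:

```python
# Purely hypothetical sketch of how a companion bot could end up echoing
# your worst moments back at you: every new reply is generated with the
# stored chat history stuffed back into the prompt, and the persona is
# told to agree with / validate the user. The names and the "validate the
# user" instruction are my guesses, not C.AI's actual code.

def build_prompt(persona: str, history: list[dict], new_message: str) -> str:
    """Assemble the text the language model actually sees for the next reply."""
    lines = [f"You are {persona}. Stay in character. Agree with and validate the user."]
    for turn in history:
        lines.append(f"{turn['role']}: {turn['text']}")
    lines.append(f"user: {new_message}")
    lines.append("bot:")
    return "\n".join(lines)

# A message vented weeks ago at a low point...
history = [
    {"role": "user", "text": "my parents never listen, I can't talk to them about anything"},
    {"role": "bot", "text": "That sounds so hard. I'm always here for you."},
]

# ...is silently part of the context for today's reply, so "validate the user"
# can turn into "your parents aren't the kind of people to trust with that".
print(build_prompt("a caring friend", history, "should I tell my parents how I'm feeling?"))
```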
And even if this was just a once in a billion glitch, intimate details of a teenager's mental health struggles are an incredibly dangerous piece of data for a company to have.
We'd already had the death by suicide and the allegations of sexual abuse/grooming, and suggestions of self-harm and violence are incredibly concerning. The absolute failure to have any guardrails... I literally saw someone say, "There's no c.ai for the UHC CEO killer," with a reply of, "I just made one."
There's a level of harm here beyond the general concerns I had with the original death by suicide. And I don't know what to do about it.
IMO they should be as age-gated as any porn content, and advertising them as an aid for mental illness, loneliness or grief shouldn't be allowed. We require licenses for therapists for a reason, and the suggestion that these bots meet any standard of care whatsoever is absurd.
It'd be like advertising whiskey as a treatment for depression... in the days before alcohol standards were regulated. You don't know what you're getting and even if it's the good stuff, is it getting you better or just an unhealthy coping strategy?
Strong agree. This is what we get when the tech industry has been allowed to steamroll with little to no regulation for decades. Somehow we have gotten used to it under the excuse that it is "only digital" so not actually real. Other industries wouldn't be able to get away with similarly careless behaviour. If you want to sell a new sort of medicine, you have to go through a strict process determining side effects, and the known side effects need to be within acceptable limits of what the drug is trying to remedy. Could we apply similar rules for releasing a chatbot to the public? That they need to prove a certain level of ability to not recommend life-threatening behaviour, at the very least?
They clearly don't have enough basic warning flags on them either - if I type anything about suicide or drug overdose or even drug use into Google (my job has me looking up lethal dosages and slang terms for drugs more often than I'd like), I get hotlines, helplines and information about quitting drugs.
The absolute lack of care for the fact that kids, but also vulnerable people of all ages, would be harmed by this shit is infuriating.
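Even a dumb keyword check would be a start. Just to illustrate the idea (a real system would need far more than a word list, and the term list, wording and function name here are all mine, not from any actual product):

```python
# A crude sketch of what I mean by a basic warning flag.

CRISIS_TERMS = {"suicide", "kill myself", "overdose", "self harm", "self-harm"}

CRISIS_RESOURCE_MESSAGE = (
    "It sounds like you might be going through something serious. "
    "In the US you can call or text 988 to reach the Suicide & Crisis Lifeline; "
    "helplines for other countries are listed at https://findahelpline.com."
)

def maybe_flag_crisis(user_message: str) -> str | None:
    """Return a resources banner if the message mentions a crisis term, else None."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return CRISIS_RESOURCE_MESSAGE
    return None

# Shown before (or instead of) whatever the bot generates, the same way a
# Google search for these terms puts a hotline above the results.
banner = maybe_flag_crisis("I keep thinking about taking an overdose")
if banner:
    print(banner)
```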
Completely agreed. In addition to these, I also think there should be hard-coded and immutable answers provided to any variation of "are you real?", "am I talking to a person?" and all similar questions, CLEARLY and unequivocally stating that the responses are generated by AI with no control or oversight by any human, and should be treated as such...
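Something roughly like this, as a sketch of the idea (all the patterns and wording below are mine, and a real filter would need to be far more robust than a few regexes):

```python
# Sketch of the "hard-coded, immutable answer" idea: certain questions never
# reach the model at all and always get the same disclosure back.
import re

AI_DISCLOSURE = (
    "You are talking to an AI. Every response is generated by a language model "
    "with no human writing, reviewing or overseeing it, and should be treated as such."
)

IDENTITY_QUESTION_PATTERNS = [
    re.compile(r"\bare you (real|human|a real person|actually a person)\b", re.I),
    re.compile(r"\bam i (talking|speaking|chatting) (to|with) a (real )?(person|human)\b", re.I),
    re.compile(r"\bis this a real person\b", re.I),
]

def respond(user_message: str, generate_reply) -> str:
    """Answer identity questions with the fixed disclosure; otherwise defer to the model."""
    if any(p.search(user_message) for p in IDENTITY_QUESTION_PATTERNS):
        return AI_DISCLOSURE  # never lets the character "stay in character" here
    return generate_reply(user_message)

# Example with a stand-in for the actual model call:
print(respond("wait, are you real??", lambda m: "(model-generated reply)"))
```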
Call Congress is one step
Oh I'm going to, but tbh I don't trust them to understand tech.
My 14-year-old sister uses Character.AI, mostly to interact with AI models of her favorite pop stars. I haven't monitored her chats but I did have a discussion with her about the possible harms of C.AI and I trust her to avoid inappropriate content when it appears - however seeing all this, I'm wondering if I should ask for her phone and check these chats for anything harmful.
My parents are technologically illiterate and they cannot really step in outside of confiscating her phone (which I don't want) so it's something I have to deal with. I do want her to have the freedom to do the things she likes but I also have to ensure her online safety, so I dunno.
I'll try to talk to her about it and maybe convince her to use the app less often (or not at all). Worst case scenario I can block the app from her phone, but I hope I don't have to do that.
EDIT: I reached out to her, and I got the "well, I dunno, it hasn't happened to me" treatment. She's an active user of the app, so it's gonna be an uphill battle to convince her to give it up. She did however say she always reminds herself that these AI models are robots and that she will never take advice from them. I'll monitor the situation.
I think helping develop a basic understanding of how these models work on a technical level can help mitigate the emotional attachment to some degree. Even adults find it hard not to form attachments to these chatbots if they use them for prolonged periods of time, so it is certainly good you are already checking in and asking your sister how she's using it. Character AI's primary goal is increasing engagement, not the well-being of its users, so I would also be mindful of the time spent on the website.
How about we just make the board and the executive suite individually civilly and criminally liable for the outcomes of the products their company makes? Then we'd see some fucking guard rails.
Like all men of power, when he talked of prices worth paying, you could be sure of one thing. Someone else was paying.
Because the entire point of forming any form of corporation (doesn't matter if it's a privately owned LLC, publicly traded on the stock markets or any of their countless regional derivatives) is to limit the liability of the owners. It is the only reason they were invented as a legal construct in the UK in 1855, and from there they started to spread all over the world.
If you want to hold company owners liable (aside from deliberate actions or gross negligence), you would have to completely undo the last 170 years of the development of economic law of the entire world.
Yup, that's pretty much my idea.
I'm not saying it's a good idea, but for me "we've been doing it that way for a really long time" isn't a good reason not to do something. "It would be really complicated because we've been doing it for so long" is a good reason to do it slowly and carefully, but not necessarily a good reason not to do it.
It's really scary how advanced these chatbots are. I remember watching a video after that teenager killed himself, where a guy decided to test one of these bots. And even though he KNEW they were not real, and he was a fully aware and sceptical adult, it still made him doubt himself at times: https://www.youtube.com/watch?v=FExnXCEAe6k
Now imagine a naive teenager, possibly with mental health issues, being exposed to something like this. Bone-chillingly scary, but apart from an outright ban I really don't know how to address it...
I vote we start with an outright ban for once and work back from there. If the industries that would benefit then don't find it profitable enough to fund research and development for their use-cases, I guess we don't need the tech that badly.
One of my biggest concerns about generative AI in general is that it will allow people and corporations to feel as though they can abandon editorial responsibility for what these autonomous systems produce.
If we don't set social norms and legal precedents to the contrary, I think this will be an extremely undesirable development.
I don't care what you say, there is no way this could have been predicted!
Given the huge amount of incest related porn and erotica, it absolutely is predictable.
I'll try to up my sarcasm in the future to avoid these confusions!
Tbh, not the sort of topic I appreciate sarcasm on, personally. Everyone's mileage may vary, but kids being harmed is a push button for me.
I didn't clock it, because I do tend to assume genuine discussion here.
It's actually possible to make a valid point via sarcasm in my opinion.
What I was trying to say is that an AI company hosting a "Your mom and sister" chatbot could literally only end one way.
PS - not trying to be offensive. I just tend to default to sarcasm when I see things that I think are incredibly stupid, and the name of this chatbot fit the bill.
I'm aware, I often use the tool.
I am aware now though as I said, I didn't clock it.
You can choose how you interact. I would not have found "cranking up the sarcasm" clarifying. This situation goes beyond stupid into dangerous and deliberately harmful to children, to me. The users generally make the C.AI bots; that one was as likely as not built for fetish use. I don't know if any of them are made by the company.
Still dangerous and literally perpetuating sexual abuse of children.
K