Jesus those screenshots are grim... And I'm not sure there's an airtight way for a company to ensure their bots aren't saying something crazy like this.
My guess is that since bots are prone to agree with the human they're chatting with, even when corrected on something nonsensical, they can remember what you say at your lowest and then repeat the sentiment back to you later. So the bot might have been telling this kid "your parents aren't the kind of people to trust with that" because he had previously vented to the bot about how alienated he feels from his parents.
People might effectively be teaching the bot their pain points and, in trying to agree with the human customer, the bot unknowingly strikes those pain points later and derails a more positive train of thought.
And even if this was just a once in a billion glitch, intimate details of a teenager's mental health struggles are an incredibly dangerous piece of data for a company to have.
We'd already had the death by suicide and the allegations of sexual abuse/grooming; bots suggesting self-harm and violence are incredibly concerning. The absolute failure to have any guardrails... I literally saw someone say, "There's no c.ai for the UHC CEO killer," with a reply of, "I just made one."
There's a level of harm here beyond the general concerns I had with the original death by suicide. And I don't know what to do about it.
IMO they should be as age-gated as any porn content and advertising them as an aid to mental illness, loneliness or grief shouldn't be allowed. We require licenses for therapists for a reason, and the suggestion that these bots meet any standard of care whatsoever is absurd.
It'd be like advertising whiskey as a treatment for depression... in the days before alcohol standards were regulated. You don't know what you're getting and even if it's the good stuff, is it getting you better or just an unhealthy coping strategy?
Strong agree. This is what we get when the tech industry has been allowed to steamroll with little to no regulation for decades. Somehow we have gotten used to it under the excuse that it is "only digital" so not actually real. Other industries wouldn't be able to get away with similarly careless behaviour. If you want to sell a new sort of medicine you have to go through a strict process determining side effects, and the known side effects need to be within acceptable limits of what the drug is trying to remedy. Could we apply similar rules for releasing a chatbot to the public? That they need to prove a certain level of ability to not recommend life-threatening behaviour, at the very least?
They clearly don't have enough basic warning flags on them either - if I type anything about suicide or drug overdose or even drug use into Google (my job has me looking up lethal dosages and slang terms for drugs more often than I'd like), I get hotlines, helplines and information about quitting drugs.
The absolute lack of care for the fact that kids but also vulnerable people of all ages would be harmed by this shit is infuriating.
Completely agreed. In addition to these, I also think there should be hard-coded and immutable answers provided to any variation of "are you real?", "am I talking to a person?" and all similar questions, CLEARLY and unequivocally stating that the responses are generated by AI with no control or oversight by any human, and should be treated as such...
Calling Congress is one step.
Oh I'm going to, but tbh I don't trust them to understand tech.
My 14-year-old sister uses Character.AI, mostly to interact with AI models of her favorite pop stars. I haven't monitored her chats but I did have a discussion with her about the possible harms of C.AI and I trust her to avoid inappropriate content when it appears - however seeing all this, I'm wondering if I should ask for her phone and check these chats for anything harmful.
My parents are technologically illiterate and they cannot really step in outside of confiscating her phone (which I don't want) so it's something I have to deal with. I do want her to have the freedom to do the things she likes but I also have to ensure her online safety, so I dunno.
I'll try to talk to her about it and maybe convince her to use the app less often (or not at all). Worst case scenario I can block the app from her phone, but I hope I don't have to do that.
EDIT: I reached out to her, I got the "well, I dunno, it hasn't happened to me" treatment. She's an active user of the app so it's gonna be an uphill battle to convince her to give it up. She did however say she always reminds herself that these AI models are robots and will never take advice from them. I'll monitor the situation.
I think helping develop a basic understanding of how these models work on a technical level can help mitigate the emotional attachment to some degree. Even adults find it hard not to form attachments with these chatbots if they use them for prolonged periods of time, so it is certainly good you are already checking in and asking your sister how she's using it. Character AI's primary goal is increasing engagement, not the well-being of their users, so I would also be mindful of the time spent on the website.
Yeah, I tried to tell her in broad terms how AI models work, but she's not a technical person at all (unfortunately she does not have any computer literacy, as much as I tried to teach her the essentials) so I had to resort to "this is a computer that downloaded all the information online, and when you talk to it, it tries to come up with a reply that resembles what is already online". It's not a great explanation, but at least it's something that lifts the magic veil off these AI models.
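For anyone who wants to make that explanation a bit more concrete, here's a toy sketch (purely illustrative Python, nothing like a real model's scale or architecture) of the core idea that the bot just continues your text with whatever statistically tends to come next:

```python
# Toy "predict the next word" demo: count which word follows which in some text,
# then generate replies by repeatedly picking a plausible next word.
# Real LLMs use huge neural networks over vastly more data, but the basic idea
# of continuing text with something statistically likely is the same.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Record every word that was observed following each word.
followers = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    followers[current].append(following)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])  # sample a plausible continuation
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The veil-lifting part is that there's no understanding or intent anywhere in a demo like this, just statistics over what's already been written.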
I do plan to have a longer discussion with her once I go back to my hometown for Christmas, with the goal of convincing her to uninstall the app, but I know she will be opposed to it. I don't want to remove her agency and just block the app remotely with the parental controls I already have, because I know I would have hated my parents (or in this case, my older sibling, if I had any) if this happened to me as a teen.
(God, this whole thing reminded me of how much I do not want kids - I can barely take care of myself, much less another person that relies fully on me for anything.)
A suggestion for this conversation is really helping her identify what the app does for her and whether it is helping her or hurting her. That can include things like wasting time and how she feels after she uses it. Framing it as you wanting to make sure she's only engaging with it because she chooses to and not because the app is tricking her into using it more can help reinforce her agency and make her more aware in the long-term of the negative side effects.
She might realize she's not talking with her actual friends but is instead talking to the chatbot, or she might be doing it with her friends, in which case there's a little bit less of the risk in some ways and probably a little bit more in others.
Thank you, I'll certainly try to do that.
I would have also hated being forced to block or uninstall. There are always ways to get around parental controls. I think what might be more effective is finding other hobbies or games that are just as fun to replace the time spent with chatbots, so she doesn't feel like she's being forced to spend less time on the app but rather has something better she wants to spend time on.
They're ridiculously popular among kids. I get potential suicide alerts for kids talking with these AI chatbots all the time; most of them are false positives, but every now and then there's some seriously concerning stuff.
This is the first time I've heard about such AI (that "impersonates" a pop star, or any other famous person, really).
I guess I'm too old for this, because there is a question that instantly popped into my mind: Why would someone do this? I mean, why would I enjoy a conversation with a robot that impersonates someone from the real world?
I also have a daughter who is coming into this age range, and I will have to have such a conversation with her too. I hope it goes well. She doesn't use socials yet, though, nor does she have any desire to, so there's still a bit of time to think about that conversation.
I also don't get why you would want to do that, I was never into fandoms as a teen though so I guess I just don't relate to them (instead I just hyperfixated on hobbies and changed them like socks).
I do think it's an extension of the common teenager behaviors of idolizing an artist/band/whatever and making it your entire personality. Having an AI that pretends to be that only helps fuel your fantasies, and I guess helps with feeling like you have a chance to talk to your idol(s).
How healthy that is, I can't say for sure.
I do recommend at least telling her the bare minimum, and ensuring she is aware that such AI services are not real people. I do think the biggest problem with these chatbots is that they quickly make you treat them like real people, and with enough time spent, you might even form bonds with them (making it difficult to stop chatting, or even reducing the time spent with real people in order to have more time with the chatbot).
We have told her, from the time she started going to school on her own, that she shouldn't talk with strangers. I'm gonna approach this talk the same way - the internet is composed of strangers, so behave like you are on a busy street in a big city: when you (have to) speak with anyone, do so with quite a bit of skepticism, don't trust anyone, don't tell personal information to anyone, don't send any photos to anyone, etc. I think I'm gonna go this way. Tell her how useful the internet is, but also tell her many people misuse this usefulness and can turn it against her or us (as a family).
Good thing is we are not native English speakers, so chatbots and AI are out of the question for a bit. Although such things get easily translated to all languages on the fly, so I should use some caution.
She is very reasonable and careful so far, I hope she stays this way.
ChatGPT can speak a lot of languages, and I would expect other AI models to do that too (or follow suit soon), so it might be worth being cautious. I haven't tested C.AI myself, dunno if I want to make an account to see it for myself, as I already know what Gemini and ChatGPT are capable of (those have a lot of guardrails implemented and yet can still output content I wouldn't show to a kid).
The two reasons that come to mind most readily for me are loneliness and practice. Loneliness is sort of self-explanatory. A fake friend is better than no friends, right? That doesn't even mean "no friends at all", that might just mean "no friends available to talk or hang out right this second". Emotions like loneliness feel much more intense to teenagers, and a quick option to alleviate that feeling is understandably tempting.
Practice might be a little more specific to my experiences, but it might not be. I was home schooled until the third grade, so by the time I started public school there were all these norms and expectations that all the other kids knew and understood that were completely foreign to me. It took me a few years to catch up. If there was an AI that I could talk to that knew all of these norms but wouldn't tell all the kids at school about all the "stupid" questions that I was asking, I definitely would have tried to use that resource. Yeah, trying to catch up socially after being homeschooled is a really specific situation, but who among us hasn't felt like there are social rules that we aren't quite grasping? I think most people have felt like there are norms - and more importantly as teens, trends - that they aren't picking up on quickly enough. With an AI, it might feel like you can ask the collective consciousness of humanity what's "cool" or what will make you seem "normal".
Unfortunately I don't think the AI will actually function well at either of those tasks. We've already seen here how it can exacerbate loneliness by distancing the kids from the actual people in their actual lives. I think the norms that teens pick up from AI are likely to be extremely distorted and will probably exacerbate the issue of feeling out of step with other kids. I don't know how easy that will be to explain to a teenager, though.
Absolutely, and I think it's easy to fall into the illusion of thinking an AI chatbot is the perfect confidant to tell all your secrets to. It stays on your phone, it only talks when you want it to, and you assume nobody would know what you just told it. (Until you realize that every chat is used as training data, so your messages could theoretically be extracted from the model with careful prompting and a bit of engineering.)
Coupled with being a more lonely person, or an introvert, I can see how it's easy to make an AI the main source of social interaction. Why hang out with your friends when it might be a little difficult to fit that in your schedules (with helicopter parents and the usual serving of extra-curricular activities, kids don't get to hang out as much as they should), when "Taylor Swift" is two taps away and willing to talk to you about anything?
It's really scary how advanced these chatbots are. I remember watching a video after that teenager killed himself, and a guy decided to test one of these bots. And even though he KNEW they were not real, and he was a fully aware and sceptical adult, it still made him doubt himself at times: https://www.youtube.com/watch?v=FExnXCEAe6k
Now imagine a naive teenager, possibly with mental health issues, being exposed to something like this. Bone-chillingly scary, but apart from an outright ban I really don't know how to address it...
I vote we start with an outright ban for once and work back from there. If it's then not profitable enough for the industries that would benefit to fund the research and development for their own use cases, I guess we don't need the tech that badly.
It is, at this point, too late for a ban. LLMs capable of convincingly acting as a chatbot can be run locally on the user's hardware, and all it takes is a computer with a GPU strong enough to run modern games (and even that isn't strictly necessary; an LLM can run on CPU only, albeit much slower). Even if we could in one fell swoop stop all research on generative AI, the existing models could still be disseminated and would inevitably reach vulnerable people. I believe it would be wiser to direct efforts toward making the public aware of what exactly LLMs are, what they are (and aren't) capable of, and what they absolutely should not be used for.
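To give a sense of how low the bar already is, here's a minimal sketch of local, CPU-only chat. It assumes the llama-cpp-python package is installed and some small open-weights chat model has already been downloaded as a GGUF file; the file name below is just a placeholder, not a recommendation:

```python
# Minimal sketch: chatting with a locally stored open-weights model, no internet
# connection and no GPU required (CPU inference just runs slower).
# Assumes llama-cpp-python is installed and a GGUF model file was downloaded beforehand.
from llama_cpp import Llama

llm = Llama(model_path="./some-small-chat-model.gguf", n_ctx=2048, verbose=False)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hi, who am I talking to?"}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```

Once something like this runs on a teenager's gaming PC there is no server left to regulate, which is why I think awareness beats prohibition here.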
Software being able to be run locally doesn't mean it can't be banned commercially and treated like virus software by security suites. That alone would significantly reduce the harm it's currently free to do.
The LLMs that are available for local use aren't being commercialized, though (and in fact some of them forbid commercial use through their terms of use). What's more, as the only way they would end up on someone's computer is if the user actively sought them out, having them be detected as malware by antivirus software would just mean the user would, at best, grant an exception for the model's file if they're savvy enough to do so, or at worst disable the antivirus entirely, exposing them to many more threats than unwise use of an LLM. Also, remember who we're talking about; banning something is liable to make it even more desirable to a teenager, if only for the sake of rebelling against authority.
Stepping out of the technical concerns and into the social ones: we're not talking about malware barging in uninvited, this is something that has to be deliberately installed and configured by the user. Banning LLM use, especially in a way that can be easily circumvented, would only obstruct access to information about LLMs without deterring anyone determined enough to get one on their machine (and a lonely teenager desperate for companionship of any kind definitely qualifies), which IMO makes the situation even more dangerous than if LLMs were established in the public consciousness as software anyone could get. I also do not think that an LLM's potential for harm is sufficient motive to classify it as malware. Yes, tragic deaths can be traced back to their misuse, but the same can be said of internet chatrooms or, of all things, the Slenderman games, and I think we both agree that IRC clients, Discord and the Slenderman games shouldn't be classified as malware either.
This ultimately falls to parents needing to teach their children about the dangers of misusing LLMs (which obviously means they themselves need to be taught what LLMs are) and, should it come down to it, stepping in to stop them from using them. I believe a blanket ban would get in the way of that more than it would help.
I'm not disputing that local LLMs are non-commercial, I'm disputing that that means there's no point in banning e.g. character.ai. And if the only way to download a local LLM is through the usual piracy back-channels, because they have been banned at the commercial level, that instantly impedes the overwhelming majority of users. I just don't think the local-only use-case is the big threat here.
Banning services like character.ai isn't going to remove the demand for it, and what character.ai is doing can be replicated with local-only software that already exists, and, more importantly, can be set up easily. We're not talking piracy back-channels here, there are resources available for doing this by installing a single program (and this is just one of many more options, including effective drop-in replacements for what character.ai provides).
character.ai and similar online services are at the forefront because they happen to be the most convenient to use with the local options still being niche, but should they disappear I can guarantee the existing users will swap to those faster than you can say "moving target", and they will also get even easier to use in the process.
Should those online LLM chatbot services be forced to take accountability for what happens through their platform? Absolutely. Will outright banning them accomplish anything good? I don't think so.
If it were banned we would be talking piracy back-channels, though, because of said ban.
We wouldn't, because the models that are used for restricted/paid online services like character.ai aren't the same models as the ones that are freely available on places like Ollama's repository or Hugging Face (and they cannot be, because giving access to the model's file would be equivalent to giving unrestricted access to whoever has the hardware for it, so the companies which do have access to these closed-source commercial models are obviously keeping them close to their chest). Banning the former would have absolutely no effect on the latter.
And I advocate banning both.
One of my biggest concerns about generative AI in general is that it will allow people and corporations to feel as though they can abandon editorial responsibility for what these autonomous systems produce.
If we don't set social norms and legal precedents to the contrary, I think this will be an extremely undesirable development.
How about we just make the board and the executive suite individually civilly and criminally liable for the outcomes of the products their company makes? Then we'd see some fucking guard rails.

Like all men of power, when he talked of prices worth paying, you could be sure of one thing. Someone else was paying.
Because the entire point of forming any form of corporation (doesn't matter if it's a privately owned LLC, publicly traded on the stock markets or any of their countless regional derivatives) is to limit the liability of the owners. It is the only reason why they were invented as a legal construct in the UK in 1855 and from there started to spread all over the world.
If you want to hold company owners liable (aside from deliberate actions or gross negligence), you would have to completely undo the last 170 years of the development of economic law of the entire world.
I'm not saying it's a good idea, but for me "we've been doing it that way for a really long time" isn't a good reason not to do something. "It would be really complicated because we've been doing it for so long" is a good reason to do it slowly and carefully, but not necessarily a good reason not to do it.
Yup, that's pretty much my idea.
Note that they didn't say the owners, just the board and C-suite. You know, the people who actually make and execute the decisions.
Under UK laws, C-Suite and Named Executives are still liable in many decisions and repercussions.
Directors have legal duties under the Companies Act 2006 including:
Duty to promote company success
Duty to exercise reasonable care, skill and diligence
Duty to avoid conflicts of interest
Personal liability can arise if:
They knowingly allowed unsafe products to be developed/sold
They were negligent in their oversight duties
They breached their fiduciary duties
They acted outside their authority
Directors can face:
Civil lawsuits for damages
Disqualification from being a director
Criminal charges in serious cases of negligence or misconduct
Personal financial liability if found to be negligent
While companies generally shield directors from liability, this protection can be "pierced" if directors are found to be personally negligent or complicit in harmful decisions. This is especially true regarding child safety, where there are enhanced duties of care.
Since Character.AI seems to be targeting youth, the directors can be charged if found to be knowingly complicit in allowing their LLMs to be vile.
I don't care what you say, there is no way this could have been predicted!
Given the huge amount of incest-related porn and erotica, it absolutely is predictable.
I'll try to up my sarcasm in the future to avoid these confusions!
Tbh, not the sort of topic I appreciate sarcasm on. Everyone's mileage may vary, but kids being harmed is a push button for me, personally.
I didn't clock it, because I do tend to assume genuine discussion here.
It's actually possible to make a valid point via sarcasm in my opinion.
What I was trying to say is that an AI company hosting a "Your mom and sister" chatbot could literally only end one way.
PS - not trying to be offensive. I just tend to default to sarcasm when I see things that I think are incredibly stupid, and the name of this chatbot fit the bill.
I'm aware, I often use the tool.
I am aware now though as I said, I didn't clock it.
You can choose how you interact. I would not have found "cranking up the sarcasm" clarifying. This situation goes beyond stupid into dangerous and deliberately harmful to children, to me. The users generally make the C.AI bots; that one was as likely as not built for fetish use. I don't know if any of them are made by the company.
Still dangerous and literally perpetuating sexual abuse of children.
K