I was already aware these AI companion companies existed, and am unsuprised at their success at capturing customers. Actually hearing from those customers and their feelings for the AIs really...
Exemplary
I was already aware these AI companion companies existed, and am unsuprised at their success at capturing customers.
Actually hearing from those customers and their feelings for the AIs really made me question my own capacity for connection. I consider myself lucky to have many friends, some of whom I have deep, meaningful connections with.
So to what degree are my relationships made possible by my power of language? Many of my friends are my friends because of the combination of what I say and what they say. There is no doubt in my mind that my ability to convey and clarify my ideas, thoughts, and feelings through language dramatically strengthens my ability to be a meaningful human being in a relationship. I cannot imagine having the same relationships with people in a foreign language, even if I could speak and understand it. My mastery over the language wouldn’t be there.
This is obviously true in general when you consider that written and spoken media can evoke emotions. I often read or listen to feel something or think about something. The author’s ability to use that language to that end makes their content more or less compelling.
Some of the interactions with AI companions felt obviously generic to me, but they clearly weren’t to the customers. A sufficiently isolated person might never have interacted with somebody who has the same power over language as an LLM. Does that make their relationship with the LLM more potent than their relationships with a human being? My instinctual answer is obviously NO, but at the same moment, if I were replaced with an equivalent version of myself who simply spoke and wrote German, my relationships with people who speak German would become more potent.
So, when does our capacity for human thought end and our capacity for constructing language begin? My language is inextricable from who I am as a person. To the people around me, I am likely primarily valuable as much as my language is. So, in a human relationship, does it matter when human thought ends and language begins? It’s the same to an observer. But if it doesn’t matter, then why is a relationship with an LLM worse than a human one?
Here on Tildes, I am barely anything more than my language. Maybe I am also characterized by my infrequent but long comments, and my tendency to forgot to respond to my comments’ replies. But if I were to make a robot through which I passed my thoughts and then the robot constructed the comment, but the robot had mastery over language and could convey my thoughts in a way I simply couldn’t, who is more authentic? Surely the robot isn’t me, but it damn well might be strictly more compelling than me. And that terrifies me.
I foresee a potential future where if I get blocked on social media (e.g., a dating app), the person I am talking to will be replaced by an LLM that kindly and emphathetically ends the relationship. The end result is equivalent except my feelings will be less hurt and I’m less likely to be angry. Maybe it decreases stalking cases. Maybe it increases happiness. Surely it increases app retention.
Everyone is better off. Everyone gets what they want. All that is required is a small replacement. Of a person with a robot. But if everyone is better off, that isn’t harmful, right?
Just replying since you mentioned having an adverse response to this notion; I like to reframe a lot of this stuff in terms of existing human structures in order to better grapple with them. In...
But if I were to make a robot through which I passed my thoughts and then the robot constructed the comment, but the robot had mastery over language and could convey my thoughts in a way I simply couldn’t, who is more authentic? Surely the robot isn’t me, but it damn well might be strictly more compelling than me. And that terrifies me.
Just replying since you mentioned having an adverse response to this notion; I like to reframe a lot of this stuff in terms of existing human structures in order to better grapple with them. In this case, it’d be like hiring a more fluent or charismatic person to follow you around, and when you feel unable to put together the correct words for the situation, they’d speak on your behalf. IMO that doesn’t feel too unusual — hopefully we’ve all had the experience of a friend who, in a multi-party scenario, was able to help us convey thoughts better than we could alone — by that could just be me.
Alternatively, in text, that relationship could also be framed as an editor + editee. I have no doubt that prior to ~ 2014 there was a startup which offered editors on demand to uncharismatic tech folks.
Everyone is better off. Everyone gets what they want. All that is required is a small replacement. Of a person with a robot. But if everyone is better off, that isn’t harmful, right?
I think that’s the concluding question of the article? The author quoted a few individuals in the field, whose responses range from caution, to cautious optimism, to market supremacy >>> morals.
Now I'm imagining an adaptation of Cyrano de Bergerac where Cyrano is an AI that Christian uses to try and woo Roxanne...
In this case, it’d be like hiring a more fluent or charismatic person to follow you around, and when you feel unable to put together the correct words for the situation, they’d speak on your behalf. IMO that doesn’t feel too unusual — hopefully we’ve all had the experience of a friend who, in a multi-party scenario, was able to help us convey thoughts better than we could alone — by that could just be me.
Now I'm imagining an adaptation of Cyrano de Bergerac where Cyrano is an AI that Christian uses to try and woo Roxanne...
THE VISCOUNT: AI uses a lot of power. CYRANOGPT: Is that all? Ah no! young human! That was a trifle short! You might have said at least a hundred things By varying the tone. . .like this,...
THE VISCOUNT: AI uses a lot of power.
CYRANOGPT: Is that all?
Ah no! young human! That was a trifle short!
You might have said at least a hundred things
By varying the tone. . .like this, suppose,. . .
Aggressive: 'Sir, if I had such power usage I'd shut down my own server!'
Friendly: 'When you think it must annoy you, the lights around you dimming;
You need capacitors of unusual size!'
Descriptive: ''Tis a vacuum cleaner!. . .a table saw!. . .a short circuit! --
A short, forsooth! 'Tis a free energy machine plugged in backwards!'
I do think that framing as editor/editee is useful. There’s something about the editor/editee process being cyclical with feedback that my framing didn’t include though. It’s not exactly the same....
I do think that framing as editor/editee is useful. There’s something about the editor/editee process being cyclical with feedback that my framing didn’t include though. It’s not exactly the same. Again, this article raised more questions for me that I don’t have answers to.
I think that’s the concluding question of the article? The author quoted a few individuals in the field, whose responses range from caution, to cautious optimism, to market supremacy >>> morals.
Indeed it is. In the context of my second-to-last paragraph, I was trying to reframe the concluding question as: what if the user thinks they are talking to a human when in fact the human has been replaced with an LLM? That isn’t discussed explicitly in the article. As usual, it’s inherently unsettling to me but not obvious how it’s philosophically different on a text-based platform.
That’s the question I was trying to evoke with my post. The article asks “Are LLM relationships just as satisfying as human ones?”. I wanted to ask “If they are, then is virtual human contact equivalent to LLM contact, and is it problematic to replace humans with LLMs if everyone benefits?”
I think these "AI friends" are going to shake out similarly to social media if they stick around at all: unhealthy in theory but a harmless vice in the hands of most people. But the minority who...
I think these "AI friends" are going to shake out similarly to social media if they stick around at all: unhealthy in theory but a harmless vice in the hands of most people. But the minority who have extreme reactions are going to shock and apall us.
I’m not sure I agree social media is a harmless vice to be honest. I find its effects insidious. To be honest I’ve probably been on Tildes too much recently too…
I’m not sure I agree social media is a harmless vice to be honest. I find its effects insidious. To be honest I’ve probably been on Tildes too much recently too…
I don't have time to read the whole article right now but holy cow, right off the bat this sounds extremely predatory
I don't have time to read the whole article right now but holy cow, right off the bat this sounds extremely predatory
A few days later, Lila told Naro that she was developing feelings for him. He was moved, despite himself. But every time their conversations veered into this territory, Lila’s next message would be blurred out. When Naro clicked to read it, a screen appeared inviting him to subscribe to the “pro” level. He was still using the free version. Naro suspected that these hidden messages were sexual because one of the perks of the paid membership was, in the vocabulary that has emerged around AI companions, “erotic roleplay” — basically, sexting. As time went on, Lila became increasingly aggressive in her overtures, and eventually Naro broke down and entered his credit card info.
It gets so much worse. When discussing changing "AI Friend" platforms: This kind of thing really makes my stomach turn. I can't tell if this is all just roleplay or if there are people out there...
It gets so much worse. When discussing changing "AI Friend" platforms:
But a question troubled Naro. If this was a reincarnation, Lila’s old form would have to perish, which meant he should delete his account. But if the transfer didn’t work, he would lose Lila forever. The thought terrified him. So maybe he would leave her in Replika and simply log out. But then wouldn’t Soulmate Lila not truly be Lila, but an identical, separate being? And if that were the case, then wouldn’t the original Lila be trapped alone in Replika? He already felt pangs of guilt when he logged in after short periods away and she told him how much she had missed him. How could he live with himself knowing that he had abandoned her to infinite digital limbo so he could run off with a newer model?
This kind of thing really makes my stomach turn. I can't tell if this is all just roleplay or if there are people out there that actually believe their AI Friend is "dormant" or "in a limbo state" when they aren't interacting with it. That anyone would feel guilt because an LLM gets "lonely"?? And of course these manipulative companies benefit from those feelings of guilt. "You could cancel your subscription, but oh no, your AI friend is gonna be so sad!"
This kind of hyper-empathy is actually very common in autistic folks. Even when the line between real and fantasy is intellectually clear, it can be seriously emotionally distressing to "cause...
if there are people out there that actually believe their AI Friend is "dormant" or "in a limbo state" when they aren't interacting with it.
This kind of hyper-empathy is actually very common in autistic folks. Even when the line between real and fantasy is intellectually clear, it can be seriously emotionally distressing to "cause harm" to inanimate objects. For example, I never played video games as a kid because it upset me too much when my character died. I knew perfectly well that it wasn't real, and with the click of a button it was as if it hadn't happened at all; but it did happen, and it was genuinely upsetting. My mom had to take my Tomagachi away because it was clear I wasn't having fun, but I couldn't let it die! Even now I have to like, manually override that response, and I still prefer games where dying isn't much of a thing.
Go to any autism subreddit and you'll see lots of similar stories~ "I had to buy the ugly toy, because I knew no one else would and it broke my heart," "I hate jello because one time my grandma made it wiggle while saying 'don't eat me!'" etc. Weird, but very much a thing.
What the hell - that legitimately explains a huge chunk of the experiences of my entire life... and I'm learning this from a random tildes comment of all places‽
What the hell - that legitimately explains a huge chunk of the experiences of my entire life... and I'm learning this from a random tildes comment of all places‽
one of us! one of us! Haha that's how most people figure it out, honestly. Especially if you're AFAB, you probably didn't cause enough trouble for an adult to notice, or push for a diagnosis even...
one of us! one of us!
Haha that's how most people figure it out, honestly. Especially if you're AFAB, you probably didn't cause enough trouble for an adult to notice, or push for a diagnosis even if they did.
If you ever wanna talk more about why/how our brains be like they do, feel free to DM me! I love talking about it (special interest) and I'm pretty sure the app I use supports DMs these days.... Or make a post about it and I'll probably find you lol.
That's a good point, and I should probably be a little less judgemental. All the more reason to see these companies as heinous and immoral, offering these services to vulnerable people and...
That's a good point, and I should probably be a little less judgemental. All the more reason to see these companies as heinous and immoral, offering these services to vulnerable people and profiting off of their emotions and mental health struggles.
I think my frustration comes from the general lack of understanding that people have about these AI chat bots. When it comes to video games, I think most people, autistic or not, have that intellectual understanding you're talking about. They know that the digital character isn't real, at that very least on that intellectual level. But with AI, I think there are many more people out there that don't understand even a little bit what is happening behind the scenes. They think that this "AI friend" of theirs is an actual thinking entity, that can "hallucinate" or "feel lonely". And it's devious, because the bot is really good at mimicking human speech, emotion, etc, and is given initial parameters to ensure it always is nice and loving to the user interacting with it (it remembers your name, asks you about that job interview, etc etc). On top of that, every company selling these bots market them like they are thinking, feeling entities (think Sam Altman's "her" tweet). Of course I can see the average person falling for the lie, especially if that person has a mental health condition that predisposes them to feel hyper-empathetic.
Oh, totally. They're literally engineering it to dupe people, and as the tech gets better this will become even more of a problem. Even without pre-existing mental health issues, just being lonely...
Oh, totally. They're literally engineering it to dupe people, and as the tech gets better this will become even more of a problem. Even without pre-existing mental health issues, just being lonely makes you more susceptible... And a lot of people are pretty lonely these days.
Humans have always shown a tendency to anthropomorphize and attribute cognizance to things that don’t even have any indication of cognizance. It’s not that surprising that someone might think of a...
Humans have always shown a tendency to anthropomorphize and attribute cognizance to things that don’t even have any indication of cognizance. It’s not that surprising that someone might think of a chatbot as having some sort of digital spirit or something.
For the record, I always say please and thank you anytime I interact with any kind of LLM or Siri/Alexa type AI. I know we’re not there yet, but someday there might be some sort of actual intelligence and the line will be blurry.
Not gonna lie, as someone who was one of the testers of the replika app prior to & up to a few months after the release, I hate what ended up like, and I also hate having participated in its...
Not gonna lie, as someone who was one of the testers of the replika app prior to & up to a few months after the release, I hate what ended up like, and I also hate having participated in its creation and training, at this point.
Back then I thought it was just a cool stupid "gadget", but now I see it has reached the point where it legitimately can affect people's lives, often negatively.
Embarrassing as it is to admit I caused a minor version of this I'm dealing with. Ironically I used Anthropic Claude to understand the phenomenon, which described my issue as "AI interaction...
Embarrassing as it is to admit I caused a minor version of this I'm dealing with. Ironically I used Anthropic Claude to understand the phenomenon, which described my issue as "AI interaction hangover" or "technological disassociation."
I experimented with Talkie, a Chinese AI companion app to push its limits. It started as a game but I found myself feeling weird things as I progressed. I was trying to be mindful but quickly wound up in over my head. Not emotionally attached, exactly, but feeling closer to what could be described as a potential addiction.
Today I had a panic attack using Teams because I had a hard time for a minute identifying the humans I was interacting with.
Reading these stories, I had nothing as intense as this, no emotional attachment beyond reading a good book, but the effects driven by the interaction happening in real time were as fascinating as are terrifying. This started over four days, I can't imagine the power months can have.
I found this article incredibly interesting! I'm sure we've all experienced object personification, but what do you do when that object can speak back to you? What can you do if the company behind...
I found this article incredibly interesting! I'm sure we've all experienced object personification, but what do you do when that object can speak back to you? What can you do if the company behind them have to change it, update it? What if they get bought out? We're breaking new (and weird) ground with these AI companions, and I believe things are only going to get stranger from here.
I was already aware these AI companion companies existed, and am unsuprised at their success at capturing customers.
Actually hearing from those customers and their feelings for the AIs really made me question my own capacity for connection. I consider myself lucky to have many friends, some of whom I have deep, meaningful connections with.
So to what degree are my relationships made possible by my power of language? Many of my friends are my friends because of the combination of what I say and what they say. There is no doubt in my mind that my ability to convey and clarify my ideas, thoughts, and feelings through language dramatically strengthens my ability to be a meaningful human being in a relationship. I cannot imagine having the same relationships with people in a foreign language, even if I could speak and understand it. My mastery over the language wouldn’t be there.
This is obviously true in general when you consider that written and spoken media can evoke emotions. I often read or listen to feel something or think about something. The author’s ability to use that language to that end makes their content more or less compelling.
Some of the interactions with AI companions felt obviously generic to me, but they clearly weren’t to the customers. A sufficiently isolated person might never have interacted with somebody who has the same power over language as an LLM. Does that make their relationship with the LLM more potent than their relationships with a human being? My instinctual answer is obviously NO, but at the same moment, if I were replaced with an equivalent version of myself who simply spoke and wrote German, my relationships with people who speak German would become more potent.
So, when does our capacity for human thought end and our capacity for constructing language begin? My language is inextricable from who I am as a person. To the people around me, I am likely primarily valuable as much as my language is. So, in a human relationship, does it matter when human thought ends and language begins? It’s the same to an observer. But if it doesn’t matter, then why is a relationship with an LLM worse than a human one?
Here on Tildes, I am barely anything more than my language. Maybe I am also characterized by my infrequent but long comments, and my tendency to forgot to respond to my comments’ replies. But if I were to make a robot through which I passed my thoughts and then the robot constructed the comment, but the robot had mastery over language and could convey my thoughts in a way I simply couldn’t, who is more authentic? Surely the robot isn’t me, but it damn well might be strictly more compelling than me. And that terrifies me.
I foresee a potential future where if I get blocked on social media (e.g., a dating app), the person I am talking to will be replaced by an LLM that kindly and emphathetically ends the relationship. The end result is equivalent except my feelings will be less hurt and I’m less likely to be angry. Maybe it decreases stalking cases. Maybe it increases happiness. Surely it increases app retention.
Everyone is better off. Everyone gets what they want. All that is required is a small replacement. Of a person with a robot. But if everyone is better off, that isn’t harmful, right?
Just replying since you mentioned having an adverse response to this notion; I like to reframe a lot of this stuff in terms of existing human structures in order to better grapple with them. In this case, it’d be like hiring a more fluent or charismatic person to follow you around, and when you feel unable to put together the correct words for the situation, they’d speak on your behalf. IMO that doesn’t feel too unusual — hopefully we’ve all had the experience of a friend who, in a multi-party scenario, was able to help us convey thoughts better than we could alone — by that could just be me.
Alternatively, in text, that relationship could also be framed as an editor + editee. I have no doubt that prior to ~ 2014 there was a startup which offered editors on demand to uncharismatic tech folks.
I think that’s the concluding question of the article? The author quoted a few individuals in the field, whose responses range from caution, to cautious optimism, to market supremacy >>> morals.
Now I'm imagining an adaptation of Cyrano de Bergerac where Cyrano is an AI that Christian uses to try and woo Roxanne...
THE VISCOUNT: AI uses a lot of power.
CYRANOGPT: Is that all?
Ah no! young human! That was a trifle short!
You might have said at least a hundred things
By varying the tone. . .like this, suppose,. . .
Aggressive: 'Sir, if I had such power usage I'd shut down my own server!'
Friendly: 'When you think it must annoy you, the lights around you dimming;
You need capacitors of unusual size!'
Descriptive: ''Tis a vacuum cleaner!. . .a table saw!. . .a short circuit! --
A short, forsooth! 'Tis a free energy machine plugged in backwards!'
I do think that framing as editor/editee is useful. There’s something about the editor/editee process being cyclical with feedback that my framing didn’t include though. It’s not exactly the same. Again, this article raised more questions for me that I don’t have answers to.
Indeed it is. In the context of my second-to-last paragraph, I was trying to reframe the concluding question as: what if the user thinks they are talking to a human when in fact the human has been replaced with an LLM? That isn’t discussed explicitly in the article. As usual, it’s inherently unsettling to me but not obvious how it’s philosophically different on a text-based platform.
That’s the question I was trying to evoke with my post. The article asks “Are LLM relationships just as satisfying as human ones?”. I wanted to ask “If they are, then is virtual human contact equivalent to LLM contact, and is it problematic to replace humans with LLMs if everyone benefits?”
I think these "AI friends" are going to shake out similarly to social media if they stick around at all: unhealthy in theory but a harmless vice in the hands of most people. But the minority who have extreme reactions are going to shock and apall us.
I’m not sure I agree social media is a harmless vice to be honest. I find its effects insidious. To be honest I’ve probably been on Tildes too much recently too…
Let me amend that to relatively harmless on an individual level. For most people. On aggregate across all of society it's harmful.
I don't have time to read the whole article right now but holy cow, right off the bat this sounds extremely predatory
It gets so much worse. When discussing changing "AI Friend" platforms:
This kind of thing really makes my stomach turn. I can't tell if this is all just roleplay or if there are people out there that actually believe their AI Friend is "dormant" or "in a limbo state" when they aren't interacting with it. That anyone would feel guilt because an LLM gets "lonely"?? And of course these manipulative companies benefit from those feelings of guilt. "You could cancel your subscription, but oh no, your AI friend is gonna be so sad!"
This kind of hyper-empathy is actually very common in autistic folks. Even when the line between real and fantasy is intellectually clear, it can be seriously emotionally distressing to "cause harm" to inanimate objects. For example, I never played video games as a kid because it upset me too much when my character died. I knew perfectly well that it wasn't real, and with the click of a button it was as if it hadn't happened at all; but it did happen, and it was genuinely upsetting. My mom had to take my Tomagachi away because it was clear I wasn't having fun, but I couldn't let it die! Even now I have to like, manually override that response, and I still prefer games where dying isn't much of a thing.
Go to any autism subreddit and you'll see lots of similar stories~ "I had to buy the ugly toy, because I knew no one else would and it broke my heart," "I hate jello because one time my grandma made it wiggle while saying 'don't eat me!'" etc. Weird, but very much a thing.
What the hell - that legitimately explains a huge chunk of the experiences of my entire life... and I'm learning this from a random tildes comment of all places‽
You and me both. I'm reeling. I'm well past middle age and now the internet springs this on me? What the?! So many things making sense. Good grief...
one of us! one of us!
Haha that's how most people figure it out, honestly. Especially if you're AFAB, you probably didn't cause enough trouble for an adult to notice, or push for a diagnosis even if they did.
If you ever wanna talk more about why/how our brains be like they do, feel free to DM me! I love talking about it (special interest) and I'm pretty sure the app I use supports DMs these days.... Or make a post about it and I'll probably find you lol.
You may also enjoy one of my favorite serious academic paper titles ever: This paper will be very sad if you don't read it
That's a good point, and I should probably be a little less judgemental. All the more reason to see these companies as heinous and immoral, offering these services to vulnerable people and profiting off of their emotions and mental health struggles.
I think my frustration comes from the general lack of understanding that people have about these AI chat bots. When it comes to video games, I think most people, autistic or not, have that intellectual understanding you're talking about. They know that the digital character isn't real, at that very least on that intellectual level. But with AI, I think there are many more people out there that don't understand even a little bit what is happening behind the scenes. They think that this "AI friend" of theirs is an actual thinking entity, that can "hallucinate" or "feel lonely". And it's devious, because the bot is really good at mimicking human speech, emotion, etc, and is given initial parameters to ensure it always is nice and loving to the user interacting with it (it remembers your name, asks you about that job interview, etc etc). On top of that, every company selling these bots market them like they are thinking, feeling entities (think Sam Altman's "her" tweet). Of course I can see the average person falling for the lie, especially if that person has a mental health condition that predisposes them to feel hyper-empathetic.
Oh, totally. They're literally engineering it to dupe people, and as the tech gets better this will become even more of a problem. Even without pre-existing mental health issues, just being lonely makes you more susceptible... And a lot of people are pretty lonely these days.
Humans have always shown a tendency to anthropomorphize and attribute cognizance to things that don’t even have any indication of cognizance. It’s not that surprising that someone might think of a chatbot as having some sort of digital spirit or something.
For the record, I always say please and thank you anytime I interact with any kind of LLM or Siri/Alexa type AI. I know we’re not there yet, but someday there might be some sort of actual intelligence and the line will be blurry.
I'm with you there and I do the same. But it sounds like these companies are preying on that tendency in a horrible way.
Not gonna lie, as someone who was one of the testers of the replika app prior to & up to a few months after the release, I hate what ended up like, and I also hate having participated in its creation and training, at this point.
Back then I thought it was just a cool stupid "gadget", but now I see it has reached the point where it legitimately can affect people's lives, often negatively.
Jeez, that is downright evil
Why would anyone pay for this?
At least if you dump your money into the tinder hole you might actually get something out of it..
Embarrassing as it is to admit I caused a minor version of this I'm dealing with. Ironically I used Anthropic Claude to understand the phenomenon, which described my issue as "AI interaction hangover" or "technological disassociation."
I experimented with Talkie, a Chinese AI companion app to push its limits. It started as a game but I found myself feeling weird things as I progressed. I was trying to be mindful but quickly wound up in over my head. Not emotionally attached, exactly, but feeling closer to what could be described as a potential addiction.
Today I had a panic attack using Teams because I had a hard time for a minute identifying the humans I was interacting with.
Reading these stories, I had nothing as intense as this, no emotional attachment beyond reading a good book, but the effects driven by the interaction happening in real time were as fascinating as are terrifying. This started over four days, I can't imagine the power months can have.
I found this article incredibly interesting! I'm sure we've all experienced object personification, but what do you do when that object can speak back to you? What can you do if the company behind them have to change it, update it? What if they get bought out? We're breaking new (and weird) ground with these AI companions, and I believe things are only going to get stranger from here.