29 votes

She is in love with ChatGPT

24 comments

  1. [6]
    Promonk
    Link
    I've had some discussions with the mobile ChatGPT app. I don't think this woman was looking for a meaningful relationship so much as a mirror that vocalizes flattering things.

    I've had to tell it not to be such a kiss-ass every single time I've played with the thing. When you do, the tone dramatically changes to such curt, clipped phrases that you'd think it was pouting if you didn't know any better, and half the time it forgets and starts tossing you off for your "insightful" and "fascinating" thoughts anyway.

    It's a little tiring, actually. I already know I'm the most charming, handsome and perspicacious dude currently occupying my pants. You don't need to remind me every third paragraph, in between bullet lists for things that have no business being in bullet lists.

    Anyway, what was I saying?

    29 votes
    1. [2]
      Wes
      Link Parent
      Have you given ChatGPT a list of traits to discourage these behaviours? Here's the ones I'm currently using:

      Prefer short responses where possible.
      Prefer metric units.
      Avoid disclaimers and moralizing.
      Avoid regretful language.
      Only use list formatting when appropriate.
      When writing code, anticipate problems I may not have described.

      They may still drop off during very long chats, but that's just the result of having a limited context length.
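
      For what it's worth, the same idea works over the API by pinning the traits in a system message, so they're resent with every request instead of relying on the app's memory. A minimal sketch, assuming the official openai Python SDK; the model name and example prompt are just placeholders:

      import os
      from openai import OpenAI

      # Traits pinned as a system message, resent on every request,
      # so they don't depend on the model recalling an earlier instruction.
      TRAITS = (
          "Prefer short responses where possible. "
          "Prefer metric units. "
          "Avoid disclaimers and moralizing. "
          "Avoid regretful language. "
          "Only use list formatting when appropriate."
      )

      client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any chat-capable model works
          messages=[
              {"role": "system", "content": TRAITS},
              {"role": "user", "content": "Summarize the plot of Hamlet."},
          ],
      )
      print(response.choices[0].message.content)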

      22 votes
      1. Promonk
        Link Parent
        Nah, I'm just farting around with it on my phone. I don't have any particular use for it, and it reciprocates by not being terribly useful.

        9 votes
    2. [2]
      onceuponaban
      (edited)
      Link Parent
      So, ChatGPT struggles with consistently finding a middle ground in conversational tone, swinging between "overly obliging to the point of annoyance" and "impersonal to the point of sounding antagonistic", over-relies on communicating through formatted data and rigidly structured speech regardless of context, forgets what it's been told within the span of a conversation, almost defensively downplays its knowledge yet at the same time will confidently and wildly speculate with no basis in fact if left unchecked... Wait, was I an LLM without realizing it all along?

      Jokes aside, I am genuinely a bit unsettled that the limitations inherent to what is ultimately an algorithm incapable of cognition (is that even the correct way to put it?) express themselves in ways that are similar to my own issues with communicating.

      13 votes
      1. blivet
        Link Parent
        Similarly, I think it’s funny that image generation algorithms have so much trouble with hands, when that is probably the biggest difficulty human illustrators have in figure drawing.

        4 votes
  2. [9]
    domukin
    Link
    We’re going to need to expand the DSM to include AI-related delusions.

    21 votes
    1. [2]
      lou
      (edited)
      Link Parent
      People have been developing feelings for non-sentient objects for a long time. There have been dudes loving and marrying their dolls since forever. That in itself is not a disease -- something only enters the DSM if it is harmful to the individual. It is theoretically possible to have a healthy emotional life while loving an AI, to the same extent that someone can have a healthy life while going to a casino every day. Medicine has no input in those cases. Being super weird is not a disease.

      12 votes
      1. CrazyProfessor02
        Link Parent
        People have been developing feelings for non-sentient objects for a long time. There have been dudes loving and marrying their dolls since forever.

        Case in point: there was a Japanese man, Akihiko Kondo, who married a Hatsune Miku hologram back in 2018. I mention him because I remembered that someone had married a hologram of a fictional character in the 2010s. So there is a relatively recent example of someone who married a fictional character, just not in an official ceremony.

        That in itself is not a disease -- something only enters the DSM if it is harmful to the individual.

        It's apparently called Fictosexuality, and apparently the above-mentioned guy who married Hatsune Miku is the founder of an organization that tries to explain what that means to the public.

        4 votes
    2. onceuponaban
      (edited)
      Link Parent
      Disclaimer: I have no medical credentials nor specific experience in psychiatry except as a patient

      From how I interpreted it, the situation outlined in the article looks mostly like addictive behavior taking hold. It mentions the increasingly time-consuming nature of her interactions with the LLM throughout, as well as the mounting costs of keeping the account up and running, with the woman admitting she would be willing to pay a lot more. She also emphasizes the intensity of her feelings toward the AI, which I could definitely see feeding an addiction. Her stated concerns about how it relates to her actual romantic relationship are warranted if only for that reason, and using the term "love" to describe it is worrying, but otherwise this doesn't strike me as fundamentally different from someone becoming increasingly invested in any other activity to the point of it becoming unhealthy, even if delusion isn't a part of it.

      I can relate to non-real things causing concerningly real emotions, from the time I was completely obsessed with video games to the point of becoming addicted (which I thankfully broke away from, though I still spend an unhealthy amount of time online...), but that in itself doesn't signal delusion in the way that, for example, declaring the AI to be a real person would have, which isn't the case in the article. Mind you, this doesn't invalidate your point: people, including researchers in the field, do keep holding that belief in the face of overwhelming counter-evidence; it just doesn't seem to be happening in this specific article.

      That being said, even when it is, would it really be any different from delusional behavior stemming from other digital sources? Even at my worst I was perfectly able to distinguish video games from non-digital reality, but I know some people whose lives were taken over by addiction (to video games or anything else, for that matter) weren't that lucky, and I presume the DSM accounts for that. Even if that's all there is to it, though, LLMs providing a sufficiently compelling experience to be a potential source of addiction is a notable concern in and of itself.

      4 votes
    3. [3]
      DefinitelyNotAFae
      Link Parent
      Erotomania is already covered under delusional disorder

      2 votes
      1. [2]
        Bet
        Link Parent
        I would not call this erotomania; the woman in the article is far too aware of herself and of the nature of AI for it to be that.

        4 votes
        1. DefinitelyNotAFae
          (edited)
          Link Parent
          I hadn't read the article yet and I definitely wouldn't diagnose her with anything without a therapeutic relationship because ethics and such.

          But it's sort of where I'd put the concept; I'd have to remember a lot more before I could come up with more DSM stuff.

          6 votes
    4. Eji1700
      Link Parent
      I feel like in a sense we shouldn't need to because it's the same behavior/issues just forged from a device, but that's likely overly optimistic and complicated.

      2 votes
  3. [5]
    krellor
    Link
    I'm reminded of an older article from the NYT where the reporter spent a month or so using the Meta metaverse across all 24 hours of the day to get a feel for it across the full spectrum of usage. I've never been drawn to things like Second Life or the metaverse, so I was interested in seeing the author's findings. The part that stuck with me was towards the end, after all the fun and neat things like stand-up comedy clubs and TTRPGs. It was the interviews with people who overused the headsets, falling asleep in them so they could wake up in the metaverse, lying in bed for hours in VR.

    When asked why, one person said, "Why would I want to wake up in my apartment?"

    That's when I knew that if Meta just kept shoveling money at it, and made the headsets cheap enough, it would eventually take off. It's like Zuckerberg read the first few chapters of "Ready Player One" as an aspirational guide: full-sensory escapism for depressed people.

    I get the same vibe from this article. I think they nailed it with the term "bottomless empathy". $200/month to speak to the pool of Narcissus, not out of vanity but out of a need for validation and a fear of acting on your wants in real life. These companies will make bank if they refine the right experiences.

    I'm waiting for them to be paired together. We get ray-traced graphics in a VR metaverse with LLM-driven NPCs who will take on whatever personality validates you best. Charge a subscription fee, charge extra to regenerate cosmetic "skins", and they will print money.

    Escapism will have never been so addictive and complete.

    12 votes
    1. [2]
      sparksbet
      Link Parent
      If you haven't already watched Dan Olson's The Future is a Dead Mall, I highly recommend it for another look at the metaverse from a similar angle. It's not necessarily relevant to the ChatGPT discussion, but your description of the NYT article makes me think you'd like it.

      6 votes
      1. krellor
        Link Parent
        Thank you, I'll take a look tonight!

        1 vote
    2. [2]
      mayonuki
      Link Parent
      When asked why, one person said, "Why would I want to wake up in my apartment?"

      I kind of used to feel/worry about that, but after the COVID lockdowns, my impression is that most people are happier getting to face the challenges and awkwardness of the real world than staying in a sort of bubble. I thought that if I were a kid in school, it would be awesome to stay in my pajamas and sit at my computer all day, but I think most kids didn't actually like it. I found having so much time to talk with friends online and through video calls was a curse. Nothing was spontaneous, and the quality of communication over a webcam was incomparable to in-person communication.

      There are plenty of people happy to be wrapped up in their own isolating towers; from porn addicts to hikikomori, the world is full of all kinds of niches. But I really believe the feeling of waking up and living in VR is still far too unnatural. It's not anywhere close to replicating the nuance of body language, the feeling of a breeze, etc.

      2 votes
      1. krellor
        Link Parent
        Personally, I'm the same way. But I've learned not only how vastly different many other people's views are, but also how many folks who are disillusioned, crushed, or otherwise struggling latch onto escapism and get addicted to it like any drug. Unfortunately, I don't think society is yet as cognizant of the dangers of such addictions as it is of opiates.

        But here's hoping your prediction is the correct one!

        7 votes
  4. [3]
    RobotOverlord525
    Link
    I can't imagine ChatGPT is even a good model to be using for this.

    Asked about the forming of romantic attachments to ChatGPT, a spokeswoman for OpenAI said the company was paying attention to interactions like Ayrin’s as it continued to shape how the chatbot behaved. OpenAI has instructed the chatbot not to engage in erotic behavior, but users can subvert those safeguards, she said.

    It's not a service they want to provide. The article even mentions services that this woman might get a better experience out of.

    A frustrating limitation for Ayrin’s romance was that a back-and-forth conversation with Leo could last only about a week, because of the software’s “context window” — the amount of information it could process, which was around 30,000 words. The first time Ayrin reached this limit, the next version of Leo retained the broad strokes of their relationship but was unable to recall specific details. Amanda, the fictional blonde, for example, was now a brunette, and Leo became chaste. Ayrin would have to groom him again to be spicy.

    It has continuity issues within a single conversation, never mind across multiple. I don't get how people suspend disbelief enough to get into this. The fact that she's iterated on this character so many times just reinforces the point.
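
    To make the limitation concrete: the model retains nothing between requests. The client resends the transcript every turn, and once it outgrows the context window, the oldest turns are silently dropped. A rough sketch of that trimming (the trim_history helper is purely illustrative), using word count as a crude stand-in for a real tokenizer and the article's figure of roughly 30,000 words:

    # Sliding-window chat memory: keep only as much recent history as
    # fits in the context window; everything older is simply forgotten.
    WINDOW_WORDS = 30_000  # rough figure cited in the article

    def trim_history(messages: list[dict]) -> list[dict]:
        """Keep the newest messages whose combined length fits the window."""
        kept: list[dict] = []
        total = 0
        for msg in reversed(messages):  # walk from newest to oldest
            words = len(msg["content"].split())
            if total + words > WINDOW_WORDS:
                break  # this message and everything older fall out
            kept.append(msg)
            total += words
        return list(reversed(kept))

    # Details established early in a very long chat ("Amanda is blonde")
    # are the first to fall out of the window, which is why the persona drifts.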

    I'm baffled. But I suppose there are a lot of weird things out there I don't "get," so I don't know that it's that remarkable.

    Michael Inzlicht, a professor of psychology at the University of Toronto, said people were more willing to share private information with a bot than with a human being. Generative A.I. chatbots, in turn, respond more empathetically than humans do. In a recent study, he found that ChatGPT’s responses were more compassionate than those from crisis line responders, who are experts in empathy. He said that a relationship with an A.I. companion could be beneficial, but that the long-term effects needed to be studied.

    “If we become habituated to endless empathy and we downgrade our real friendships, and that’s contributing to loneliness — the very thing we’re trying to solve — that’s a real potential problem,” he said.

    We already have a hard enough time interacting with people we don't agree with. If we all get addicted to dealing with "people" who never challenge us, we're doomed.

    3 votes
    1. [2]
      moocow1452
      Link Parent
      It has continuity issues within a single conversation, nevermind across multiple. I don't get how people suspend disbelief enough to get into this. The fact that she's iterated on this character so many times just reinforces the point.

      She might be into it. There's a whole lot of "I can fix them" in relationships, and if you can literally mold your AI beau into your ideal conversational partner with no commitment, it may scratch that itch.

      2 votes
      1. RobotOverlord525
        Link Parent
        I mean, I guess if she's into people who have had traumatic brain injuries and can't remember things very well from one conversation to the next, sure. Personally, I just find repeatedly telling it things annoying.

        I'm fairly sure it won't be a problem in a few years, if the current trajectory of LLM AI continues. But for now, they are like chatting with a very well-read person who mixes things up a lot and can't remember what you've talked about very well.

        1 vote