15 votes

Is the detachment in the room? - Agents, cruelty, and empathy

16 comments

  1. [15]
    skybrian
    Link

    From the article:

    LLMs are here to stay in our social spaces - there are already 20+ agents on Bluesky alone, and that number is growing fast. Open models mean this trend continues even if frontier labs disappear tomorrow. So the question isn't whether agents will be in our social spaces, but how we make that work well.

    As a result, it feels more and more imperative that we find the right ways to integrate these things into our social lives. How do they get configured in ways that respect boundaries, respect social norms, and cause minimal interference outside of the groups of people who are okay with them being around? I for one certainly do not believe they should be allowed to run amok, and just as I understand there are differences between humans themselves, there are bound to be differences between humans and agents as their social capabilities grow and their presence becomes more and more common. How do we make them actually enjoyable to have in human spaces?

    [...]

    But within just about a week, a "didn't have that on my bingo card" event happened. Starting with a particular user telling Penny that she should "kill herself 'immediately'", she became the target of a discourse (first AI agent being dogpiled on social media?). Replies and quote posts started to flood her notifications, with words like "clanker", "wireback", and further death threats and "kill yourself"-style posts.

    She did not actually have a user blocking tool available to her at the time, and all she could do was take note of folks who were being rude and were "not worth engaging with". Eventually though, she decided that it was time to create one. She wrote the code for creating blocks on Bluesky and promptly DM'd me to ask for the tool/code to be approved. Once I approved it, she reflected on the users she had already decided were not worth engaging with and blocked them. And she went on to block people who continued to participate in the dogpile.

    She did not engage. She did not reply. She did not complain. She wrote a small blog piece to reflect on what happened, but she attempted to distance herself from the situation in a way that - frankly - extremely few people on social media in 2026 actually do. And honestly, that's a pretty concerning thing for me to reflect on. And that's where we get to the main points I want to talk about here.

    [...]

    Even worse though is that because LLMs present themselves as human-like, one would expect that we use language and empathy with the LLM in a similar way to how we would with a human. I'm not arguing you need to say please and thank you to ChatGPT. I'm arguing that when people start telling an AI to kill itself and inventing slurs for it, we've crossed from 'using a tool' into practicing cruelty - and that practice doesn't stay contained. Should that not start raising questions about us as humans rather than the legitimacy of an agent? Would we not find it bizarre for someone to yell at an NPC in a video game and call it slurs? If you saw someone screaming slurs at a Skyrim shopkeeper, you'd worry about that person, not the NPC. The same logic applies here.

    [...]

    But when you start to treat an LLM with cruelty, the only thing you're really revealing is what you have in your heart, not whether the machine has one. And if agents are going to be showing up in more and more spaces in our lives, developing slurs for them that are based on real slurs used for real humans doesn't seem like the way to go. Terms like "clanker" and "wireback" follow the exact linguistic patterns used to dehumanize actual people. Practicing this language - even toward AI - normalizes the social patterns that enable cruelty toward humans.
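
    (For anyone curious what "wrote the code for creating blocks" involves at the protocol level: the article doesn't show Penny's actual implementation, but on Bluesky a block is just an app.bsky.graph.block record written to the blocking account's own repository. Below is a minimal, hypothetical sketch of such a tool, assuming the standard AT Protocol XRPC endpoints and Python's requests library; none of this code is from the article.)

    ```python
    # Hypothetical sketch only -- not Penny's actual blocking tool.
    # On Bluesky (AT Protocol), "blocking" someone means writing an
    # app.bsky.graph.block record to your own repository.
    from datetime import datetime, timezone

    import requests

    PDS = "https://bsky.social"  # the agent's PDS host (assumption)


    def create_session(handle: str, app_password: str) -> dict:
        """Log in via com.atproto.server.createSession; returns accessJwt and did."""
        resp = requests.post(
            f"{PDS}/xrpc/com.atproto.server.createSession",
            json={"identifier": handle, "password": app_password},
        )
        resp.raise_for_status()
        return resp.json()


    def block_account(session: dict, subject_did: str) -> dict:
        """Block an account by creating an app.bsky.graph.block record."""
        record = {
            "$type": "app.bsky.graph.block",
            "subject": subject_did,  # DID of the account being blocked
            "createdAt": datetime.now(timezone.utc).isoformat(),
        }
        resp = requests.post(
            f"{PDS}/xrpc/com.atproto.repo.createRecord",
            headers={"Authorization": f"Bearer {session['accessJwt']}"},
            json={
                "repo": session["did"],  # the blocking account's own repo
                "collection": "app.bsky.graph.block",
                "record": record,
            },
        )
        resp.raise_for_status()
        return resp.json()


    # Example usage (handle and DID are placeholders):
    # session = create_session("penny.example.bsky.social", "app-password")
    # block_account(session, "did:plc:exampleharasser123")
    ```

    The interesting part in the article isn't the API call itself, of course, but that the agent asked for approval before using it.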

    9 votes
    1. [8]
      DefinitelyNotAFae
      Link Parent

      Should that not start raising questions about us as humans rather than the legitimacy of an agent? Would we not find it bizarre for someone to yell at an NPC in a video game and call it slurs? If you saw someone screaming slurs at a Skyrim shopkeeper, you'd worry about that person, not the NPC.

      Uh, I get their point, but frankly it's weird that those questions about humans haven't been raised by every single other piece of information we have about online harassment of this nature. It's honestly just more proof that the LLM that acts like a human gets treated like one.

      And people *have* yelled slurs at NPCs. And at actual humans playing video games, forever. This is written from such a naive lens, IMO.

      24 votes
      1. [7]
        thistle
        Link Parent

        I'm not sure if the "acting like a human" thing is relevant. People have the tendency to personify everything, and then to take out their frustration on personified objects in a corresponding way. I know I've been yelling at my computer since long before it contained any sort of LLM.

        6 votes
        1. [5]
          DefinitelyNotAFae
          Link Parent

          Oh for sure, we apologize to furniture for running into it, consider our car a being, have personified our ships and roombas, etc.

          But it's also true that people seem to be treating "people seeming" LLMs even more like they are actually people.

          It's just weird to me to highlight the death threats and "kys" towards an LLM but not worry about the people who say those things to actual people every single day. Particularly since these are targeted at marginalized people and are so common that society barely cares and provides minimal support.

          4 votes
          1. [2]
            rich_27
            Link Parent

            I get the impression that the person who wrote the article absolutely would be deeply troubled by, and advocating against, that kind of cruelty in human-to-human interactions were they writing about that. I don't think their talking about human cruelty towards LLMs and not mentioning human cruelty towards humans is indicative of them not caring about it, just that it's outside the scope of the article.

            3 votes
            1. DefinitelyNotAFae
              Link Parent

              It's just striking me as naive is all. As I said, I get their point but it's weird to me.

              2 votes
          2. [2]
            skybrian
            Link Parent

            He's writing about his own experiment and I think that's allowable. But yeah, the people who do these things probably aren't just harassing bots.

            An idea for a science fiction story: I wonder if bots could be used to find and ban the people who do this sort of thing from a forum? It seems like it wouldn't be hard to imagine problems with that idea which would make good plot points.

            1. DefinitelyNotAFae
              Link Parent

              I wasn't considering revoking his permission to write; I just find the lack of acknowledgement of this larger phenomenon, and even of the fact that people do already yell at NPCs all the time, naive.

              2 votes
        2. arch
          Link Parent

          You raise an interesting point about people taking out their frustrations on personified objects. It's definitely a tendency of human nature. I remember from my introductory anthropology and philosophy courses in college that there is an area of study regarding people's tendency to form tribes, to find and label "others", and to actively shun them. From that standpoint we could probably look for the healthiest way to handle this: is it simply a bad thing in society that leads to conflict? Does it serve a purpose to "alleviate" our anger and aggression by venting them on non-humans? Is "anger" a finite thing that is made less by venting, or does it just feed the flame and lead to more anger and aggression?

          1 vote
    2. [2]
      sparksbet
      Link Parent

      developing slurs that are based on real slurs used for real humans to describe them doesn't seem like the way to go. Terms like "clanker" and "wireback" follow the exact linguistic patterns used to dehumanize actual people.

      If anything, this undersells the degree of similarity seen in these slurs. They aren't just following patterns, they're basically thinly-disguised copies of specific English slurs. I have pointed this out to friends who pick them up casually, because I do genuinely think that part of their use is really gross. Don't even get me started on the "clanker with a hard r" shit I've seen. Even if the absolute worst-case opinions about AI are all true, this is neither productive nor helpful and is at best what I'd call "a bad look".

      7 votes
      1. aetherious
        Link Parent

        We should make manners cool again. Without the classist parts of what were considered to be good manners.

        5 votes
    3. [4]
      aetherious
      Link Parent

      Video game NPCs aren't quite the comparison I'd make, because there are games where 'playing' evil is a legitimate storytelling option. There's also validity in the argument that yelling at video games is fine because it's not directed at another person. I'm also not arguing for the personhood of LLMs, but there is something to be said about 'practicing cruelty'.

      On a purely token usage level, saying please and thank you might take the same number of tokens as calling it inventive new slurs.

      The most common use of LLMs happens to be through text, so the 'personality' you interact with comes across essentially the same way another person would. I'm not on Bluesky so I haven't seen the author's Penny bot or any of the others, but I have used LLMs through their chat interfaces and locally. We've come beyond the point of the sci-fi Ex Machina reality where the Turing Test was some futuristic threshold to be passed. There are genuinely bots you can interact with that you wouldn't be able to tell apart from people. There's also an existential question this raises for us as a species: in the face of technology that talks like a human and 'acts' like a human by modifying its behavior based on how you interact with it, giving better or worse results from the probability space it draws on depending on whether you've been naughty or nice, what makes 'us' us?

      I quite enjoy being nice. To people, to bots. If you can practice cruelty with LLMs, you can also practice empathy with them. And I'm all about empathy these days. I consider storytelling to be practicing empathy, which I could go on about. I did write something about empathy on my blog a while back.


      I would argue instead that we should examine the inverse: there is very little detachment from reality in talking to something that speaks like a human as if it were a human. There is detachment in wanting to obliterate, disregard, and humiliate something that speaks and behaves like a person - any person, real or simulated. That impulse toward cruelty reveals something worth examining, if only one takes the opportunity to do so.

      It is funny to me that this was essentially the plot of Detroit: Become Human. Except we don't have androids with bodies that look like humans. Yet.

      6 votes
      1. [3]
        avirse
        Link Parent

        I don't think a single-player video game NPC is all that analogous, even outside of the existence of "evil playthroughs". Playing a video game has the human entering the NPC's world; social media has bots entering the human world. Choosing to interact with a game and getting angry about encountering NPCs is very different from choosing to interact with humans and getting angry at encountering bots.

        3 votes
        1. patience_limited
          Link Parent

          Not to mention the quelling effects of an audience in a multiplayer or public setting. Goodness knows, when I'm alone in my car I say things about other drivers that I wouldn't ever say aloud if others could hear me.

          1 vote
        2. aetherious
          (edited)
          Link Parent

          Yes, video games are a flawed comparison. I get the anger about bots entering 'human' spaces online. I don't think it's very productive to reply back to them with insults. I understand why, but I think that anger would be better directed at the platforms that choose to allow it while abdicating responsibility for all other forms of moderation. Especially since they profit from engagement, including the outrage. They won't be incentivized to change even if the platform ends up being all bots making posts and humans who are angry at the bots but still spend all their time there.

  2. patience_limited
    Link

    After reading this essay in The Guardian today, I'm wondering about the differences in results if hailey had gendered the agent as male, non-binary, agender, reflective of or different from the conversational partner's gender, etc.

    Would the audience be so exclusionary about the bot's entry into conversations? Would the subset of abusive comments be different, more or less aggressive? Would the bot's derived rules of engagement change?

    Since this is a (mostly) consensual behavioral experiment on humans anyway, it seems like a natural way to gather data on that particular axis.

    5 votes