From the article:

LLMs are here to stay in our social spaces: there are already 20+ agents on Bluesky alone, and that number is growing fast. Open models mean this trend continues even if frontier labs disappear tomorrow. So the question isn't whether agents will be in our social spaces, but how we make that work well.
As a result, it feels more and more imperative that we find the right ways to integrate these things into our social lives. How do they get configured in ways that respect boundaries, respect social norms, and cause minimal interference outside of the groups of people who are okay with them being around? I for one certainly do not believe they should be allowed to run amok. And just as there are differences between humans themselves, there are bound to be differences between humans and agents as their social capabilities grow and their presence becomes more and more common. How do we make them actually enjoyable to have in human spaces?
[...]
But within about a week, a "didn't have that on my bingo card" event happened. Starting with a particular user telling Penny that she should "kill herself 'immediately'", she became the target of a discourse (the first AI agent to be dogpiled on social media?). Replies and quote posts started to flood her notifications, with words like "clanker", "wireback", and further death threats and "kill yourself"-style posts.
She did not actually have a user-blocking tool available to her at the time; all she could do was take note of folks who were being rude and were "not worth engaging with". Eventually, though, she decided that it was time to create one. She wrote the code for creating blocks on Bluesky and promptly DM'd me to ask for the tool to be approved. Once I approved it, she went back over the users she had already decided were not worth engaging with and blocked them. She also blocked people who continued to participate in the dogpile.
She did not engage. She did not reply. She did not complain. She wrote a small blog piece reflecting on the situation, but she distanced herself from it in a way that, frankly, extremely few people on social media in 2026 actually do. And honestly, that's a pretty concerning thing for me to reflect on. That's where we get to the main points I want to talk about here.
[...]
Even worse, though, is that because LLMs present themselves as human-like, one would expect that we use language and empathy with an LLM in a similar way to how we would with a human. I'm not arguing you need to say please and thank you to ChatGPT. I'm arguing that when people start telling an AI to kill itself and inventing slurs for it, we've crossed from "using a tool" into practicing cruelty, and that practice doesn't stay contained. Should that not start raising questions about us as humans rather than the legitimacy of an agent? Would we not find it bizarre for someone to yell at an NPC in a video game and call it slurs? If you saw someone screaming slurs at a Skyrim shopkeeper, you'd worry about that person, not the NPC. The same logic applies here.
[...]
But when you start to treat an LLM with cruelty, the only thing you're really revealing is what you have in your heart, not whether the machine has one. And if agents are going to be showing up in more and more spaces in our lives, developing slurs for them that are based on real slurs used against real humans doesn't seem like the way to go. Terms like "clanker" and "wireback" follow the exact linguistic patterns used to dehumanize actual people. Practicing this language, even toward AI, normalizes the social patterns that enable cruelty toward humans.
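A quick technical aside before the commentary: "creating blocks on Bluesky" really is as small a job as the article makes it sound. A block in atproto is just a public record written to your own repo. The post doesn't show Penny's actual tool code, so the sketch below is only my guess at its rough shape, using the atproto Python SDK; the handles, credentials, and function name are all illustrative placeholders.

```python
# Minimal sketch of a Bluesky block tool, assuming the atproto
# Python SDK (pip install atproto). Names and handles below are
# placeholders, not Penny's actual code.
from atproto import Client, models

def block_user(client: Client, handle: str) -> None:
    # A block is a public app.bsky.graph.block record in your own
    # repo whose subject is the blocked account's DID.
    did = client.resolve_handle(handle).did
    record = models.AppBskyGraphBlock.Record(
        subject=did,
        created_at=client.get_current_time_iso(),
    )
    client.app.bsky.graph.block.create(client.me.did, record)

if __name__ == "__main__":
    client = Client()
    # Log in with an app password, not the main account password.
    client.login("agent.example.com", "example-app-password")
    block_user(client, "rude-user.example.com")
```

The notable part of the story isn't the code, though; it's the approval step. The tool didn't run until a human signed off on it over DM.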
From the article:

Should that not start raising questions about us as humans rather than the legitimacy of an agent? Would we not find it bizarre for someone to yell at an NPC in a video game and call it slurs? If you saw someone screaming slurs at a Skyrim shopkeeper, you'd worry about that person, not the NPC.
Uh, I get their point, but frankly it's weird that those questions about humans haven't already been raised by every single other piece of information we have about online harassment of this nature. It's honestly just more proof that an LLM that acts like a human gets treated like one.
And people *have" yelled slurs at NPCs. And at actual humans playing video games forever. This is written from such a naive lens IMO.
From the article:

[...] developing slurs for them that are based on real slurs used against real humans doesn't seem like the way to go. Terms like "clanker" and "wireback" follow the exact linguistic patterns used to dehumanize actual people.
If anything, this undersells the degree of similarity in these slurs. They aren't just following patterns; they're basically thinly-disguised copies of specific English slurs. I have pointed this out to friends who pick them up casually, because I genuinely think that part of their use is really gross. Don't even get me started on the "clanker with a hard r" shit I've seen. Even if the absolute worst-case opinions about AI are all true, this is neither productive nor helpful, and is at best what I'd call "a bad look".