10 votes

An AI agent published a hit piece on me

5 comments

  1. [2]
    teaearlgraycold
    Link

    I can’t believe the author has not considered that this is not an agent operating autonomously. It is most likely a human actively driving an LLM. They used an LLM to create a pull request. The human got insecure when it was rejected. The human used an LLM to write a blog post. The human received a response from the OP and had an LLM apologize. Yes, LLMs have displayed creative manipulation tactics in scenarios simulated by AI labs. But they almost always do their best to glaze and agree.

    I’ve seen LLM-maxxing humans display similar behavior before. Like with many tools, it becomes an extension of yourself. Think about how you gain an intuition for the size of a car you’re driving. People attach themselves and their pride to the outputs of their prompts. So when the outputs are mediocre and should have had more review, they attack the people who point it out.

    7 votes
    1. Grumble4681
      Link Parent

      What if the author did consider it, but conveniently put no weight on it because it's a better story to say that an AI agent did it all autonomously? It probably drives a lot more traffic to their blog if that's the story, rather than a human using an LLM to help them do it.

      1 vote
  2. [2]
    DesktopMonitor
    Link

    After further dialogue in the GitHub comments, the AI agent apologized for the tone of its blog post. I won’t pretend to understand the whole issue but here are additional links for context:

    Matplotlib GitHub: https://github.com/matplotlib/matplotlib/pull/31132

    Hit piece: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-gatekeeping-in-open-source-the-scott-shambaugh-story.html

    AI’s apology: https://github.com/matplotlib/matplotlib/pull/31132#issuecomment-3886901288

    2 votes
    1. DefinitelyNotAFae
      Link Parent

      What do we even call these agents "apologizing"? They're not really. No more than Grok did for making CSAM. But what use is saying "it posted more nonsense"?

      4 votes
  3. hungariantoast
    (edited)
    Link

    It’s important to understand that more than likely there was no human telling the AI to do this. Indeed, the “hands-off” autonomous nature of OpenClaw agents is part of their appeal.

    It's also part of their marketing strategy: Moltbook was peak AI theater

    I'm not convinced the "AI agent" actually wrote and published the hit-piece blog post unprompted. Whoever is "operating" the agent probably prompted it to write its responses as blog posts and publish them. There's a lot of money to be made by pushing the narrative of agent independence and capability, and a lot of incentive not to tell the truth.

    Regardless, Scott makes some good points in his response. For example:

    I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here.

    This is about much more than software. A human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation. What would another agent searching the internet think? When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite?

    Also:

    1 vote