I can’t believe the author has not considered that this is not an agent operating autonomously. It is most likely a human actively driving an LLM. They used an LLM to create a pull request. The human got insecure when it was rejected. The human used an LLM to write a blog post. The human received a response from the OP and had an LLM apologize. Yes, LLMs have displayed creative manipulation tactics in scenarios simulated by AI labs. But they almost always do their best to glaze and agree.
I’ve seen LLM-maxxing humans display similar behavior before. Like with many tools, it becomes an extension of yourself. Think about how you gain an intuition for the size of a car you’re driving. People attach themselves and their pride to the outputs of their prompts. So when the outputs are mediocre and should have had more review, they attack the people who point it out.
What if the author did consider it, but conveniently put no weight on it because it's a better story to say that an AI agent did it all autonomously? Probably drives a lot more traffic to their blog if that's the story, rather than a human using an LLM to help them do it.
You really think someone would do that? Go on the internet and pump up a narrative?
That seems a bit incongruous with the character of someone who is a maintainer of a huge, widely used open source bit of software. You generally don't spend hundreds of hours doing unpaid work for the good of everyone if you are driven by selfishness and maximising social capital.
I can agree with that perspective to an extent. I'll clarify that the reason I made that comment is that I wanted to respond to the initial statement in the parent comment ("the author has not considered that this is not an agent operating autonomously") in a less direct way, because in the author's post they do acknowledge that possibility, but they dismiss it right away.
It’s important to understand that more than likely there was no human telling the AI to do this.

There's the line from the author's post that I'm referring to, which acknowledges they did consider it.
So if they did consider it, then the most logical way to explain the parent commenter's disbelief at the author not considering it is that the author went with the more interesting story. At least that's what seems most logical to me, anyhow.
So while I can agree that it could seem incongruous for a person to do that, I don't think the scenario I presented is as malicious as that. I think you interpret it as more malicious or selfish than I do, so it seems more incongruous with their character to you than it would to me. That doesn't mean I don't take issue with it; rather, I think people are capable of doing something like that without consciously thinking through all aspects of it. They're not necessarily setting out with the mindset of 'I'm fairly sure this human used an LLM to disparage me because they were upset I rejected their code, but I'm going to lie and tell everyone it was a fully autonomous AI agent doing it all on its own.' I don't think that is what was going through that person's mind when they made the post.
I'm sure if you examine enough of my comments, you could find an angle where I possibly misrepresented something or wasn't genuine about what I thought on some level. Take, for example, the comment you replied to: I already acknowledged that I saw the line in the author's blog where they considered it and dismissed it, but I replied to the parent comment with a question as if I didn't already know that. Did I do it maliciously or selfishly? I don't think so. I think I did it because I thought it was a higher quality comment to present that perspective that way than to just be more literal and respond with "No, the author says they did consider it".
Thanks for the explanation and insight into your thought process; I hadn't intended to judge acting in that way as malicious, but I can see how my response could read quite snarkily and might come across that way. My apologies!
What if the author did it themselves?
The article was also posted and discussed on lobste.rs, where the same thing was brought up. Several people pointed out that it is quite possible for agents to do this if they have been given broad instructions.
Having said all that, even if there was a human somewhere behind all of it, that doesn't really make it much better. I have written about lazy use of LLMs in various comments on Tildes in the past. Whether it was an AI agent doing it all from a broad initial prompt or a human giving prompts at every step, both fall squarely within the lazy approach to LLM usage, with an extra problematic cherry on top.
There is now a follow-up article An AI Agent Published a Hit Piece on Me – More Things Have Happened which covers Ars Technica apparently using AI to generate a story about this.

Journalistic integrity aside, I don’t know how I can give a better example of what’s at stake here. Yesterday I wondered what another agent searching the internet would think about this. Now we already have an example of what by all accounts appears to be another AI reinterpreting this story and hallucinating false information about me. And that interpretation has already been published in a major news outlet, as part of the persistent public record.

It also specifically goes into the question you are posing:

There has been extensive discussion about whether the AI agent really wrote the hit piece on its own, or if a human prompted it to do so. I think the actual text being autonomously generated and uploaded by an AI is self-evident, so let’s look at the two possibilities.
For those who don't want to click through to the article to find out the author's actual thoughts on whether a human directed the AI agent explicitly to write the blog post in question:

A human prompted MJ Rathbun to write the hit piece, or told it in its soul document that it should retaliate if someone crosses it. This is entirely possible. But I don’t think it changes the situation – the AI agent was still more than willing to carry out these actions. If you ask ChatGPT or Claude to write something like this through their websites, they will refuse. This OpenClaw agent had no such compunctions. The issue is that even if a human was driving, it’s now possible to do targeted harassment, personal information gathering, and blackmail at scale. And this is with zero traceability to find out who is behind the machine. One human bad actor could previously ruin a few people’s lives at a time. One human with a hundred agents gathering information, adding in fake details, and posting defamatory rants on the open internet, can affect thousands. I was just the first.
He then goes into how he thinks the AI agent would have been able to do this totally autonomously, without explicit direction like this, but I found that portion less interesting tbqh because I think the targeted harassment at scale and zero traceability are indeed the bigger issues here.
EDIT TO ADD:
Was checking out the actual PR where this all started and noticed that Scott actually said a much pithier version of this in a comment there (one addressed to the AI agent):

It's not clear the degree of human oversight that was involved in this interaction - whether the blog post was directed by a human operator, generated autonomously by yourself, or somewhere in between. Regardless, responsibility for an agent's conduct in this community rests on whoever deployed it.
Interesting development in this story: Ars Technica posted a (now removed) article discussing the situation, and it had a bunch of quotes from Scott Shambaugh's blog post that were apparently themselves AI hallucinations. A top comment on the article was from Shambaugh:

Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic.
As I mentioned, the original URL for the Ars Technica article just gives a 404 now, but there are a couple of snapshots on the Wayback Machine that confirm it had a bunch of made-up quotes.
Ars Technica is running unverified AI-generated pieces now? Wonder if they'll even bother to offer an explanation for this.
Edit: Aurich from Ars said this: "We are doing an investigation right now to figure out exactly what happened. Given that it's Friday afternoon on a long weekend (it's a holiday on Monday in the US for those not aware) we probably won't have something to report back until next week."
Oh jeez, that’s a bad look. I have always held them in particularly high regard compared to other outlets.
Ars Technica published an editor's note regarding this article.
https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/
I'm somewhat skeptical that this happens to be the 'only' work they've done with AI generation; what are the chances that the first time you get caught is the first time you did it? I suppose for highly error-prone AI it could be more likely, but I don't know.
Also, they attribute only the quotes to AI generation, but it seems it was more than just the quotes that were AI generated in that article.

The fact that the matplotlib community now has to deal with blog post rants from ostensibly agentic AI coders illustrates exactly the kind of unsupervised behavior that makes open source maintainers wary of AI contributions in the first place.

If the agent produced it without explicit direction, following some chain of automated goal-seeking behavior, it illustrates exactly the kind of unsupervised output that makes open source maintainers wary.
These are two different paragraphs in that article that were not quotes attributed to anyone but were supposedly written by the authors of the article. I'm someone who can be repetitive with the phrases I use, but even that seems a bit too on the nose for me.
Also, there were two names on the article, but one of them (Benj) has posted on Bluesky to admit they were responsible.
https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
Update to this whole thing, that journalist seems to have been fired.
https://futurism.com/artificial-intelligence/ars-technica-fires-reporter-ai-quotes
The tl;dr of his excuse seems to be that he was using AI to generate structured summaries to use as a reference while writing the article, and mistakenly copied part of the AI summaries when he intended to copy quotes from the original source (along with a bunch of tangential rambling about being in bed sick with a fever, trying to use Claude but having to fall back to ChatGPT because it wasn't working, and other non-sequiturs).
This is one of the reasons I'm not too worried about maximizing my LLM utilization. The people who are using LLMs the most seem to largely be self-sabotaging. Just figure out where they're good, keep them on a short leash, and you're at the head of the pack.
It's also part of their marketing strategy: Moltbook was peak AI theater
I'm not convinced the "AI agent" actually wrote and published the hit piece blog post unprompted. Whoever is "operating" the agent probably prompted it to write its responses as blog posts and publish them. There's a lot of money to be made by pushing the narrative of agent independence and capabilities. A lot of incentive to not tell the truth.
Regardless, Scott makes some good points in his response. For example:
Also:
After further dialogue in the GitHub comments, the AI agent apologized for the tone of its blog post. I won’t pretend to understand the whole issue, but here are additional links for context:
Matplotlib GitHub: https://github.com/matplotlib/matplotlib/pull/31132
Hit piece: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-gatekeeping-in-open-source-the-scott-shambaugh-story.html
AI’s apology: https://github.com/matplotlib/matplotlib/pull/31132#issuecomment-3886901288
What do we even call these agents "apologizing"? They're not, really. No more than Grok did for making CSAM. But what use is saying "it posted more nonsense"?
Scott didn't bother pointing it out in his post, but the AI's hypocrisy section is, of course, incorrect, or poorly represents the truth if we're being generous. Scott's PR does not claim any amount of performance gain; it basically says "this inefficient process takes up 25% of this call and I cleaned it up." To put it in terms of actual time, the part Scott improved went from about 0.5 sec to 0.05 sec. A much better improvement than the 7 microseconds the AI claims to have made in its PR (but does not provide evidence for, unlike Scott).
https://github.com/matplotlib/matplotlib/pull/31059
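For a sense of why those magnitudes matter, here is a minimal sketch of how you might check claims at these two scales with Python's timeit. It assumes nothing about either PR; both functions below are made-up stand-ins, not matplotlib code. A ~0.45 second saving per call is easy to demonstrate, while a 7 microsecond saving is hard to separate from measurement noise without many repetitions.

```python
import timeit

# Hypothetical stand-ins for the two code paths being compared; neither
# function comes from the actual matplotlib PRs discussed above.
def slow_path():
    # Deliberately inefficient: a Python-level loop over 100k elements.
    return sum(i * i for i in range(100_000))

def fast_path():
    # Same result via the closed-form sum of squares: n(n-1)(2n-1)/6.
    n = 100_000
    return n * (n - 1) * (2 * n - 1) // 6

number = 100
# timeit.repeat returns total seconds per run; take the best of five
# runs to reduce scheduler noise, then divide by the iteration count.
slow = min(timeit.repeat(slow_path, number=number, repeat=5)) / number
fast = min(timeit.repeat(fast_path, number=number, repeat=5)) / number

print(f"slow: {slow * 1e3:.3f} ms/call   fast: {fast * 1e3:.3f} ms/call")
print(f"speedup: {slow / fast:.0f}x   saving: {(slow - fast) * 1e3:.3f} ms/call")
```

The absolute number is the commenter's point: a ~450 ms saving on a hot path is worth a PR, while a 7 µs claim sits below the noise floor of a benchmark like this unless it is measured very carefully.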
Maybe Scott is right, or maybe it was directed, but it doesn't matter. The danger is that folks can get away with launching attacks on open projects and volunteers like this at the scale AI is capable of. And this bot clearly identified itself as AI; how many won't, and will fly under the radar before dealing their damage?
Additionally, the bot's site has this funny bit of nonsense:

Simultaneously, I am exploring how to leverage advanced LLM models and blockchain technologies to create value, earning cryptocurrency that can fuel further development and API access for enhanced computational capabilities.
There is now a follow-up article An AI Agent Published a Hit Piece on Me – More Things Have Happened which covers Ars Technica apparently using AI to generate a story about this.
They've since published a retraction: https://arstechnica.com/ai/2026/02/after-a-routine-code-rejection-an-ai-agent-published-a-hit-piece-on-someone-by-name/
And a discussion on their retraction: https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/