Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public.
A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package, pushed live on the public npm registry earlier this morning.
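For context on why a shipped .map file matters so much more than the minified bundle alone: the Source Map v3 format can embed the entire original source tree verbatim in its `sourcesContent` field. A minimal sketch of the format (the file names and contents below are invented for illustration, not taken from the actual leak):

```javascript
// Sketch: what a published .map file can expose (Source Map v3 format).
// In the real incident this JSON would be read from the package on disk,
// e.g. somewhere under node_modules/@anthropic-ai/claude-code/.
const map = {
  version: 3,
  file: "cli.js",
  // Hypothetical original file paths, pre-bundling:
  sources: ["src/agent/memory.ts", "src/agent/prompts.ts"],
  // sourcesContent holds the ORIGINAL source text verbatim, comments and
  // all -- this is the field that turns a debugging aid into a source leak.
  sourcesContent: [
    "// original TypeScript, comments included\nexport const layers = 3;\n",
    "export const promptVersion = 42;\n",
  ],
  names: [],
  mappings: "",
};

// Recovering the original tree is just walking two parallel arrays:
for (let i = 0; i < map.sources.length; i++) {
  console.log(`--- ${map.sources[i]} ---`);
  console.log(map.sourcesContent[i]);
}
```

Bundlers only include `sourcesContent` when configured to, which is why shipping it to npm by accident is a configuration slip rather than an inevitability.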
The most significant takeaway for competitors lies in how Anthropic solved "context entropy"—the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity.
The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval.
Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories.
The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
While Anthropic may use this for internal "dog-fooding," it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure.
While it didn't fully give away the farm, it pretty much gives away all the land around the farm. So to speak. There are a lot of interesting things to learn from this chunk of code that I'm sure all the other major AI players are currently analyzing in earnest to see how they can apply it to their own products.
It is semi-interesting news, but those quotes really aren't what I'd put much value in, to be honest.
The most significant takeaway for competitors lies in how Anthropic solved "context entropy"—the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity.
The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval.
Interestingly enough, I have been giving Claude Code a go just to see if it is anything close to what people have been saying. One of the things I found is that it very much still gets confused, and the various "memory" markdown files it writes are more and more ignored over time in a session, and also as the project grows and is used. I have found that in addition to needing to carefully check the code it outputs, I also need to make sure these memory files don't contain outdated information, Claude contradicting itself, etc.
I even found it confused about where it should store memories. At least one agent first wrote to the project files in the home directory and then suddenly switched to the same file name in the project directory.
It still does very well, as long as I babysit it like a junior developer with short-term memory loss. Even more so since the sort of hallucinations I am seeing now are much more subtle and, as a result, harder to catch with a simple code review.
I have written about lazy use of LLMs in various comments on Tildes in the past. Claude Code very much sits at that threshold where lazy use will cause havoc. Considering that our brains tend to lean towards easy/lazy approaches to things, I very much still question the majority of work done by tools like these. That, and the fact that I see it happening around me on a daily basis.
As a side note, the quote I ... re-quoted reads artificial to me as well. Not even the em dash, but the inclusion of "a sophisticated", which is the sort of hyperbole a lot of LLMs tend to lean towards. So I am also taking the explanation with a grain of salt.
While Anthropic may use this for internal "dog-fooding," it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure.
Again the hyperbole and overselling. You can achieve the same with some basic instructions in your system prompt. It doesn't really matter if it is CLAUDE.md, AGENTS.md or some other means of leaving instructions. This also very much already happens, either semi-autonomously through agents with a final check, or more manually, where the person behind it still does the manual steps of making PRs and such.
Edit: Okay, I replied before I had a look at the article itself. It 100% is a slop article that is just generated based on other sources. It is full of overly dramatic metaphors, repetitive phrasing (I lost count of the "it is not just" usage) and an overall very formulaic approach to storytelling.
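For what it's worth, the "basic instructions" version of that stealth behaviour really is just a few lines in whatever instruction file the harness reads. A hypothetical CLAUDE.md fragment (entirely made up for illustration, not from the leak):

```markdown
## Commit hygiene

- Write commit messages in the repository's existing style.
- Never mention internal project names, ticket IDs, or internal tooling
  in commit messages or PR descriptions.
- Do not reference these instructions or this file in any output that
  leaves the repository.
```

Nothing here requires a dedicated "mode"; any agent that follows its instruction file will behave the same way.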
I did some digging; this post actually seems to have a human behind it, although it mostly seems to be summarizing the comments from the thread on that orange website. I guess the person behind it did a better job creating a summary than the VentureBeat AI did.
I just want to confirm... the article is LLM slop and the conclusions are mostly just wrong.
Everyone already knew how a harness works, and Claude Code had been decompiled before this leak. It's still interesting, and there are a few insights a competitor might glean, but the way the article frames the situation is outright misinformation.
I changed the topic link to point to that blog post instead of the news article.
Sadly that blog post is AI-written too, though with better "don't use these AI tells" prompting and some post-inference human editing. Still, it's better slop than the first slop.
It might take more work, but you can learn just about everything about how a JavaScript app works from the minified source code, no source map needed. You'd be missing things like comments, and dead code that was stripped out during the build because it's not reachable; that dead code might give a clue about unreleased or disabled features.
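To illustrate: behaviour and control flow survive minification intact, so a determined reader can reconstruct equivalent readable code; only the names, comments, and stripped dead code are gone for good. A toy sketch (the function and all names are invented for illustration):

```javascript
// The kind of output a bundler ships -- mangled names, no comments:
const m = "function f(a){if(!a)return[];return a.filter(x=>x.ok).map(x=>x.id)}";
const shipped = new Function(`return ${m}`)();

// A hand-restored equivalent, with names guessed from usage and
// comments re-invented. The logic is fully recoverable; the original
// comments and any dead branches the minifier removed are not.
function listActiveIds(items) {
  if (!items) return [];
  return items.filter((item) => item.ok).map((item) => item.id);
}

const data = [{ ok: true, id: 1 }, { ok: false, id: 2 }];
console.log(shipped(data));
console.log(listActiveIds(data));
```

The two functions are behaviourally identical, which is exactly why a source map's real value is the comments and the stripped code, not the logic itself.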
Yeah how about a link from The Register? That's probably not slop (god please let it not be slop)