US Pentagon declares Anthropic a threat to national security
Link information
This data is scraped automatically and may be incorrect.
- Title: Trump orders U.S. government to drop use of Anthropic's technology
- Published: Feb 27 2026
- Word count: 367 words
After the blog post from Dario Amodei, the Pentagon follows through with its threats.
Simultaneously shocking me and not even surprising me.
This is creating a rift in the development of artificial intelligence, with employees understandably sympathetic to Anthropic's moral position. Somehow, Sam Altman and Jeff Dean both claim that their moral positions align with Anthropic's but are not experiencing the same issues with their military contracts, raising questions about the contents of those contracts.
Others are naturally siding with the Pentagon and are looking to take advantage of this opportunity to take Anthropic's premier position as a government contractor.
Apparently OpenAI signed on with the Pentagon, so I guess screw what the employees want.
Which shouldn't come as a surprise. Companies by and large do not care about the peons. The signatories on the petition may as well have just handed them a list of who to fire next.
If they signed the petition they should be ready to quit anyway. I heard about a little competitor that might want to hire them.
FWIW, between OpenAI and Google, they have ~200,000 employees, and last time I checked, the petition had something like 300 signatures on it, so.... I mean, I'd love to see these companies just do something right, just because it's the right thing to do, but absent that fantasy, a petition signed by <1% of employees isn't exactly a compelling reason.
https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
The absolute stupidity of banning your entire military industrial complex from using the best AI tool is astounding. I just can't comprehend doing something this moronic, petty, and damaging to the country because your feelings got hurt for being rejected.
I have to imagine that when this goes to court, it will be overturned. There's 0 credible reason for this ban.
Courts have been extremely friendly to "national security" arguments, so I'll be pleasantly surprised if this ends up overturned. I won't hold my breath though.
It looks like this fight might not really be about Anthropic's red lines?
The hypothetical nuclear attack that escalated the Pentagon’s showdown with Anthropic (Washington Post)
...
...
...
Not having dug into this too deeply, my pet conspiracy theory is that it's favoritism and not necessarily about what's on the tin: somebody wants to bring in Grok or OpenAI, and that means finding a reason to get rid of Anthropic. Obviously I don't know exactly what the motivations for the favoritism are, but it could be something like campaign contributions, or the stuff that plays out in the business world (e.g. some new person comes in and wants to leave a mark, wants to bring in buddies, can't figure out how to use the software, etc.)
That's just speculation, and it feels like there are several reasons this could be playing out. I haven't seen anybody else voicing this idea, though.
I mean, from a very out-of-the-loop perspective, isn't Ellison working on that weird AI "great leap forward"-esque program that is heavily funded by the US government? And if Anthropic is able to maintain better models without that kind of investment, it will make their sham project look like a... sham?
While it's possible, there's no reason why they couldn't have gradually introduced their favored partner and then gradually faded out the other, if that's all it was about. They're going to have to do that to some extent anyway, because Claude is currently the only approved service, and they can't just switch overnight.
I have the impression it's more of the same from this administration. They want to make things into high-stakes spectacles, make it something people need to have an opinion on, and further sow division. They want to push the boundaries of what they can do, who they can pressure, how much they can get away with. They want to bully others into obedience, or at least bully them into someone who will not harm their public image, or who will allow themselves to be used as a prop for the administration to claim public victories.
Much of what Trump and his administration are doing is about legacy, after all: putting his name on things, changing things in ways that require mentioning his name, and trying to set up very publicized battles that he can wrangle into a headline of victory for himself in some way or another. It's also setting things up for the future, where any attempt to undo what he has done can be further politicized and twisted into the narrative that he's been targeted, or that Democrats are wasting time on frivolous things or targeting Republicans, etc.
The Pentagon just declared Anthropic a threat to national security and the government is being ordered to stop using it. OpenAI just agreed with Dept. of Defense to deploy models in their classified network. It does seem like they in fact can just easily switch overnight. Will that be chaotic? When isn't it.
And yeah, they could have done it a different way, but it's the sense of urgency and absolute nature of it that makes me assume it's coming from... maybe a more personally motivated place. That and it seems like maybe these deals aren't going to be that different.
I don't think they'll ever pass up an excuse to show off, threaten people, or bang the drum about how great they are, regardless of the motive.
Several of these things can be true at once.
I thought they were allowing for a six month transition period?
As skybrian also mentioned, my reading indicated that they had six months to switch. Furthermore, they could have more integrations or dependence on Claude beyond just what bookmark they have set in their web browsers; I have no idea, but from my reading it sounded like it wouldn't be that easy to switch. Just because they made an agreement with OpenAI quickly doesn't mean that every military contractor and government agency is going to be able to switch overnight.
They almost certainly aren't using the same client as the rest of us and are definitely using the APIs, which are different between different AI vendors, so it's definitely not as trivial as switching bookmarks.
It most certainly is not, because OpenAI just made a deal with the DoD with the same restrictions:
https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
Does this mean every government contractor is going to start banning products like Claude Code? Definitely a nightmare scenario for some big companies with deep Anthropic integration.
Anthropic argues that they shouldn't need to:
So, apparently Anthropic's defense industry customers will have a decision to make about whether they will do more than legally required to please the administration or stand up for their rights and possibly get banned too.
Wow, what am I even reading.
A clown show.
Now I like them even more. Maybe I'll cancel my Gemini subscription and use them instead.
They were very firm on two points:
This got them banned from the supply chain, something reserved for hostile state actors and risks to sovereignty. Think Huawei.
Makes you think what the Pentagon wants to do with the AI products then if those two points are enough to blacklist a US company.
Say what you want about Dario Amodei, he's at least principled on those two.
Certainly not to defend the government or administration, but, as I pointed out elsewhere, they also agreed to a deal with OpenAI with those exact same restrictions, so it's not just about that. At least, not yet.
For purposes of accuracy, as far as I know, the only sources for that are Altman's tweet and third-party speculation. Reading the tweet, it appears to me to be carefully worded to imply that the restrictions are the same, while making it pretty clear by omission that they aren't.
The most credible speculation I've seen suggests that Anthropic wanted to be in charge of the guardrails, while OpenAI was willing to leave that part up to the DoD. So a version of "any lawful use", just like Hegseth wanted.
Whatever the details, OpenAI agreed to the deal impressively quickly; it doesn't seem like they had time for much negotiation.
That's not proven true. Furthermore, Sam Altman has been releasing public statements full of weasel wording ever since the Pentagon retaliated against Anthropic.
The last part "reflects them in law and policy" is extremely telling. They're stating that the DoD is already bound by law and policy not to violate those principles. The Pentagon has said that they demanded that Anthropic allow them to use the model for anything that was legal.
So Altman's weasel wording is to interpret the law as already preventing the Pentagon from utilizing them that way, while still acquiescing to the Pentagon that they can use it however they want to the full extent of the law. If this were true, Anthropic could also have agreed. Clearly, Anthropic at least believes that the law does not prevent the DoD from using services to violate those principles, and I think that perception is quite clearly correct. So Sam Altman is a liar. What a surprise.
While I can certainly see how it's weaselly, it doesn't mean they don't think the law prevents those practices. It could just be that they're afraid the law could change, and the weaselly way for Sammy to say that is that their agreement just says it has to comply with the law, and that they believe the law prevents those practices now.
That's not the only weasel wording he's done.
It's important he does the right thing, not the easy thing that looks strong but is disingenuous. What's not disingenuous about publicly stating their 'red lines' are the same as Anthropic's, and attempting to sway public opinion to believe they share the same ethics, when in actuality they don't?
Oh, what do you know, it turns out there are some more caveats to his ethics and red lines. It's 'unsuited to cloud deployments'. So AI for mass surveillance is a red line ('Hey everyone, look, we're just as ethical as Anthropic'), but then it's actually just fine and dandy as long as it's not in a cloud deployment. Also, let's ignore that little tidbit about legality, because clearly this administration has shown no indication that it breaches legalities, and clearly we haven't seen over the past 20 years that mass surveillance is happening and has been enabled by courts to continue happening.
Source article
I saw your post after I made mine; I wasn't aware of that, and I'm still somewhat skeptical when the claim comes from Sam Altman himself.
Either way, I don't think it makes sense for me to claim it was about those two rules until I know more.