43 votes

US Pentagon declares Anthropic a threat to national security

29 comments

  1. [6]
    TonesTones

    The Trump administration placed AI firm Anthropic on a far-reaching national security blacklist Friday, directing federal agencies to stop using its technology and banning any other company that does business with the military from working with it, effective immediately.

    After the blog post from Dario Amodei, the Pentagon follows through with its threats.

    Defense Secretary Pete Hegseth followed up late Friday, saying in a post on X that he was declaring Anthropic a supply-chain risk. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote.

    The supply-chain-risk designation issued by Hegseth late Friday was an extraordinary escalation, ranking a leading American AI company alongside the likes of Chinese and Russian firms seen as a danger to the United States.

    Simultaneously shocking me and not even surprising me.

    By Friday afternoon, more than 550 employees at Google and ChatGPT-maker OpenAI had signed an open letter in support of Anthropic’s position and calling on their bosses to stand up to the Pentagon. Earlier in the week, Jeff Dean, Google’s chief AI scientist, said he was opposed to the technology being used for surveillance and repeated his long-standing opposition to autonomous weapons.

    This is creating a rift in the development of artificial intelligence, with employees understandably sympathetic to Anthropic's moral position. Somehow, Sam Altman and Jeff Dean both claim that their moral positions align with Anthropic's but are not experiencing the same issues with their military contracts, raising questions about the contents of those contracts.

    Elon Musk’s xAI could also benefit from Anthropic’s dispute with the Pentagon. Defense officials have said xAI has already agreed to the Pentagon’s terms for working on classified systems. The entrepreneur jumped on Michael’s social media thread Friday, saying “Anthropic hates Western Civilization.”

    Others are naturally siding with the Pentagon and are looking to take advantage of this opportunity to take Anthropic's premier position as a government contractor.

    18 votes
    1. [5]
      mild_takes

      Apparently OpenAI signed on with the Pentagon, so I guess screw what the employees want.

      7 votes
      1. [2]
        CptBluebear

        Which shouldn't come as a surprise. Companies by and large do not care about the peons. The signatories on the petition may as well have just handed them a list of who to fire next.

        10 votes
        1. teaearlgraycold

          If they signed the petition they should be ready to quit anyway. I heard about a little competitor that might want to hire them.

          3 votes
      2. Eric_the_Cerise

        FWIW, between OpenAI and Google, they have ~200,000 employees, and last time I checked the petition had something like 300 signatures, so... I mean, I'd love to see these companies just do something right, just because it's the right thing to do, but absent that fantasy, a petition signed by <1% of employees isn't exactly a compelling reason.

        5 votes
      3. updawg

        In a post on [twitter] announcing competitor OpenAI's deal with the Defense Department, the company's CEO Sam Altman, who previously cited similar concerns, said his agreement with the government included safeguards like the ones Anthropic had asked for.

        "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," he said. "The Do[D] agrees with these principles, reflects them in law and policy, and we put them into our agreement."

        https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban

        2 votes
  2. [2]
    derekiscool

    The absolute stupidity of banning your entire military industrial complex from using the best AI tool is astounding. I just can't comprehend doing something this moronic, petty, and damaging to the country because your feelings got hurt for being rejected.

    I have to imagine that when this goes to court, it will be overturned. There's 0 credible reason for this ban.

    7 votes
    1. Minori

      Courts have been extremely friendly to "national security" arguments, so I'll be pleasantly surprised if this ends up overturned. I won't hold my breath though.

      6 votes
  3. [9]
    skybrian

    It looks like this fight might not really be about Anthropic's red lines?

    The hypothetical nuclear attack that escalated the Pentagon’s showdown with Anthropic (Washington Post)

    Anthropic recently added a former deputy chief of staff to President Donald Trump to its board and explored taking investment from a fund led by Donald Trump Jr., according to people familiar with the pitch. Yet its leaders have also repeatedly clashed with the White House in public.

    In a coruscating post on X in October, David Sacks, Trump’s top AI adviser, accused the company of “fear-mongering” and pursuing “regulatory capture” in an attempt to bend the government to its will. Anthropic leaders have criticized one of the administration’s key AI policies in recent weeks, even as the dispute with the Pentagon was brewing.

    “There’s the subtext of Anthropic not being aligned with the MAGA agenda,” said Steven Feldstein, a senior fellow at the Carnegie Endowment, who researches the use of AI in war. “This is as much of a political fight as a military use issue.”

    ...

    Anthropic has differed from its rivals in simultaneously courting the administration for contracts while opposing it in other areas of policy.

    When the White House was pushing an executive order that would preempt restrictive state-level AI laws this winter, Anthropic was promoting a safety-oriented AI bill in California.

    Amodei has also criticized the Trump administration’s drive to allow exports of American AI chips to China. On the sidelines of the World Economic Forum in Davos, Switzerland last month, Amodei compared the policy to “selling nuclear weapons to North Korea.” After meeting with Amodei this month on Capitol Hill, Sen. Elizabeth Warren (D-Massachusetts) said she would introduce legislation to sharply limit any exports.

    ...

    Insiders in the world of defense technology argue that the current fight between the Pentagon and Anthropic appears to be more philosophical than technical, and that the administration had already soured on the AI company — even as rank-and-file military personnel were finding its services increasingly useful.

    ...

    “The administration and the Republicans are looking for ways to get rid of Anthropic once and for all,” the person familiar with the military’s work with AI companies said. The Pentagon clash could provide an opportunity to carry that through. In January, Hegseth issued a directive for the military to embrace AI as though the country were at war.

    5 votes
    1. [7]
      firedoll

      Insiders in the world of defense technology argue ...Anthropic appears to be more philosophical than technical, and that the administration had already soured on the AI company

      Not having dug into this too deeply, my pet conspiracy theory is that it's favoritism and not necessarily about what's on the tin: that somebody wants to bring in Grok or OpenAI, and that means finding a reason to get rid of Anthropic. Obviously, I don't know exactly what the motivations for the favoritism are, but it could be something like campaign contributions, or the stuff that plays out in the business world (e.g. some new person comes in and wants to leave a mark, wants to bring in buddies, can't figure out how to use the software, etc.)

      That's just speculation though and it feels like there's several reasons this could be playing out. I haven't seen anybody else voicing this idea, though.

      8 votes
      1. rosco

        I mean, from a very not-in-the-loop perspective, isn't Ellison working on that weird AI "great leap forward"-esque program that is heavily funded by the US government? And if Anthropic is able to maintain better models without that kind of investment, it will make their sham project look like a... sham?

        4 votes
      2. [5]
        Grumble4681

        While it's possible, there's no reason why they couldn't have gradually introduced their favored partner and then gradually phased out the other, if that's all it was about. They're going to have to do that to some extent anyway, because Claude is currently the only approved service and they can't just switch overnight.

        I have the impression it's more of the same from this administration. They want to make things into high-stakes spectacles, make it something people need to have an opinion on, and further sow division. They want to push the boundaries of what they can do, who they can pressure, how much they can get away with. They want to bully others into obedience, or at least bully them into someone who will not harm their public image, or who will allow themselves to be used as a prop for the administration to claim public victories.

        Much of what Trump and his administration are doing is about legacy, after all: putting his name on things, changing things in ways that require mentioning his name, and trying to set up very publicized battles that he can wrangle into a headline of victory for himself in some way or another. It's also setting up a bit for the future, where any attempt or follow-through to undo what he has done can be further politicized and twisted into the narrative that he's been targeted, that Democrats are wasting time on frivolous things or targeting Republicans, etc.

        2 votes
        1. [4]
          firedoll

          They're going to have to do that to some extent with what they're doing now, because Claude is currently the only approved service now, and they can't just easily switch overnight.

          The Pentagon just declared Anthropic a threat to national security and the government is being ordered to stop using it. OpenAI just agreed with Dept. of Defense to deploy models in their classified network. It does seem like they in fact can just easily switch overnight. Will that be chaotic? When isn't it.

          And yeah, they could have done it a different way, but it's the sense of urgency and absolute nature of it that makes me assume it's coming from... maybe a more personally motivated place. That and it seems like maybe these deals aren't going to be that different.

          I don't think they'll ever skip an excuse to try and show off, threaten people, or bang the drum about how great they are, regardless of the motive.

          Several of these things can be true at once.

          2 votes
          1. skybrian

            I thought they were allowing for a six month transition period?

            2 votes
          2. [2]
            Grumble4681

            The Pentagon just declared Anthropic a threat to national security and the government is being ordered to stop using it. OpenAI just agreed with Dept. of Defense to deploy models in their classified network. It does seem like they in fact can just easily switch overnight. Will that be chaotic? When isn't it.

            As skybrian also mentioned, my reading indicated that they had 6 months to switch. Furthermore, they could have more integrations with, or dependence on, Claude beyond just which bookmark they have set in their web browsers; I have no idea, but again, from my reading it sounded like it wouldn't be that easy to switch. Just because they made an agreement with OpenAI quickly doesn't mean that every military contractor and government agency is going to be able to switch overnight.

            1 vote
            1. sparksbet

              They almost certainly aren't using the same client as the rest of us and are definitely using the APIs, which are different between different AI vendors, so it's definitely not as trivial as switching bookmarks.
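
              For a rough illustration: here's a sketch of the minimal request shapes the two vendors' public chat APIs expect, simplified from their public docs (model names are just examples, and real integrations add streaming, retries, and tool use on top of this). Even the "hello world" payloads differ.

              ```python
              def anthropic_request(prompt: str) -> dict:
                  # Anthropic Messages API: system prompt is a top-level field,
                  # max_tokens is required, auth uses an x-api-key header plus
                  # a pinned anthropic-version header.
                  return {
                      "url": "https://api.anthropic.com/v1/messages",
                      "headers": {
                          "x-api-key": "<API_KEY>",
                          "anthropic-version": "2023-06-01",
                      },
                      "body": {
                          "model": "claude-sonnet-4-5",
                          "max_tokens": 1024,
                          "system": "You are a helpful assistant.",
                          "messages": [{"role": "user", "content": prompt}],
                      },
                  }

              def openai_request(prompt: str) -> dict:
                  # OpenAI Chat Completions API: system prompt is just another
                  # message role, max tokens is optional, auth is a Bearer token.
                  return {
                      "url": "https://api.openai.com/v1/chat/completions",
                      "headers": {"Authorization": "Bearer <API_KEY>"},
                      "body": {
                          "model": "gpt-4o",
                          "messages": [
                              {"role": "system", "content": "You are a helpful assistant."},
                              {"role": "user", "content": prompt},
                          ],
                      },
                  }

              a, o = anthropic_request("hi"), openai_request("hi")
              # Different endpoints, auth headers, and system-prompt placement,
              # so every call site (plus error handling) has to change.
              assert a["body"]["system"] and "system" not in o["body"]
              ```

              And that's before whatever classified-network plumbing sits on top of it.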

              1 vote
  4. [2]
    Minori

    Does this mean every government contractor is going to start banning products like Claude Code? Definitely a nightmare scenario for some big companies with deep Anthropic integration.

    5 votes
    1. skybrian

      Anthropic argues that they shouldn't need to:

      Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.

      So, apparently Anthropic's defense industry customers will have a decision to make about whether they will do more than legally required to please the administration or stand up for their rights and possibly get banned too.

      5 votes
  5. [2]
    kingofsnake

    Wow, what am I even reading.

    4 votes
  6. [8]
    Tukajo

    Now I like them even more. Maybe I'll cancel my Gemini subscription and use them instead.

    3 votes
    1. [7]
      CptBluebear

      They were very firm on two points:

      • No autonomous weapons without a human.
      • Not to be used for mass surveillance on people.

      This got them designated a supply-chain risk, something reserved for hostile state actors and risks to sovereignty. Think Huawei.

      Makes you wonder what the Pentagon wants to do with the AI products, then, if those two points are enough to blacklist a US company.

      Say what you want about Dario Amodei, he's at least principled on those two.

      16 votes
      1. [6]
        updawg

        Makes you think what the Pentagon wants to do with the AI products then if those two points are enough to blacklist a US company.

        Certainly not to defend the government or administration, but, as I pointed out elsewhere, they also agreed to a deal with OpenAI with those exact same restrictions, so it's not just about that. At least, not yet.

        4 votes
        1. post_below

          For purposes of accuracy, as far as I know the only sources for that are Altman's tweet and 3rd party speculation. Reading the tweet, it appears to me to be carefully worded to imply that the restrictions are the same, while making it pretty clear by omission that they aren't.

          The most credible speculation I've seen suggests that Anthropic wanted to be in charge of the guardrails, while OpenAI was willing to leave that part up to the DoD. So a version of "any lawful use", just like Hegseth wanted.

          Whatever the details, OpenAI agreed to the deal impressively quickly; it doesn't seem like they had time for much negotiation.

          13 votes
        2. [3]
          Grumble4681

          That's not proven true. Furthermore, Sam Altman has been releasing public statements full of weasel wording ever since the Pentagon retaliated against Anthropic.

          In a post on X announcing competitor OpenAI's deal with the Defense Department, the company's CEO Sam Altman, who previously cited similar concerns, said his agreement with the government included safeguards like the ones Anthropic had asked for.

          "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," he said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

          The last part "reflects them in law and policy" is extremely telling. They're stating that the DoD is already bound by law and policy not to violate those principles. The Pentagon has said that they demanded that Anthropic allow them to use the model for anything that was legal.

          So Altman's weasel wording is to interpret the law as already preventing the Pentagon from utilizing them that way, while still acquiescing to the Pentagon that they can use it however they want to the full extent of the law. If this were true, Anthropic could also have agreed. Clearly, Anthropic at least believes that the law does not prevent the DoD from using these services to violate those principles, and I think that perception is quite clearly correct. So Sam Altman is a liar. What a surprise.

          11 votes
          1. [2]
            updawg

            Clearly at least Anthropic believes that the law does not prevent the DoD from using services to violate those principles.

            While I can certainly see how it's weaselly, it doesn't mean they don't think the law prevents those practices. It could just be that they're afraid the law could change, in which case the weaselly move for Sammy is to say that their agreement merely requires compliance with the law, while claiming they believe the law prohibits those practices today.

            1. Grumble4681

              That's not the only weasel wording he's done.

              “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions,” Altman wrote in a memo that was viewed by CNBC. “These are our main red lines.”

              “This is a case where it’s important to me that we do the right thing, not the easy thing that looks strong but is disingenuous,” Altman wrote. “But I realize it may not “look good” for us in the short term, and that there is a lot of nuance and context.”

              It's important he does the right thing, not the easy thing that looks strong but is disingenuous. What's not disingenuous about publicly stating their 'red lines' are the same as Anthropics, and attempting to sway public opinion to believe they share the same ethics, while in actuality they don't?

              “We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons,” Altman wrote.

              Oh, what do you know, turns out there are some more caveats to his ethics and red lines: "unsuited to cloud deployments". So AI for mass surveillance is a red line ("Hey everyone, look, we're just as ethical as Anthropic"), but then it's actually just fine and dandy as long as it's not in a cloud deployment. And let's ignore that little tidbit about legality, because clearly this administration has shown no inclination to breach the law, and clearly we haven't seen, over the past 20 years, mass surveillance happening and courts enabling it to continue.

              Source article

              6 votes
        3. CptBluebear

          I saw your post after I made mine, I wasn't aware of that and I'm still somewhat uncertain when the claim comes from Sam Altman himself.

          Either way, I don't think it makes sense for me to claim it was about those two rules until I know more.

          2 votes