44 votes

Anthropic rejects latest US Pentagon offer: ‘We cannot in good conscience accede to their request’

21 comments

  1. [6]
    post_below
    Wow I did not expect that

    23 votes
    1. [5]
      post_below
      This is a nice touch:
      Open AI and Google employees sign on

      5 votes
      1. [2]
        JCPhoenix
        "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the petition added.

        This is what I was wondering about. On the low-side (the unclassified networks), GenAI.mil uses Google Gemini. While I've yet to use it, several of my coworkers have. And from their comments, it does alright (though they're just using it for "typical" workplace purposes; summarizing, writing policies/SOPs, etc.). Would it be that difficult for Google (or OpenAI) to simply step in if Anthropic is kicked to the curb?

        I'm not saying that if Google and/or OpenAI are willing to allow "all lawful uses" and can easily replace Anthropic, Anthropic shouldn't be making a stand. If I were at Anthropic, whatever other companies do would be their business; living by your own principles is always worthwhile. Hopefully both Google and OpenAI are like "Yeah, nah, no go," too. However, my fear, as we've already seen with numerous companies, is that they'll simply be like "Yeah, do whatever! Just gib monies."

        5 votes
        1. post_below
          Would it be that difficult for Google (or OpenAI) to simply step in if Anthropic is kicked to the curb?

          Google might have trouble providing the same level of functionality; Gemini just isn't as good right now (benchmarks aside). But there's certainly nothing stopping them from saying yes. OpenAI models are close enough that they'd probably be able to swap in pretty easily.

          Just gib monies

          Agreed, I'd be amazed if the Pentagon couldn't find a replacement; the US government is the ultimate enterprise customer if you're willing to deal with the regulations. The solidarity from the employees is still great to see, though. One sliver of hope: the Trump admin is unpopular, and this is a good PR opportunity.

          2 votes
      2. [2]
        balooga
        "They're trying to divide each company with fear that the other will give in," the petition said, referring to the Department of War.

        "That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War," it added.

        xAI conspicuously absent from the conversation. Hmm, I wonder where they stand on the matter…?

        3 votes
        1. Grumble4681
          I would say that if all the others refuse to give in, and there's enough public concern about this, the pressure could be enough to keep them from caving. But people still use X despite Threads (which isn't much of a better alternative on the ownership front, thanks to fuckerberg) and Bluesky existing, so I doubt anyone will back off from using xAI's Grok at that point.

          1 vote
  2. [8]
    stu2b50
    I'mma be honest: even though people were quite frantic about it, I have no idea how Claude is supposed to be used effectively for surveillance or for "weapons". Still, it's quite risky to publicly decline this US administration in this way. So a message, for sure.

    21 votes
    1. raze2012
      I have no idea how Claude is supposed to be used effectively for surveillance or for "weapons"

      Like much of this administration: very poorly and incompetently. Reading some of the legal and medical horror stories already coming out of AI use makes me shudder at whatever they're going to do with it for military weapons.

      At least there's one company here on the record of pushing back. For the time being.

      17 votes
    2. [3]
      ICN
      LLMs would be fantastic as a probable cause generator. Poke one until it says that whatever person or group you're targeting is the suspect, and now that an advanced AI system has said it, you've got all the justification needed to go after them.

      11 votes
      1. Greg
        Douglas Adams used this exact idea, down to the software being bought out by the Pentagon, as a satirical plot point almost 40 years ago (relevant excerpt). I am not pleased to think that it’s now entirely realistic…

        13 votes
      2. Eji1700
        Probable cause isn't really the domain of the Pentagon, though? Sure, for the alphabet agencies that's a problem, but this doesn't strike me as even being on the Pentagon's radar.

        8 votes
    3. [3]
      teaearlgraycold
      Sifting through all of that PRISM data? Find the dissidents using NLP.

      6 votes
      1. [2]
        stu2b50
        How would Claude help in any way? The Pentagon is more than capable of running BERT.
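          The point that bulk text triage doesn't require a frontier model can be sketched with a plain bag-of-words scorer, a toy stand-in for an encoder classifier like BERT. Everything below (the term list, weights, and threshold) is invented purely for illustration, not any real pipeline:

          ```python
          # Toy stand-in for an encoder classifier (e.g. a fine-tuned BERT):
          # score each document by weighted keyword hits, keep those over a
          # threshold. A real system would use a trained model, not a
          # hand-written term list; this only illustrates the filter loop.

          FLAG_TERMS = {"protest": 2.0, "march": 1.0, "organize": 1.5}

          def score(doc: str) -> float:
              """Sum the weights of flag terms appearing in the document."""
              words = doc.lower().split()
              return sum(FLAG_TERMS.get(w, 0.0) for w in words)

          def triage(docs: list[str], threshold: float = 1.5) -> list[str]:
              """Return only the documents whose score meets the threshold."""
              return [d for d in docs if score(d) >= threshold]

          docs = [
              "meeting notes about the quarterly budget",
              "plans to organize a protest march downtown",
          ]
          print(triage(docs))  # only the second document passes the filter
          ```

          The structural point: a filter-and-rank loop like this needs nothing Claude-specific, which is why off-the-shelf encoder models have handled bulk classification for years.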

  3. [5]
    updawg
    In a lengthy blog post on Thursday, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”

    Amodei said Anthropic understands that the Pentagon, “not private companies, makes military decisions.” But “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” He also said use cases like mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.”

    That answers your question @stu2b50.

    In response, Emil Michael, the Pentagon’s Undersecretary for Research and Engineering who had been part of the negotiations, wrote on X: “It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”

    7 votes
    1. [4]
      stu2b50
      I mean, it doesn't really answer how that would be done. As people often note when Anthropic's blog posts come up, Anthropic really likes to pretend they're techpriests summoning something from the dark age of technology.

      That answer implies that Claude could be used for surveillance or whatever in an unsafe and unreliable way, but as it currently stands, I don't see how it could do anything remotely relevant, let alone unsafe.

      12 votes
      1. [3]
        updawg
        I think that's the point. The unsafe way is to ask it to do something it can't really do.

        10 votes
        1. [2]
          balooga
          I think this is the right take. I want to read this as Anthropic saying they object to autonomous weapons and mass surveillance… but I think what they're actually saying is that LLMs are just the wrong tool for the job (and they have no actual problem with those things).

          Which, I mean, that’s an objectively correct assessment of the tech, but not exactly a bulwark of moral fortitude.

          5 votes
          1. Bwerf
            I think that saying "LLMs are the wrong tool for the job" has a much bigger chance of succeeding than "we think this is unethical". So strategically it would be bad not to include that argument, regardless of the actual reason. It doesn't have to be just one of the reasons, either; it can be both.

            5 votes
  4. Eric_the_Cerise
    "request" ... apparently, Hegseth wasn't quite blunt enough.

    3 votes
  5. agentsquirrel
    Why does the DoD need Anthropic, or any AI company, for that matter? They have the wherewithal and resources to spin up all the LLMs and ML they could want, and to develop models totally optimized for their use cases, like performing game theory or killing people.