21 votes

US Pentagon leverages AI in Iran strikes amid feud with Anthropic

21 comments

  1. [19]
    balooga
    Link
    I think this is the context we were missing in the last thread about this. Anthropic's two objections were autonomous killing and mass surveillance. I think we weren't really sure what to make of that because Claude isn't particularly well suited for either.

    The missing piece is that Palantir is the backend for both. Maven is using mass surveillance to facilitate (semi-)autonomous killing. Claude is just a frontend they've integrated with that system. Which violates their ToS. Things make a lot more sense now.

    16 votes
    1. skybrian
      Link Parent
      I'm speculating here, but here's what this sounds like:

      Palantir built a "find targets to bomb" algorithm and if you just send the target list to the bombers then there's no human in the loop, or minimal if they just click "Ok" a lot. Exactly how this works is what Palantir built and they do what the customer wants.

      Someone at Anthropic talked to someone at Palantir and they had a "you did what? Did you read the contract?" moment.

      And that was just before the military was about to bomb Iran. And the military is like "we're going to call off bombing Iran because of contract terms? No."

      And maybe that's how you bomb a school, among many other targets.

      12 votes
    2. [17]
      Eji1700
      Link Parent
      I do find all this a little odd because I wonder where the line between "AI" and "algorithm" is. Running it through Claude is a no-no, but we've had facial recognition for years now and I'm not sure how that's meaningfully different.

      HOPEFULLY this brings more understanding to the average person that "well yeah you're right that wasn't 'AI' as we call it but also probably really bad", but it's a very difficult discussion. A military isn't going to ignore weapons tech if it offers an advantage, but the obvious social issues are real.

      5 votes
      1. [12]
        turmacar
        Link Parent
        Personally, an algorithm is (at least somewhat) deterministic and can be tested and debugged.

        With the current generation of AI, vision or LLM, the only OODA loop takes place long after the automated system has made its determinations, if at all. Hopefully before its suggestions are acted on, but that doesn't seem to be the case. Seemingly there isn't any parallel process checking that its suggestions are actually optimal, or even based in reality.

        As with a lot of AI projects we're running headlong into "any decision isn't as good as a correct decision". If a script or a legal argument or a social media post is generated 'good enough', that might actually be good enough. If the output of the automated system is killing someone it needs to have an audit trail.
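A minimal sketch of what such an audit trail could look like (the wrapper, names, and decision function are all hypothetical, not any real system's interface):

```python
import hashlib
import json
import time

def audited(decision_fn, log):
    """Wrap a decision function so every call leaves an audit record
    (a hypothetical sketch, not any real system's design)."""
    def wrapped(inputs):
        output = decision_fn(inputs)
        log.append({
            "timestamp": time.time(),
            # Hash the inputs so the record can later be matched
            # back to the exact source data that drove the decision.
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        })
        return output
    return wrapped

log = []
classify = audited(lambda xs: sum(xs) > 1.0, log)  # toy decision function
classify([0.4, 0.9])
assert len(log) == 1 and log[0]["output"] is True
```

The point of the sketch is only that every output, good or bad, is traceable to a timestamped record of what went in.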

        9 votes
        1. [11]
          Eji1700
          Link Parent
          Personally, an algorithm is (at least somewhat) deterministic and can be tested and debugged.

          AI can do this as well.

          Or you can take the AI loop that generates the output and turn that into an algorithm; is that OK?

          Seemingly there isn't any parallel process checking that its suggestions are actually optimal, or even based in reality.

          I mean, I'm not sure how that's much different from current target-acquisition algorithms for loitering munitions.

          1. [9]
            ThrowdoBaggins
            Link Parent
            Personally, an algorithm is (at least somewhat) deterministic and can be tested and debugged.

            AI can do this as well.

            Wait, are large language models deterministic? I’ve always assumed not, because if I open a new window and copy paste the exact same prompt, the result can be different. Maybe only slightly different, but even an infinitesimal difference in output from the same input means it’s not a deterministic algorithm underneath.

            4 votes
            1. [2]
              stu2b50
              Link Parent
              They are deterministic, in that an LLM is a series of large matrices by which a vector is multiplied.

              As you may imagine, matrix multiplication is a deterministic process (linear algebra would be a very interesting class otherwise).

              The output of an LLM is a probability distribution, assigning every possible token a probability. Usually LLMs sample from this distribution - that's where the randomness comes in.

              But you can technically just always take the most likely token, and that would be a deterministic result. And of course you can always inspect the probability distribution.

              And the random sampling itself is not actually random on a computer, it is directly dependent on the seed used.
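A toy sketch of that distinction (the distribution and token names are made up, standing in for a real model's output):

```python
import random

# A toy next-token distribution, as an LLM might emit one.
probs = {"cat": 0.6, "dog": 0.3, "fish": 0.1}

def greedy(dist):
    # Always take the most likely token: fully deterministic.
    return max(dist, key=dist.get)

def sample(dist, seed):
    # "Random" sampling is driven by a seed, so the same seed
    # reproduces the same choice.
    rng = random.Random(seed)
    tokens = list(dist)
    return rng.choices(tokens, weights=[dist[t] for t in tokens], k=1)[0]

assert greedy(probs) == "cat"                          # same token every call
assert sample(probs, seed=7) == sample(probs, seed=7)  # same seed, same token
```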

              4 votes
              1. sparksbet
                (edited)
                Link Parent
                I don't think we disagree on the implications of the math re: determinism here -- my point is more that, considering the inputs and outputs a human being can perceive and change, these models do not behave in a way that fits the practical definition: "a human interfacing with it can put in the same input and expect the exact same output". That doesn't coincide with the theoretical or mathematical definition, but it is very often what people actually mean when they ask whether these models are deterministic (and it's what I understood from the question above, though rereading the comment it's less clear which sense was meant, so it's possible I was off the mark and they were asking strictly about the mathematical level).

                3 votes
            2. [5]
              sparksbet
              Link Parent
              Large language models are deterministic to the extent that the weather is deterministic -- technically if we had an utterly impossible complete knowledge of every factor going into them, we could probably predict them based on the fairly basic math involved. By any practical metric, neither is actually deterministic, though, because it is impossible to even come close to having that complete knowledge of every factor that goes into generating the output.

              It is actually extremely difficult if not impossible to make LLMs deterministic in any practical sense and it would be an incredible waste of one's time and energy even if it were achievable for a particular task. There are parameters that can make them more predictable, but these do not come close to actually making them deterministic in any practical sense of the word (Eji more or less admits this in his comment while trying to claim they are deterministic). There are other AI models (usually older, simpler ones) that are actually deterministic, so if one suits your task and you're okay with other tradeoffs, it's better to pick one of those when you need something truly deterministic.

              3 votes
              1. [2]
                stu2b50
                Link Parent
                That’s conflating determinism and explainability. LLMs are absolutely 100% deterministic, in that the same inputs always derive the same outputs.

                They’re not explainable, as, say, a decision tree is explainable, in that from the model parameters you cannot realistically reason about why the output is what it is.
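A toy illustration of the distinction (both models entirely made up): the tree's output can be traced to a readable rule, while the weights give the same answer every time without saying why.

```python
# "Explainable": the reason for any output can be read straight off the rules.
def tree(age, income):
    if age < 30:
        return "deny" if income < 40_000 else "approve"
    return "approve"

# Deterministic but not explainable: same input always gives the same
# output, yet the raw weights say nothing human-readable about *why*.
weights = [[0.12, -0.87], [0.45, 0.33]]
def tiny_net(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

assert tree(25, 50_000) == "approve"                  # rule is visible
assert tiny_net([1.0, 2.0]) == tiny_net([1.0, 2.0])   # deterministic anyway
```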

                4 votes
                1. sparksbet
                  Link Parent
                  I agree that the same inputs do produce the same outputs strictly speaking, but in practice the inputs are sufficiently many and opaque that you cannot actually get deterministic results on a practical level, even if mathematically it is technically deterministic.

              2. [2]
                Eji1700
                Link Parent
                There are parameters that can make them more predictable, but these do not come close to actually making them deterministic in any practical sense of the word (Eji more or less admits this in his comment while trying to claim they are deterministic).

                I do not. You can spin one up in a home lab right now with the right (if now stupidly expensive) equipment and do it. The question is where the tradeoffs lie in doing so.

                1. sparksbet
                  Link Parent
                  I'm not sure you'd get anything much like the behavior of modern LLMs if you strictly sampled the most likely token every time, even if you were able to build such a model locally, but fair enough.

                  1 vote
            3. Eji1700
              Link Parent
              They fundamentally are, although there are a few reasons your average one behaves as you have seen (temperature needs to be 0, and there can still be random factors in selecting tokens, or in some cases floating-point math plus race conditions).

              It would not be hard to make one deterministic if you needed to, although you might get slightly less accurate output.
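The floating-point caveat is easy to demonstrate in any language: addition isn't associative, so a reduction that sums in a different order (as parallel kernels may) can produce slightly different results from identical inputs.

```python
a, b, c = 0.1, 0.2, 0.3

# Summation order changes the result at the last bit.
assert (a + b) + c != a + (b + c)
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6
```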

              2 votes
          2. turmacar
            Link Parent
            I think the discussion below is right that "explainable" is probably a better word than deterministic.

            Mostly it comes down to how much trust is placed in the system. The second you slap "AI" on it, or it starts responding in sentences, there seems to be an outsized amount of trust placed in the system regardless of how it performs in real-world conditions.

            It's all just getting super close to circumventing the reason behind having a human in the loop, because the human is blindly trusting the systems to the point that they're bored and pushing the 'kill' button is an annoyance keeping them from their other tasks.

            1 vote
      2. [4]
        sparksbet
        (edited)
        Link Parent
        we've had facial recognition for years now and i'm not sure how that's meaningfully different.

        Facial recognition has been around for years, but that doesn't mean it somehow hasn't been AI that whole time. Unless you use the incoherent definition of AI that only includes a true general intelligence (under which definition AI does not yet exist, despite us using the term in research and industry for decades), facial recognition has been AI for years. It has also been AI with huge, well-studied problems that deeply and negatively affect humans (including very well-known racial bias, for example). As far as I am aware, there hasn't been a ChatGPT-esque revolution in that part of the field recently, so there aren't actually likely to be huge differences under the hood between what they were using a few years ago and what they're using now (although I haven't worked directly with facial recognition, so it's possible I've missed big developments in that subfield).

        2 votes
        1. [3]
          Eji1700
          Link Parent
          Facial recognition has been around for years, but that doesn't mean it somehow hasn't been AI that whole time.

          Because to the average person on the street these days it's not. Someone already offered a definition of the difference between AI and algorithm (determinism), which facial recognition satisfies. The entire problem is that "AI" is a very ill-defined term. "LLM" is a very clearly defined term.

          So long as we package these movements and decisions under something as vague as "AI", they are mostly useless. If you want to say "no weapons that use facial recognition", that's fine, and a hell of a lot more useful than "no AI", which has always been vague.

          2 votes
          1. [2]
            sparksbet
            Link Parent
            Ah yeah, I think I see your point there now. I agree that "no AI" is not particularly useful because the way laypeople use the term "AI" varies so much and can often be completely nonsensical. I don't think there's a coherent definition of AI that would include LLMs but not the types of models doing facial recognition, but there's no shortage of people using random incoherent definitions for it these days, so fair point.

            Honestly even "LLM" is kinda vaguely defined imo, but to a normal degree rather than to the extreme degree of "AI".

            1 vote
            1. Eji1700
              Link Parent
              Yeah, it's been a frustrating marketing problem in my world: you have standard algorithms for things like trend behavior (peak times/dips/identifying outliers) which all got repackaged as "AI" and marked up.

  2. skybrian
    Link
    From the article:

    The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system.

    Embedded into the system is Anthropic’s AI tool Claude, a technology that was banned by the Pentagon last week after heated negotiations over the terms of its use in war.

    Over the last year military planners have seen Claude, paired with Maven, mature into a tool that is in daily use across most parts of the military, according to two of the people.

    [...]

    As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people. The pairing of Maven and Claude has created a tool that is speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations, said one of the people. The AI tools also evaluate a strike after it is initiated, the person said.

    Claude has also been used in countering terror plots and in the raid that captured Venezuelan president Nicolás Maduro. But this is the first time it has been used in major war operations, according to two of the people.

    [...]

    Military commanders have become so dependent on the AI system that if Amodei directed the military to cease, the Trump administration would use government powers to retain the technology until it can be replaced, said one of the people.

    [...]

    The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements. The system has been used to generate proposed targets, to track logistics and provide summaries of intelligence coming in from the field. The Trump administration has vastly expanded the use of Maven into many other parts of the military, with over 20,000 military personnel using it as of last May.

    The commanders now overseeing the Iran campaign are steeped in the use of Maven, having used earlier versions of the system in the U.S. withdrawal from Afghanistan in 2021 and to support Israel after the Oct. 7, 2023, attacks, according to a talk by Navy Rear Adm. Liam Hulin in 2024. Hulin, now the deputy director of operations at Central Command, said then that the system pulled in information from 179 sources of data.

    [...]

    It’s been quickly adopted. NATO, which signed a contract with Palantir last year, portrayed its version of Maven as giving commanders video-game like abilities to oversee battles in a recent video. In the American military, the system allowed one artillery unit to do the work of 2,000 staff with a team of just 20 people, according to a study of the system’s use by the Army’s 18th Airborne Corps by Georgetown University.

    6 votes
  3. snake_case
    Link
    I work for an analytics company on a project that combines Claude and whatever set of unstructured data documents you have.

    We’re in the process of selling it to the DoD and they’ve been stalling it for months. I’ll bet this is why.

    5 votes