36 votes

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

15 comments

  1. Gaywallet
    Link

    A few choice quotes

    One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

    Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

    In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

    23 votes
  2. Tuaam
    Link

    I'm curious what datasets they train this AI-based targeting system on and where they get them from - do they basically feed the machine imagery and statistical data about the Gaza Strip? The article mentioned that the current models became unusable because the ongoing destruction of the Gaza Strip has drastically changed the landscape; does that mean they try to obtain fresh imagery of the Strip as it is now and then re-train the models so that they're effectively "optimized" for the new environment? I guess that would explain the constant buzzing of drones in the background of whatever civilian footage you can see online, but that is obviously just regular military reconnaissance.

    What I'm also wondering is whether they're using some sort of AI-based target detection with their small quadcopter drones. I have heard of these drones going around with AI voices, shooting people in the head or telling people to do things - are these even controlled by a human operator? I only ask about AI-based target recognition here because there seems to be a drive to maintain a facial database of Palestinians in both the Gaza Strip and the West Bank... There are a lot of nebulous or downright nefarious connotations here, almost dystopian in many cases...

    7 votes
  3. [13]
    stu2b50
    Link

    On a meta note, I really feel like at this point the term "AI" has become harmful to anyone trying to discuss anything. The reason is that it's such a nebulous, undefined term in the common consciousness now that talking about "AI" means almost nothing.

    I can't read or speak Hebrew, but from the lecture it seems like the IDF was using fairly bog-standard PU (positive-unlabeled) learning algorithms, and yeah, it does kinda matter whether you're using logistic regression or boosted trees for classification versus ChatGPT. At some point you're just saying that they used "mathematics to do X".
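
    To make that distinction concrete, here's a rough sketch of the kind of positive-unlabeled (PU) setup the lecture seems to describe, in the classic Elkan-Noto style. The synthetic data and the scikit-learn estimators are my own assumptions for illustration, not anything confirmed about the actual system.

    ```python
    # Illustrative PU-learning sketch (Elkan-Noto style). All data here is synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Tabular data: y = 1 are true positives, y = 0 are true negatives.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

    # In a PU setting only some positives are labeled (s = 1); everything else
    # is "unlabeled" (s = 0), even though part of it is actually positive.
    s = np.zeros_like(y)
    s[(y == 1) & (rng.random(len(y)) < 0.3)] = 1

    # Step 1: train an ordinary classifier to separate "labeled" from "unlabeled".
    X_tr, X_ho, s_tr, s_ho = train_test_split(X, s, test_size=0.2, random_state=0)
    g = GradientBoostingClassifier().fit(X_tr, s_tr)

    # Step 2: estimate c = P(labeled | positive) on held-out labeled examples.
    c = g.predict_proba(X_ho[s_ho == 1])[:, 1].mean()

    # Step 3: corrected probability that any record is a true positive.
    p_positive = np.clip(g.predict_proba(X)[:, 1] / c, 0.0, 1.0)
    print("estimated label frequency c:", round(float(c), 3))
    ```

    Swapping GradientBoostingClassifier for LogisticRegression is a one-line change here, which is part of why "they used AI" by itself tells you so little about the failure modes involved.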

    I'm not really trying to get into the can of worms about the appropriateness of what they did in this comment, just that any intuition you have about a large transformer does not really apply to random forests, a technique that dates back to the 1990s. As an example, if you start talking about "AI hallucinations" when the "AI" is boosted trees, like, boosted trees don't "hallucinate".

    14 votes
    1. Eji1700
      Link Parent

      I have sat in waaaaaaaay too many meetings where people pitch predictive algorithms based on past data as "AI" with a 50x price markup. It's a mess of a conversation now, and almost all of it is in bad faith or for headline-grabbing purposes.

      14 votes
    2. [5]
      Comment deleted by author
      Link Parent
      1. FluffyKittens
        Link Parent

        While you’re making a valid point, the conflict has been going on for quite a while now, and this ain’t the first war crime revelation. I think I speak for a decent chunk of the userbase when I say the fundamental issues have been flamewarred well enough on this forum. Acknowledgement of the tragedy is a given.

        The AI tool is the novel point of discussion covered by this article, so IMO it’s entirely reasonable to dive into the details of it in this thread.

        9 votes
      2. [3]
        stu2b50
        Link Parent

        I don't particularly find it inappropriate. The article writer found the "AI" important enough to be the most prominent part of the title, so it seems important as an area to discuss.

        8 votes
        1. [3]
          Comment deleted by author
          Link Parent
          1. [2]
            stu2b50
            Link Parent

            I think it changes the dynamics. For instance, if you trained a feedforward neural network on what seems like tabular data, you'd reasonably expect the model to overfit, and then there'd be the consequences of running an overfitted model on novel data, when the outcome is probable death.
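
            As a toy illustration of that concern (synthetic tabular data and scikit-learn's MLPClassifier, chosen purely for the sketch; this says nothing about the actual model), the train/validation gap is the classic overfitting signal:

            ```python
            # Illustrative only: an over-parameterized feedforward net on small, noisy
            # tabular data will often fit the training set far better than held-out data.
            from sklearn.datasets import make_classification
            from sklearn.model_selection import train_test_split
            from sklearn.neural_network import MLPClassifier

            X, y = make_classification(n_samples=500, n_features=30,
                                       n_informative=5, flip_y=0.2, random_state=0)
            X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                                      random_state=0)

            net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=2000,
                                random_state=0).fit(X_tr, y_tr)

            print("train accuracy:     ", net.score(X_tr, y_tr))   # typically near 1.0
            print("validation accuracy:", net.score(X_va, y_va))   # usually much lower
            ```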

            Then there's also the dynamics of explainability and how that plays into the separation and culpability of the humans involved. A decision tree can be explained - it is, in fact, partially designed for that. There's a difference between a human getting something that could only ever be a black-box result, and a human getting a more nuanced report on which they then act. In the latter case, the human could have done a more thorough audit and interpretation of what their data showed, so whether or not they simply ignored that plays into the exact ethical dilemmas.
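
            To show what "designed for that" looks like, here's a generic scikit-learn sketch on a toy dataset (nothing to do with the system in the article): a shallow decision tree's entire decision logic can be dumped as readable if/else rules.

            ```python
            # Illustrative only: a shallow decision tree prints as explicit rules, so a
            # reviewer can trace exactly which conditions produced a given classification.
            from sklearn.datasets import load_iris
            from sklearn.tree import DecisionTreeClassifier, export_text

            data = load_iris()
            tree = DecisionTreeClassifier(max_depth=3, random_state=0)
            tree.fit(data.data, data.target)

            # Every root-to-leaf path is an if/else chain over named features.
            print(export_text(tree, feature_names=list(data.feature_names)))
            ```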

            6 votes
            1. [2]
              Comment deleted by author
              Link Parent
              1. stu2b50
                Link Parent

                I don't think it's mutually exclusive, no, or largely irrelevant. Isn't the fact that it's the subject of the article evidence that it isn't irrelevant? The title isn't "Israel's bombing spree".

                Why would the author spend so much time talking about it, and dedicate a fifth of the article to describing how the algorithm works, if it didn't matter?

                7 votes
    3. [7]
      Gaywallet
      Link Parent

      As an example, if you start talking about "AI hallucinations", when the "AI" is boosted trees, like, boosted trees don't "hallucinate".

      The word hallucinate isn't even in the article, and no one is saying that. I understand the objection to "AI" being applied to ML, but the reality is that in the vernacular the two terms are interchangeable, and they are rapidly becoming broad umbrella terms for far more than just generative models. Your average person is more likely to know the word hallucination, and to worry about AI hallucinations, than to understand the history of the terms AI and ML and which is the appropriate label for which technique (let alone deeper stats knowledge, such as what a random forest is); the word is undergoing a shift in meaning that you don't have any control over. I say this in the nicest way I possibly can, but spending time and energy arguing about what the "correct" definition is just isn't very productive, and at worst it can have people writing you off as a pedant.

      7 votes
      1. [6]
        stu2b50
        Link Parent

        The issue is that I don't think it's as simple as being equivalent to "machine learning", which in and of itself is an unhelpfully vague term. "AI" in modern discourse comes with a lot of baggage - you can see it whenever there's any discussion about "AI". There's always a lot of talk about AGI, or about how it's unexplainable, which doesn't really apply to logistic regression, which is perfectly explainable.
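
        For what I mean by "perfectly explainable", here's a generic scikit-learn sketch on a toy dataset (purely illustrative): the fitted model is just one weight per feature plus an intercept, and any individual prediction can be recomputed by hand.

        ```python
        # Illustrative only: the whole "model" is a handful of printable coefficients,
        # and any prediction can be reproduced as sigmoid(w . x + b).
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression

        data = load_iris()
        X, y = data.data[data.target < 2], data.target[data.target < 2]  # binary subset

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        for name, w in zip(data.feature_names, clf.coef_[0]):
            print(f"{name}: {w:+.3f}")
        print("intercept:", round(float(clf.intercept_[0]), 3))

        # Recompute the predicted probability for one record by hand.
        x0 = X[0]
        p_manual = 1.0 / (1.0 + np.exp(-(clf.coef_[0] @ x0 + clf.intercept_[0])))
        print("manual:", round(float(p_manual), 3),
              "sklearn:", round(float(clf.predict_proba([x0])[0, 1]), 3))
        ```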

        Of course, that raises the question of what should be used instead, and I think it depends on the story the article wants to tell. If it's mainly about the lack of a human touch, for instance, then "statistical models" would bring in fewer useless connotations.

        7 votes
        1. [5]
          Gaywallet
          Link Parent

          There's always a lot of talk around AGI, or things like how it's unexplainable, which wouldn't really apply to logistic regression, which is perfectly explainable.

          None of that is the focus of the article. The article is primarily about the tools being used here and the processes that cropped up around them. The primary thrust, if I had to put it anywhere, is that increasing the level of abstraction away from the decision to murder someone makes it a lot easier to accept killing civilians as an outcome. I think it rightfully highlights a lot of the problems with giving a non-human entity this authority, and how that may have led to the adoption of additional processes which ramped up the killing of civilians.

          13 votes
          1. [4]
            stu2b50
            Link Parent

            The primary thrust, if I had to put it anywhere, is around how increasing the level of abstraction away from the decision to murder someone makes it a lot easier to accept killing civilians as an outcome.

            What "kind" of AI matters for that, though. I think colloquially AI is generally used to refer to at least neural networks, even if I also find that a bit silly. A simpler, more explainable statistical model, or potentially a generative one instead of a discriminative one, behaves differently than a neural network would in a given situation.

            If the operatives were given a risk score and the factors that went into it, I would put that fairly low on the abstraction scale - if they gave the OK, it was them interpreting the data in the end.

            It sounds like the IDF had a statistical model that output a probability score for a given individual. To me, the line was crossed where the article describes them basically just using a threshold to automatically OK strikes at some point in the war, as that mostly if not fully abstracts away the decision.

            5 votes
            1. [3]
              Gaywallet
              Link Parent

              I'm trying to be as nice as possible here, but have you read the article? You've asked about hypotheticals which are not in the article (hallucinations), you've started a discussion on the semantics of AI (not addressed in the article), and now you're asking questions about a process which is explained in the article.

              The following quote paints a pretty good picture of how much information they were provided, and perhaps more importantly, what actions they were actually taking when presented with data:

              “In any case, an independent examination by an [intelligence] analyst is required, which verifies that the identified targets are legitimate targets for attack, in accordance with the conditions set forth in IDF directives and international law.

              However, sources said that the only human supervision protocol in place before bombing the houses of suspected “junior” militants marked by Lavender was to conduct a single check: ensuring that the AI-selected target is male rather than female. The assumption in the army was that if it were a woman, the machine had likely made a mistake, because there are no women among the ranks of the military wings of Hamas and PIJ.

              And that's not the only quote which touches on the specifics of what is known as well as how the system is actually being used.

              It's a bit tangential here, but I also think you're missing the forest for the trees. The article clearly outlines how there was a conscious decision made here to provide more targets. Another relevant quote-

              “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”

              They wanted to optimize for how quickly they could kill targets, not for how accurate those targets were, and that's reflected in the efficiency choices that were being made above. The push for more targets made it okay to simplify down "identified targets are legitimate targets for attack" to simply "a single check: ensuring that the AI-selected target is male rather than female".

              No amount of information, or way of displaying it, would make a difference here. In fact, I would be highly surprised if most of the people in charge of interpreting the output of this machine or acting on its outputs had even a cursory understanding of statistics. They are probably exactly the kind of people who wouldn't understand any of the points you are bringing up - they wouldn't know the difference between ChatGPT and a random forest, and that's a crucial part of what this article is about and why this discussion feels pedantic to me. The abstraction is being adopted because these people already made a moral decision about their actions. They already weighed whether it was okay to kill civilians, and in fact they put a number on how many civilians were acceptable casualties.

              In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

              Allowing a machine, any machine, regardless of whether it's AI or simple statistics, to make these determinations removes or at least lessens the burden or requirement of conscious thought about what you are doing. When you had to research these people, you'd inevitably learn about their lives and the lives of all the civilians around them. You'd find that some people are not valid targets, and you'd probably make some mistakes where you'd know a lot about someone, or several people, you just killed who were not the target or were false targets. You get to sit with the burden of taking those lives. The more you put it on a machine, the more you can put that out of mind. Can we please focus on that, or anything else the article is talking about?

              8 votes
              1. [2]
                stu2b50
                Link Parent

                You get to sit with the burden of taking those lives. The more you put it on a machine the more you can put that out of mind. Can we please focus on that, or anything else the article is talking about?

                Is that not what we're talking about - the degree to which this is the case for different degrees of modeling?

                I feel like we're talking past each other. If you just want an answer as to whether or not I think what the IDF is doing is bad: if what the article alleges is true, I think it is, and I specified the boundary they crossed.

                Would you have issues with any and all statistical values in military decision making? Surely there is a difference between averaging a column in Excel, or doing some basic clustering with k-means, and a full end-to-end model which makes decisions based on criteria you don't know.

                They wanted to optimize for how quickly they could kill targets, not for how accurate those targets were, and that's reflected in the efficiency choices that were being made above.

                Sure, and that would be a misuse if true, but further discussion would lead you in the direction of: how would you use this in a way that is responsible? Is there a way?

                I wouldn't necessarily expect drone operators to know the details of any possible statistical model that could be used, but different models inherently produce different schemas of outputs, which are then used differently.

                Would it still sufficiently put the decision out of mind if the operators did more than an "is the target male" check? Maybe, maybe not. What if they weren't given a single scalar value, but only the layer before that, a list of risk factors, for instance? That's certainly not as abstracted as a value already derived from those factors, but it's more abstracted than the raw data nonetheless.

                6 votes
                1. Gaywallet
                  Link Parent

                  The biggest issue here seems to be one of process, not the tech which lies underneath it. I take issue with any process which increases civilian casualties. There were process decisions made here which optimized for generating more targets, made it easy to ignore killing civilians, made it easy to sign off on killing anyone, and pushed people to keep working through the list of people to kill. These are all conscious process decisions, and I believe these decisions, especially considering they are decisions about killing innocents, are what we should be talking about.

                  6 votes