36 votes

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

17 comments

  1. Bet
    Link

    This is outright slaughter; an active pursuit of disregard for human life chosen at every step, and on such a large scale, and it is just mind boggling.

    Some excerpts from just a small portion of the article; all emphasis mine:

    During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.

    Why would they do this? Everyone everywhere is and has been aware that this approach is horrifically fallible. There can be no excuses.

    “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

    I never want to see this sort of reasoning adopted by anyone anywhere ever. It is blind madness to believe justice could ever be served through killing an entire family simply because one particular person is connected to Hamas.

    And there is something absolutely sick about naming one such system ‘Where’s Daddy?’. They knew from the start that this was inherently designed to be used to also murder children, and, apparently, that’s appropriate for an official cutesy-wootsey tongue-in-cheek joke.

    In addition, according to the sources, when it came to targeting alleged junior militants marked by Lavender, the army preferred to only use unguided missiles, commonly known as “dumb” bombs (in contrast to “smart” precision bombs), which can destroy entire buildings on top of their occupants and cause significant casualties. “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs],” said C., one of the intelligence officers.

    Right. A new iteration of an old ‘ideal’. Israel, of all nations, is perfectly aware of where this specific sort of mathematics leads. Which numbers should matter most — the pecuniary or the mortal? There is no justification for calibrating a military system to knowingly kill and maim non-combatants.

    40 votes
  2. Gaywallet
    Link

    A few choice quotes

    One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

    Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

    In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

    23 votes
  3. Tuaam
    Link

    I'm curious what datasets they train this AI-based targeting system on and where they get them from - do they basically feed the machine imagery and statistical data of the Gaza Strip? The article mentioned that the current models became unusable because the ongoing destruction of the Gaza Strip has drastically changed the landscape; does that mean they try to obtain more imagery of the strip as it is now and then re-train the models so that they're effectively "optimized" for the new environment? I guess it explains the constant buzzing of drones in the background of whatever civilian footage you can see online, but that is obviously just regular military reconnaissance.

    What I'm also wondering is whether they're using some sort of AI-based target detection with their small quadcopter drones. I have heard of these drones going around with AI voices, shooting people in the head or telling people to do things - are these even controlled by a human operator? I only ask about AI-based target recognition here because there seems to be a drive to maintain a facial database of Palestinians in both the Gaza Strip and the West Bank... There is a lot that is nebulous or downright nefarious here, almost dystopian in many cases...

    7 votes
  4. [14]
    stu2b50
    Link

    On a meta note, I really feel like at this point the term "AI" has become harmful to anyone trying to discuss anything about anything. The reason is that it's such a nebulous, undefined term in the common consciousness now that talking about "AI" means almost nothing.

    I can't read or speak Hebrew, but it seems from the lecture that the IDF was using fairly bog-standard PU (positive-unlabeled) algorithms, and yeah, it does kinda matter whether you're using a logistic regression or boosted trees for classification vs. ChatGPT. At some point you're just saying that they used "mathematics to do X".

    I'm not really trying to get into the can of worms about the appropriateness of what they did in this comment, just that any intuition you have about a large transformer does not really apply to random forests, a technique that predates the current AI wave by decades. As an example, if you start talking about "AI hallucinations", when the "AI" is boosted trees, like, boosted trees don't "hallucinate".
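
    To make "bog-standard PU algorithms" concrete, here is a minimal, purely illustrative sketch of positive-unlabeled learning (roughly the Elkan-Noto recipe) on synthetic scikit-learn data. To be clear, this is just what the textbook technique looks like; it says nothing about the actual system, its features, or its data.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic binary data; y is the true (hidden) label.
    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

    # Only some true positives carry a label (s=1); everything else is "unlabeled" (s=0).
    rng = np.random.default_rng(0)
    s = ((y == 1) & (rng.random(len(y)) < 0.3)).astype(int)

    # Step 1: train an ordinary classifier to separate labeled positives from the unlabeled pool.
    X_tr, X_ho, s_tr, s_ho = train_test_split(X, s, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)

    # Step 2: estimate c = P(s=1 | y=1) as the mean score on held-out labeled positives.
    c = clf.predict_proba(X_ho[s_ho == 1])[:, 1].mean()

    # Step 3: rescale the scores to approximate P(y=1 | x) for the whole pool.
    p_true = clf.predict_proba(X)[:, 1] / c
    print("estimated share of positives:", (p_true > 0.5).mean())

    The point being: that is a logistic regression plus a correction factor, and the intuitions that come packaged with "AI" in the ChatGPT sense mostly don't transfer to it.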

    14 votes
    1. Eji1700
      Link Parent

      I have sat in waaaaaaaay too many meetings where people pitch predictive algorithms based on past data as "AI" with a 50x price markup. It's a mess of a conversation now, and almost all of it is in bad faith or done for headline-grabbing purposes.

      14 votes
    2. [5]
      Comment deleted by author
      Link Parent
      1. FluffyKittens
        Link Parent

        While you’re making a valid point, the conflict has been going on for quite a while now, and this ain’t the first war crime revelation. I think I speak for a decent chunk of the userbase when I say the fundamental issues have been flamewarred well enough on this forum. Acknowledgement of the tragedy is a given.

        The AI tool is the novel point of discussion covered by this article, so IMO it’s entirely reasonable to dive into the details of it in this thread.

        9 votes
      2. [3]
        stu2b50
        Link Parent

        I don't particularly find it inappropriate. The article writer found the "AI" important enough to be the most prominent part of the title, so it seems important as an area to discuss.

        8 votes
        1. [3]
          Comment deleted by author
          Link Parent
          1. [2]
            stu2b50
            Link Parent

            I think it changes the dynamics. For instance, if you trained a feedforward neural network on what seems like tabular data, you'd reasonably expect the model to overfit, and then there'd be the consequences of running an overfitted model on novel data, when the outcome is probable death.

            Then there's also the dynamics of explainability and how that plays into the separation and culpability of the humans involved. A decision tree can be explained - it is, in fact, partially designed for that. There's a difference between a human getting something that could only ever be a black-box result, and a human getting a more nuanced report on which they then act. In the latter case, the human could have done more thorough audits and interpretations of what their data showed, and whether or not they simply ignored that plays into the exact ethical dilemmas.
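
            As a toy illustration of that explainability point (scikit-learn's bundled Iris data, nothing to do with the system in the article), every rule a decision tree learns can be printed verbatim:

            from sklearn.datasets import load_iris
            from sklearn.tree import DecisionTreeClassifier, export_text

            # Fit a small tree on a toy dataset that ships with scikit-learn.
            iris = load_iris()
            tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

            # Every split is a plain if/else rule a human can audit after the fact.
            print(export_text(tree, feature_names=list(iris.feature_names)))

            A black-box score offers no equivalent audit trail, which is exactly where the culpability question gets murkier.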

            6 votes
            1. [2]
              Comment deleted by author
              Link Parent
              1. stu2b50
                Link Parent

                I don’t think it’s mutually exclusive, no, or largely irrelevant. Isn’t it precisely because it’s not irrelevant that it’s the subject of the article? The title isn’t “Israel’s bombing spree”.

                Why does the author spend so much time talking about it, or dedicate 1/5th of the article to describing how the algorithm works, if it doesn’t matter?

                7 votes
    3. [7]
      Gaywallet
      Link Parent

      As an example, if you start talking about "AI hallucinations", when the "AI" is boosted trees, like, boosted trees don't "hallucinate".

      The word hallucinate isn't even in the article and no one is saying that. I understand the confusion about AI being applied to ML, but the reality is that, in the vernacular, the two terms are interchangeable, and they are rapidly becoming broad umbrella terms that describe more than just generative AI models. Your average person is more likely to know the word hallucination and think about the possibility of AI hallucinations than to understand the history of the terms AI and ML and which is appropriate for what (or even deeper stats knowledge, such as what a random forest is); the word is currently undergoing a change that you don't have any control over. I say this in the nicest way I possibly can, but spending time and energy arguing about what the "correct" definition is just isn't very productive, and at worst it can have people writing you off as a pedant.

      7 votes
      1. [6]
        stu2b50
        Link Parent

        The issue is that I don't think it's as simple as being equivalent to "machine learning", which in and of itself is an unhelpfully vague term. "AI" in modern discourse comes with a lot of baggage - you can see it whenever there's any discussion about "AI". There's always a lot of talk around AGI, or things like how it's unexplainable, which wouldn't really apply to logistic regression, which is perfectly explainable.

        Of course, that raises the question of what should be used in its place, and in that case I think it depends on what story the article wants to tell. If it's mainly about the lack of a human touch, for instance, then "statistical models" would bring in fewer useless connotations.
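
        For what I mean by "perfectly explainable", here's a minimal sketch on a dataset that ships with scikit-learn (chosen only because it's built in, not because it resembles anything in the article): a logistic regression is nothing but a weighted sum, so its reasoning can be read straight off the coefficients.

        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Built-in toy dataset, purely for illustration.
        data = load_breast_cancer()
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        model.fit(data.data, data.target)

        # One coefficient per feature: each says how strongly that feature pushes the prediction.
        coefs = model.named_steps["logisticregression"].coef_[0]
        for name, w in sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]:
            print(f"{name}: {w:+.2f}")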

        7 votes
        1. [5]
          Gaywallet
          Link Parent

          There's always a lot of talk around AGI, or things like how it's unexplainable, which wouldn't really apply to logistic regression, which is perfectly explainable.

          None of that is the focus of the article. The article is primarily about the tools that are being used here and the processes that cropped up around them. The primary thrust, if I had to put it anywhere, is that increasing the level of abstraction away from the decision to murder someone makes it a lot easier to accept killing civilians as an outcome. I think it rightfully highlights a lot of the problems with allowing a non-human entity to be given this authority, and how that may have led to the adoption of additional processes which ramped up the killing of civilians.

          13 votes
          1. [4]
            stu2b50
            Link Parent
            What "kind" of AI matters in that, though. I think colloquially AI is generally used to refer to at least neural networks, even if I also find that a bit silly. A statistical model that was more...

            The primary thrust, if I had to put it anywhere, is around how increasing the level of abstraction away from the decision to murder someone makes it a lot easier to accept killing civilians as an outcome.

            What "kind" of AI matters in that, though. I think colloquially AI is generally used to refer to at least neural networks, even if I also find that a bit silly. A statistical model that was more simple and explainable, or potentially generative instead of discriminative, is different than what a neural network would do in a given situation.

            If the operatives were given a risk score and the factors that went into calculating it, I would put it as fairly low on the abstraction scale - if they gave the OK, it was them interpreting the data in the end.

            It sounds like the IDF had a statistical model that output a probability score for a given individual. To me, the line would be crossed when, as the article seems to describe, they were basically just using a threshold to automatically OK strikes at some point in the war, since that mostly if not fully abstracts away the decision.
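
            Purely as an abstract illustration of where I'd draw that line (made-up spam scores, nothing taken from the article): there's a difference between a model that informs a decision and a threshold that makes it.

            from dataclasses import dataclass

            @dataclass
            class Message:
                sender: str
                spam_score: float  # assume some upstream classifier produced this

            inbox = [Message("alice@example.com", 0.62), Message("bob@example.com", 0.97)]
            THRESHOLD = 0.9

            # Version 1: the score only informs; a person weighs each case.
            for m in inbox:
                print(f"{m.sender}: score {m.spam_score:.2f} -- needs human review")

            # Version 2: the threshold decides; the human step shrinks to a rubber stamp, or vanishes.
            auto_flagged = [m.sender for m in inbox if m.spam_score >= THRESHOLD]
            print("flagged automatically, no review:", auto_flagged)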

            5 votes
            1. [3]
              Gaywallet
              Link Parent

              I'm trying to be as nice as possible here, but have you read the article? You've asked about hypotheticals which are not in the article (hallucinations), you've started a discussion on the semantics of AI (not addressed in the article), and now you're asking questions about the process, which is explained in the article.

              The following quote paints a pretty good picture of how much information they were provided, and perhaps more importantly, what actions they were actually taking when presented with data:

              “In any case, an independent examination by an [intelligence] analyst is required, which verifies that the identified targets are legitimate targets for attack, in accordance with the conditions set forth in IDF directives and international law.

              However, sources said that the only human supervision protocol in place before bombing the houses of suspected “junior” militants marked by Lavender was to conduct a single check: ensuring that the AI-selected target is male rather than female. The assumption in the army was that if it were a woman, the machine had likely made a mistake, because there are no women among the ranks of the military wings of Hamas and PIJ.

              And that's not the only quote which touches on the specifics of what is known as well as how the system is actually being used.

              It's a bit tangential here, but I also think you're missing the forest for the trees. The article clearly outlines how there was a conscious decision made here to provide more targets. Another relevant quote-

              “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”

              They wanted to optimize for how quickly they could kill targets, not for how accurate those targets were, and that's reflected in the efficiency choices that were being made above. The push for more targets made it okay to simplify down "identified targets are legitimate targets for attack" to simply "a single check: ensuring that the AI-selected target is male rather than female".

              No amount of information or display of information would make a difference here. In fact, I would be highly surprised if most of the people in charge of interpreting the output of this machine, or taking action on its outputs, had even a cursory understanding of statistics. They are probably exactly the kind of people who wouldn't understand any of the points you are bringing up - they wouldn't know the difference between ChatGPT and a random forest, and that's a crucial part of what this article is about and why this discussion feels pedantic to me. The abstraction is being adopted because these people already made a moral decision about their actions. They already weighed whether it was okay to kill civilians, and in fact they put a number on how many civilians were acceptable casualties.

              In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

              Allowing a machine, any machine, regardless of whether it's AI or simple statistics, removes or at least lessens the burden or requirement of conscious thought about what you are doing. When you had to research these people, inevitably you'd learn about their lives and the lives of all the civilians around them. You'd find some people are not valid targets, and you'd probably make some mistakes where you'd know a lot about someone, or several people, you just killed who were not the target or were false targets. You get to sit with the burden of taking those lives. The more you put it on a machine, the more you can put that out of mind. Can we please focus on that, or anything else the article is talking about?

              8 votes
              1. [2]
                stu2b50
                Link Parent

                You get to sit with the burden of taking those lives. The more you put it on a machine the more you can put that out of mind. Can we please focus on that, or anything else the article is talking about?

                Is that not what we're talking about - the degree to which this is the case for different degrees of modeling?

                I feel like we're talking past each other. If you just want an answer as to whether or not I think what the IDF is doing is probably bad, then, if what the article alleges is true, I think it is, and I specified the boundary which they crossed.

                Would you have issues with any and all statistical values in military decision making? Surely there is a difference between averaging a column in Excel, or doing some basic clustering with k-means, and a full end-to-end model which makes decisions based on criteria you don't know.

                They wanted to optimize for how quickly they could kill targets, not for how accurate those targets were, and that's reflected in the efficiency choices that were being made above.

                Sure, and that would be a misuse if true, but further discussion would lead you in the direction of: how would you use this in a way that is responsible? Is there a way?

                I wouldn't necessarily expect drone operators to know the details of any possible statistical model that could be used, but different models inherently produce different schemas of outputs, which are then used differently.

                Would it still sufficiently put the decision out of mind if the operators did more than an "is the target male" check? Maybe, maybe not. What if they didn't have a single scalar value, but were only provided the layer before that, a list of risk factors, for instance? It's certainly not as abstracted as a value already derived from those factors, but it is more abstracted than the raw data nonetheless.

                6 votes
                1. Gaywallet
                  Link Parent

                  The biggest issue here seems to be one of process, not the tech which lies underneath it. I take issue with any process which increases civilian casualties. There were decisions made about process which optimized for generating more targets, made it easy to ignore killing civilians, made it easy to sign off on killing anyone, and pushed people to keep working through the list of people to kill. These are all conscious process decisions, and I believe these decisions, especially considering they are decisions about killing innocents, are what we should be talking about.

                  6 votes
    4. spit-evil-olive-tips
      Link Parent

      it seems like from the lecture the IDF was using fairly bog-standard PU algorithms

      if the headline of the article was changed to "Lavender: The positive-unlabeled learning algorithms directing Israel’s bombing spree in Gaza" would that satisfy your objection?

      it's such a nebulous, undefined term in the common consciousness now that talking about "AI" means almost nothing.

      if it's such a meaningless term, why did this guy include it in the title of his book?

      In 2021, a book titled “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World” was released in English under the pen name “Brigadier General Y.S.”

      and why does Tel Aviv University host an event called "AI Week" where the system was talked about? (they even registered https://ai-week.com/ for the event...)

      +972 and Local Call have obtained footage of a private lecture given by the commander of Unit 8200’s secretive Data Science and AI center, “Col. Yoav,” at Tel Aviv University’s AI week in 2023, which was reported on at the time in the Israeli media.

      it seems like it does have meaning...but, like probably every other term in English or any other language, the meaning is dependent on the context?

      I do think there is a kernel of truth to what you're saying - that when news articles intended for non-technical consumption mention AI, they rarely mention which specific algorithms or methods are used. and...OK? I don't understand how that's particularly different from any other journalism.

      when I read an article critical of Shotspotter, to pick an unrelated example, it doesn't go into the details of the digital signal processing algorithms they use to supposedly tell the difference between gunshots and cars backfiring. or if I read an article about a breakthrough in treatment of some form of cancer, the article typically doesn't talk about the specific lab techniques that were used to develop the new cancer drug.

      and likewise, when I read an article about the real-world effects of AI, it usually doesn't mention whether the system in question is based on spline-reticulation learning or non-Euclidean hyperparameter optimization or partially-hidden Markov jungles.

      but I would say that "AI" does have a fairly well-established meaning in the context of this type of journalism. AI is the siren song of the idea that "there's just too much data that it's infeasible for humans to sort through and understand, so we have no choice but to feed all the data into a computer and trust that The Algorithm will produce the right outputs":

      In “The Human-Machine Team,” the book referenced at the beginning of this article, the current commander of Unit 8200 advocates for such a system without referencing Lavender by name. (The commander himself also isn’t named, but five sources in 8200 confirmed that the commander is the author, as reported also by Haaretz.) Describing human personnel as a “bottleneck” that limits the army’s capacity during a military operation, the commander laments: “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”

      ...

      “The more information, and the more variety, the better,” the commander writes. “Visual information, cellular information, social media connections, battlefield information, phone contacts, photos.” While humans select these features at first, the commander continues, over time the machine will come to identify features on its own. This, he says, can enable militaries to create “tens of thousands of targets,” while the actual decision as to whether or not to attack them will remain a human one.

      of course, that thing about the decision to attack remaining in human hands seems to have...not held up particularly well:

      From that moment, sources said that if Lavender decided an individual was a militant in Hamas, they were essentially asked to treat that as an order, with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based.

      6 votes