13 votes

Aug 2024 - "America isn’t ready for the wars of the future" by Mark Milley (ex-Chairman of the Joint Chiefs of Staff) and Eric Schmidt (ex-CEO of Google)

8 comments

  1. [2]
    knocklessmonster
    Link

    Hopefully, and even this is bleak, we reach a sort of MAD policy with major powers agreeing that full automation of war is the first step towards human annihilation.

    It doesn’t take much imagination to see how matters could go horribly wrong if these AI systems were actually used. In 1983, a Soviet missile detection system falsely classified light reflected off clouds as an incoming nuclear attack. Fortunately, the Soviet army had a human soldier in charge of processing the alert, who determined the warning was false. But in the age of AI, there might not be a human to double-check the system’s work.

    There have been multiple incidents in which the Soviets nearly fired a nuke only for a human to realize something's wrong (I'm just not aware of any American situations like this, but would be curious about them). We need touchy-feely fleshbags involved in war because as messed up as we can be, and as dehumanizing as war can be, human limits for torture are far below a robot's threshold for destruction in many contexts.

    I think the best-case scenario is fully automated battlefields with robots fighting robots, but that's basically just science fiction (I think that actually appears in some story?), and it wouldn't resolve conflicts by any conventional means (casualties -> loss of population -> peace treaty to stop the destruction).

    I'm not opposed to the use of these technologies in a war context if they can minimize casualties. If we're going to have people fighting, however, we need as little abstraction from the battlefield as possible to ensure it's not like playing a video game, where even completely normal people are more capable of heinous things.

    9 votes
    1. thearctic
      Link Parent

      I'm personally pessimistic that there will be any MAD paradigm with robotics in warfare. The thing I'd say that makes MAD work with nukes is that there existed no gradations between "normal" bombs and nuclear bombs; the weakest nuke was still many times stronger and more devastating than a conventional bomb (on that note, the first deployment of tactical nukes I think may risk ending MAD). With automated robotic weapons of war, I think it will be too much of a gradient for us to perceive it as causing mutual destruction, rightly or wrongly.

      6 votes
  2. [6]
    umlautsuser123
    Link

    Submitted this as 1) the writers are objectively interesting and 2) I didn't realize we were using drones without human input for war?

    The war in Ukraine is hardly the only conflict in which new technology is transforming the nature of warfare. In Myanmar and Sudan, insurgents and the government are both using unmanned vehicles and algorithms as they fight. In 2020, an autonomous Turkish-made drone fielded by Libyan government-backed troops struck retreating combatants—perhaps the first drone attack conducted without human input. In the same year, Azerbaijan’s military used Turkish- and Israeli-made drones, along with loitering munitions (explosives designed to hover over a target), in an effort to seize the disputed enclave of Nagorno-Karabakh. And in Gaza, Israel has fielded thousands of drones connected to AI algorithms, helping Israeli troops navigate the territory’s urban canyons.

    The United States must therefore transform its armed forces so it can maintain a decisive military advantage—and ensure that robots and AI are used in an ethical manner.

    AI systems could, for instance, simulate different tactical and operational approaches thousands of times, drastically shortening the period between preparation and execution. The Chinese military has already created an AI commander that has supreme authority in large-scale virtual war games. ... Soldiers could sip coffee in their offices, monitoring screens far from the battlefield, as an AI system manages all kinds of robotic war machines.

    But as global urbanization draws more people into cities and nonstate actors pivot to urban guerrilla tactics, the decisive battlefields of the future will likely be densely populated areas.

    7 votes
    1. [4]
      umlautsuser123
      Link Parent

      Following up also with my stream of consciousness opinions:

      • I thought I'd share this as I personally find the "Don't Be Evil" to "Techno-Military Industrial Complex" turn of a former Google CEO to be, well, ghoulish. But it sounds like (from other sources) he was always interested in this stuff. Opinions aside, I think it's interesting and not getting as much attention as I would expect.
        • This is also the same Eric Schmidt that's an advisor to Chainlink (blockchain company).
        • It also reminded me of this very old article, which to me implies a relationship between the Arab Spring and Google interest in 'democracy' (I am unsure how authentic the interest is). This felt so benign at the time with Google's popularity and the world's relative stability, but I suppose it was not.

      In a series of colorful emails they discussed a pattern of activity conducted by Cohen under the Google Ideas aegis, suggesting what the "do" in "think/do tank" actually means.

      Cohen's directorate appeared to cross over from public relations and "corporate responsibility" work into active corporate intervention in foreign affairs at a level that is normally reserved for states. Jared Cohen could be wryly named Google's "director of regime change."

      According to the emails, he was trying to plant his fingerprints on some of the major historical events in the contemporary Middle East. He could be placed in Egypt during the revolution, meeting with Wael Ghonim, the Google employee whose arrest and imprisonment hours later would make him a PR-friendly symbol of the uprising in the Western press. Meetings had been planned in Palestine and Turkey, both of which—claimed Stratfor emails—were killed by the senior Google leadership as too risky.

      Cohen stated that the merger of his Movements.org outfit with Advancing Human Rights was "irresistible," pointing to the latter's "phenomenal network of cyber-activists in the Middle East and North Africa." He then joined the Advancing Human Rights board, which also includes Richard Kemp, the former commander of British forces in occupied Afghanistan. In its present guise, Movements.org continues to receive funding from Gen Next, as well as from Google, MSNBC and PR giant Edelman, which represents General Electric, Boeing, and Shell, among others.


      Lastly, the AI thing reminds me of talking about war with a friend. Is war purely about technological might? Is it purely about strategy? Barring situations of being oppressed, why would someone wage war when the costs are as steep as they are? We wondered if you could use chess, robotics, etc. to circumvent the need to involve people at all. Why sign up civilians and youths for the pain of a war that would simply be decided by strategy (including home-terrain strategies) or tech? I guess war is a multipronged effort: everything from financial warfare via sanctions, to manipulating public perception, to war strategy, to resource richness. Life is full of variables that could tip the scale and defy expectations. But it's hard not to wish these contests could be settled without bloodshed.

      4 votes
      1. [3]
        skybrian
        (edited )
        Link Parent

        Yeah, during Arab Spring many people were pretty optimistic and wanted to support the underdog. You can make it look sinister but it was sort of like supporting Ukraine nowadays.

        There’s a sense in which a war indicates uncertainty about the outcome. If both sides knew what the war would cost them and what they would gain, they’d likely avoid it by negotiating some alternative. But in real life, attackers often miscalculate badly about how bad for their side it would be. Also, the leaders aren’t the ones doing the fighting.

        I don’t think AI will lead to better decisions at the strategic level - there’s too much uncertainty, in part because a lot of the technology is new, and that uncertainty can’t be magicked away. (Also, it’s often said that in war, the goals are simple, but the simplest things are difficult.) At the tactical level, though, the technology changes rapidly because safety and precision matter a lot less when there’s a war. Winning is everything and mistakes are collateral damage. They can take shortcuts and accept risks that wouldn’t be acceptable if there weren’t a war.

        5 votes
        1. R3qn65
          (edited )
          Link Parent

          For those interested, this is an interesting paper discussing the concept of "accidental war" (war that resulted from miscalculation). It's more about accidental escalation than about pure miscalculation, but it's interesting nonetheless.

          https://sais.jhu.edu/kissinger/programs-and-projects/kissinger-center-papers/exculpating-myth-accidental-war

          The author has a pretty negative view of the concept - a view that I don't necessarily agree with - but it's an interesting exploration nonetheless.

          3 votes
        2. umlautsuser123
          Link Parent

          Maybe it was my youth plus it being the first "social media" supported revolution, but I agree, it totally did not feel sinister, even with the knowledge that there was tampering. After all, it's just freedom of thought. In hindsight, it's another datapoint that forced regime change is pretty dangerous (not saying it's bad, but average people will pay a price), and that social media does not communicate depth of policy. We support sentiments, not plans, and create power vacuums. When Ukraine came around, I was surprised to be an outlier in my opinions on how to act.

          There’s a sense in which a war indicates uncertainty about the outcome. If both sides knew what the war would cost them and what they would gain, they’d likely avoid it by negotiating some alternative. But in real life, attackers often miscalculate badly about how bad for their side it would be. Also, the leaders aren’t the ones doing the fighting.

          In hindsight, yes, a closed system like the one I suggested has less room for cheating or other means of creating advantage. Real life does have that uncertainty aura. Also, my one wish for the world has honestly been "people who call for war fight their own wars first." I've been told it's basically a Dune-esque effort to breed out aggression, though.

          At the tactical level, though, the technology changes rapidly because safety and precision matter a lot less when there’s a war. Winning is everything and mistakes are collateral damage. They can take shortcuts and accept risks that wouldn’t be acceptable if there weren’t a war.

          This is true. My impression of Ukraine was in part that it's been an "exciting" effort for private defense companies, as it's been a means for them to test out their tech and to take advantage of the local technical talent as well. (I don't get any joy out of saying this.)

          3 votes
    2. tibpoe
      Link Parent

      I didn't realize we were using drones without human input for war?

      It's a spectrum. Is a Javelin missile a drone without human input? Once launched, it tracks its target visually (more or less) without any human input. The technology is definitely progressing toward weapons that can select their own targets given certain parameters.

      AI systems could, for instance, simulate different tactical ... The Chinese military

      Yeah, this part is just fear-mongering BS. There's no substitute for human judgement, and every time I've seen breakthrough claims like this outside of a military context, it's been marginally passable garbage in contrived situations. It's in the interest of everyone involved to pretend it's more significant than it actually is.

      And in Gaza, Israel has fielded thousands of drones connected to AI algorithms, helping Israeli troops navigate the territory’s urban canyons.

      Reading between the lines here (and doing a bit more background reading) these can't hurt anyone. They're just for mapping and reconnaissance, which is pretty much the lowest-risk application of this technology.

      4 votes