Aug 2024 - "America isn’t ready for the wars of the future" by Mark Milley (ex-Chairman of the Joint Chiefs of Staff) and Eric Schmidt (ex-CEO of Google)
Hopefully (and even this is bleak) we reach a sort of MAD policy, with major powers agreeing that full automation of war is the first step toward human annihilation.
There have been multiple incidents in which the Soviets nearly fired a nuke, only for a human to realize something was wrong (I'm just not aware of any American situations like this, but I'd be curious about them). We need touchy-feely fleshbags involved in war because, as messed up as we can be and as dehumanizing as war can be, human limits for torture are far below a robot's threshold for destruction in many contexts.
I think the best-case scenario is fully automated battlefields with robots fighting robots, but that's basically just science fiction (I actually think that's in something?), and it wouldn't resolve the conflicts by any conventional means (casualties -> loss of population -> peace treaty to stop the destruction).
I'm not opposed to the use of these technologies in a war context if they can minimize casualties. If we're going to have people fighting, however, we need as little abstraction from the battlefield as possible to ensure it's not like playing a video game, where even completely normal people become capable of more heinous things.
I'm personally pessimistic that any MAD paradigm will emerge with robotics in warfare. What I'd say makes MAD work with nukes is that there were no gradations between "normal" bombs and nuclear bombs; the weakest nuke was still many times stronger and more devastating than a conventional bomb (on that note, I think the first deployment of tactical nukes may risk ending MAD). With automated robotic weapons of war, I think the capabilities will be too much of a gradient for us to perceive them as causing mutual destruction, rightly or wrongly.
Submitted this because 1) the writers are objectively interesting and 2) I didn't realize we were using drones without human input for war?
Also following up with my stream-of-consciousness opinions:
Lastly, the AI thing reminds me of talking about war with a friend. Is war purely about technological might? Is it purely about strategy? Barring situations of being oppressed, why would someone wage war when the costs are as steep as they are? We wondered if you could use chess, robotics, etc. to circumvent the need to involve people at all. Why bother signing up civilians and youths for the pain of a war that would simply be decided by strategy (including home-terrain strategies) or tech? I guess war is a multipronged effort: everything from financial warfare via sanctions, to manipulating public perception, to war strategy, to resource richness, and so on. Life is full of variables that could tip the scale and defy expectations. But it's hard not to wish these things could be done without bloodshed.
Yeah, during the Arab Spring many people were pretty optimistic and wanted to support the underdog. You can make it look sinister, but it was sort of like supporting Ukraine nowadays.
There's a sense in which a war indicates uncertainty about the outcome. If both sides knew what the war would cost them and what they would gain, they'd likely avoid it by negotiating some alternative. But in real life, attackers often badly miscalculate how bad it will be for their side. Also, the leaders aren't the ones doing the fighting.
I don't think AI will lead to better decisions at the strategic level: there's too much uncertainty, in part because a lot of the technology is new, and that uncertainty can't be magicked away. (Also, it's often said that in war the goals are simple, but the simplest things are difficult.) At the tactical level, though, the technology changes rapidly, because safety and precision matter a lot less when there's a war. Winning is everything and mistakes are collateral damage. Militaries can take shortcuts and accept risks that wouldn't be acceptable if there weren't a war.
For those interested, here's a paper discussing the concept of "accidental war" (war that results from miscalculation). It's more about accidental escalation than about pure miscalculation, but it's worth a read.
https://sais.jhu.edu/kissinger/programs-and-projects/kissinger-center-papers/exculpating-myth-accidental-war
The author takes a pretty negative view of the concept (a view I don't necessarily agree with), but it's an interesting exploration nonetheless.
Maybe it was my youth, plus it being the first "social media"-supported revolution, but I agree, it totally did not feel sinister, even knowing that there was tampering. After all, it's just freedom of thought. In hindsight, it's another data point that forced regime change is pretty dangerous (not saying it's bad, but average people will pay a price), and that social media does not communicate depth of policy. We support sentiments, not plans, and create power vacuums. When Ukraine came around, I was surprised to be an outlier in my opinions on how to act.
In hindsight, yes, a closed system like the one I suggested has less room for cheating or other means of creating an advantage. Real life does have that aura of uncertainty. Also, my one wish for the world has honestly been that people who call for war should fight their own wars first. I've been told that's basically a Dune-esque effort to breed out aggression, though.
This is true. My impression of Ukraine is in part that it's been an "exciting" effort for private defense companies, giving them a way to test out their tech and to take advantage of the local technical talent as well. (I don't get any joy out of saying this.)
It's a spectrum. Is a Javelin missile a drone without human input? Once launched, it tracks its target visually (more or less) without any human input. The technology here is definitely progressing toward weapons that can select their own targets given certain parameters.
Yeah, this part is just fear-mongering BS. There's no substitute for human judgement, and every time I've seen breakthrough claims like this outside the military context, it's been marginally passable garbage in contrived situations. It's in the interest of everyone involved to pretend it's more significant than it actually is.
Reading between the lines here (and doing a bit more background reading), these can't hurt anyone. They're just for mapping and reconnaissance, which is pretty much the lowest-risk application of this technology.