- Four Battlegrounds by Paul Scharre explores the competition between AI superpowers and the four key elements that define this struggle: data, computing power, talent, and institutions.
- This book excerpt explains how artificial intelligence could soon change how militaries fight on the battlefield.
- AI could transform battle tactics to the point that humans can't keep up, a scenario that Scharre refers to as a "singularity" in warfare.
It can be daunting to stand at the dawn of a new cognitive age of human-machine teaming in warfare and imagine the future, and we should be humble about our ability to do so. The deep learning revolution is only a decade old, and the capabilities of AI systems and how they are used decades from now may bear little resemblance to today. In 1913, a decade after the first flight at Kitty Hawk, airplanes were just beginning to be integrated into military forces, predominantly in reconnaissance roles. There was no hint of the fleets of bombers that would devastate entire cities in World War II or the supersonic jets and intercontinental nuclear bombers that would be invented during the Cold War. We are at a similar position with AI, attempting to peer into an unknown and highly uncertain future.
AI’s ability to imbue military forces with greater situational awareness, precision, coordination, and speed is likely to result in a battlefield that is faster paced, more transparent, and more lethal. The ability of AI systems to process large amounts of information and take in the totality of action is likely to make it increasingly difficult for military forces to hide, placing a premium on camouflage, deception, and decoys. New tactics will be needed as a result.
In World War I, militaries struggled to adapt their tactics to the new reality the industrial revolution had unleashed on the battlefield. The machine gun rendered nineteenth-century tactics ineffective by increasing lethality through both a higher rate of fire and greater effective range. During the Napoleonic era, infantry troops advancing over open terrain against fixed defenses could face an average of two shots fired at each soldier during the course of their advance. (The firepower was also not very accurate.) By 1916, defenders armed with machine guns and rifles could pour an average of 200 shots per soldier at attackers moving over open terrain. With this hundredfold increase in firepower, defending fire didn’t need to be particularly accurate to be lethal. In the bloody trenches of World War I, military leaders stuck to their outdated tactics, throwing bodies against fixed positions in a vain attempt to break the deadlock of trench warfare. On the first day of the Battle of the Somme, Great Britain lost 19,000 men attempting to break through the German lines. A generation of European men was killed or wounded in the trenches of World War I. By early 1918, after three and a half long years of fighting, military tactics had finally adapted to the brutal realities of industrialized warfare.
Future military tactics will also need to adapt to a changed battlefield in which the enemy has greater visibility and the capacity to quickly and precisely strike exposed forces. The gunslinger advantage of AI in superhuman reaction times is likely to lead militaries to embrace automation for circumstances in which even a split-second advantage in shooting first substantially increases survivability. AI-enabled command and control is also likely to lead to greater coordination among distributed units across the battlefield, allowing greater dispersal of forces and more effective long-range coordinated campaigning.
AI enables a radical shift in military doctrine toward swarming, a method of fighting in which many disparate elements maneuver independently but cooperatively as part of a cohesive whole. Swarming differs from traditional maneuver warfare, in which military units move as part of formations. For example, in contemporary ground combat, a line of soldiers might pin down enemy forces, while another element maneuvers to achieve a flanking position on the enemy. Militaries generally try to limit the number of independently maneuvering units working in close proximity in order to minimize the potential for fratricide. Similarly, soldiers moving as a unit may spread out to avoid being targeted, but they still move as one, stopping and starting together and maintaining the same speed and spacing. Swarming is different. It involves individual elements maneuvering independently but to achieve a common goal—more like a sports team, with individuals moving in an organic, fluid way, reacting to each other’s movements. Few sports have more than a dozen players on the field at a time, though. (Australian football is an outlier, with eighteen players on the field per team.) Military squads have varied over time and by country but generally number around seven to fourteen individuals. The similarity in these numbers is not a coincidence. They are set by the limits of human cognition. A hundred sports players working together on the field would be chaos. Coordinating their actions would require the more regimented structures that militaries use to manage large numbers of troops, breaking them down into units and subunits with leaders for each. These cognitive limits do not apply to AI systems, which could coordinate hundreds or thousands of independently maneuvering elements toward a coherent whole.
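The coordination described above, with many elements maneuvering independently from local rules yet cohering toward a common goal, can be illustrated with a minimal flocking-style simulation. This is a hypothetical sketch, not anything from the book: the rule weights, damping factor, and agent count are all assumed for illustration. Each agent steers using only three forces (cohesion toward the group, separation from close neighbors, and attraction toward a shared objective), with no central commander issuing orders.

```python
import math
import random

def step(agents, goal, dt=0.1):
    """Advance the swarm one tick. Each agent steers independently from
    local rules, yet the group converges on a shared objective."""
    new = []
    for i, (px, py, vx, vy) in enumerate(agents):
        # Cohesion: a weak pull toward the centroid of the other agents.
        cx = sum(a[0] for j, a in enumerate(agents) if j != i) / (len(agents) - 1)
        cy = sum(a[1] for j, a in enumerate(agents) if j != i) / (len(agents) - 1)
        ax, ay = (cx - px) * 0.05, (cy - py) * 0.05
        # Separation: push away from any neighbor closer than one unit.
        for j, (qx, qy, _, _) in enumerate(agents):
            if j != i:
                d = math.hypot(px - qx, py - qy)
                if 0 < d < 1.0:
                    ax += (px - qx) / d
                    ay += (py - qy) / d
        # Goal attraction: the common objective every agent pursues.
        ax += (goal[0] - px) * 0.2
        ay += (goal[1] - py) * 0.2
        # Damped velocity update keeps the motion stable.
        vx = (vx + ax * dt) * 0.95
        vy = (vy + ay * dt) * 0.95
        new.append((px + vx * dt, py + vy * dt, vx, vy))
    return new

def mean_goal_distance(agents, goal):
    return sum(math.hypot(p[0] - goal[0], p[1] - goal[1])
               for p in agents) / len(agents)

random.seed(0)
goal = (50.0, 50.0)
# Thirty agents scattered near the origin, initially at rest.
swarm = [(random.uniform(0, 10), random.uniform(0, 10), 0.0, 0.0)
         for _ in range(30)]
start = mean_goal_distance(swarm, goal)
for _ in range(500):
    swarm = step(swarm, goal)
end = mean_goal_distance(swarm, goal)
```

Nothing in the update rule caps the number of agents; the same three rules scale from thirty elements to thousands, which is exactly the property that human cognition lacks.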
Swarming tactics have occurred throughout military history, although their use has often been limited, in part because of the challenge of maintaining cohesion among large numbers of independently maneuvering forces. If successfully executed, swarming has many advantages. It allows military forces to disperse when attacked, avoiding presenting the enemy with a single formation to target. Swarming forces can then reconverge when it is advantageous to attack enemy forces. Swarms present an enemy with a large number of independently moving targets to track, as well as the threat of simultaneous attack from multiple directions.
True swarms are more than just a deluge of attackers. Swarming entails individual elements coordinating and altering their behavior in response to one another. While groups of small aerial drones have seen increasing use in combat, including in mass drone attacks, most do not exhibit true cooperative swarming behavior in which individual drones respond to each other’s actions. In June 2021, Israel allegedly used the first true drone swarm in an attack in Gaza. Nonmilitary robot swarms have been demonstrated in research labs, and multiagent AI cooperation has been shown in games. It is only a matter of time before drone swarms become a regular tactical tool in combat.
At first, swarming will be merely a tactic narrowly used in certain situations, but AI opens the possibility that swarming, over time, could completely restructure how militaries fight at the operational level of war. Rather than military formations maneuvering to gain a positional advantage, swarming could become the dominant mode of military operations, with thousands of disparate units spreading across a battlefield then converging to attack. Such an approach would be very challenging for humans to counteract, as it could overload the cognitive abilities of human defenders. If large-scale AI-driven swarming proved successful as an operational approach to organizing and employing military forces, other militaries could be forced to follow suit to survive. Such a development is likely decades away, if it comes at all. There is a major leap between small tactical drone swarms, a near-term prospect, and the widespread use of AI-driven swarming across the entire battlefield. But AI could enable such a future.
If AI swarms become a dominant form of warfare, they could lead to a change in how military forces are organized. Today, militaries are organized in a hierarchical fashion into squads, platoons, companies, battalions, brigades, divisions, and corps. Each level usually consolidates three to five elements into a larger unit. There are roughly three to four squads in a platoon, three to five platoons in a company, three to five companies in a battalion, and so on. Militaries use the term “span of control” to refer to the number of subordinate units a commander directs, and the limits of span of control come from human cognition. A commander cannot reasonably directly manage a hundred subordinates at a time. These hierarchical structures would not be needed for AI command and control and in fact may be a hindrance to optimal operations.
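The span-of-control arithmetic above implies that the depth of a command hierarchy grows with the logarithm of force size, which is why commanding thousands of elements requires many layers of human commanders but, in principle, very few for a machine. A minimal sketch of that arithmetic, with purely illustrative force sizes and spans:

```python
import math

def command_levels(units, span):
    """Levels of hierarchy needed so that no commander directly
    manages more than `span` subordinate elements."""
    levels = 0
    while units > 1:
        # Each layer of commanders consolidates `span` elements apiece.
        units = math.ceil(units / span)
        levels += 1
    return levels

# A human span of control of roughly four subordinates per commander:
human_depth = command_levels(10_000, span=4)    # seven layers of command
# A hypothetical AI controller directing a hundred elements at once:
ai_depth = command_levels(10_000, span=100)     # two layers
```

Under these assumed numbers, raising the span of control from four to one hundred collapses seven command layers into two, which is the structural change the paragraph above anticipates.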
Humans commanding an AI swarm would have a very different relationship with battlefield action than humans have today. Humans would establish the swarm’s goals, supervise its operation, and even conceivably intervene to make changes, but they would effectively hand over execution of swarm behavior to one (or many) AI systems. Humans would have ceded the “micro” of combat action to AI, and over time the amount of combat authority delegated to AI systems could grow. As machines become more advanced, the centaur model of human-machine teaming may no longer work. Garry Kasparov, who created human-machine teaming chess after his loss to Deep Blue, has suggested that as machines become more intelligent the human-machine relationship will switch to a “shepherd model” in which “the machines become the experts and humans oversee them.”
Shepherding advanced AI systems may not be so simple. The increased scale and speed of operations enabled by AI could begin to push warfare out of human control. Such a shift would not happen overnight. It would likely take decades to unfold and happen incrementally, but little by little militaries could find themselves ceding more and more decision-making to machines. Just as warfare today is fought by humans but mediated through physical machines—tanks, airplanes, ships, and machine guns—warfare in the future could be fought between humans but mediated by AI systems that plan and execute combat.
Some Chinese military scholars have speculated about the potential for a future “singularity” in warfare, in which the pace of AI-driven action on the battlefield exceeds human cognition. In the article “Artificial intelligence: disruptively changing the ‘rules of the game,’ ” Chen Hanghui of the PLA’s Army Command College described such a potential development:
In the future battlefield, with the continuous advancement of artificial intelligence and human-machine integration technology, the pace of combat will become faster and faster until it arrives at a “singularity”: The human brain will no longer be able to handle the ever-changing battlefield situation and must give up most of the decision-making power to highly intelligent machines.
American defense scholars have hypothesized about a similar development, which retired general John Allen and tech entrepreneur and author Amir Husain have termed “hyperwar.”
The evolution of warfare into a regime that is beyond human control would be a profound and troubling development. Humans would lose the ability to effectively control battlefield actions, not just at the tactical “micro” level of how individual units maneuver but at the strategic level of how the war unfolds. Even if humans choose whether to start a conflict, they may lose the ability to control escalation or terminate a war at the time of their choosing. Accidents or unexpected AI decisions could lead to widespread devastation before humans could intervene. Such a development is unlikely in the near- or even mid-term, but if a battlefield singularity is the long-term outcome of the integration of AI into military forces, humanity risks a dangerous future in which wars could spiral out of human control.