Military machines can increasingly move, search for targets, and even kill without human control. Growing computing power, coupled with advances in artificial intelligence, empowers autonomous weapons and platforms to carry out ever more sophisticated behaviors. Autonomy fundamentally means reducing human involvement in command and control, which in theory means that virtually every military platform and weapon can be made autonomous. Whether doing so is sensible or ethical is another question.
Current autonomous weapons have typically been used for tactical defense. Land and sea mines are extremely simple autonomous weapons, relying on mechanical triggers to keep an enemy from crossing or holding a particular piece of territory. Other autonomous weapons, like close-in weapon systems and active protection systems, defend military platforms against incoming projectiles. Of course, tactical defense may support strategic offense, protecting invading forces from a defender's attacks. Some systems, like loitering munitions, particularly radar-hunting missiles, are designed primarily for offense: to destroy defending air and missile defenses.
The future of autonomous weapons is unclear. A broad range of factors influence whether autonomy will ultimately favor offense or defense and the degree of this impact. These factors include the type and nature of the application, reliability in face of adversary interference, affordability and availability of application types, and overall effects on the cost of war.
Differing Applications of Autonomous Weapons
Autonomy can be applied to virtually any weapon system or platform, so the net effects for offense and defense depend on which applications prove most significant. For example, militaries are developing drone swarms to target air and missile defenses. Cheap drones may overwhelm and destroy defenses to ensure more expensive manned aircraft are safe from reprisal. But autonomy and artificial intelligence can also improve those same defenses, increasing the risks to manned aircraft. The need to protect against aerial drone swarm attacks is already driving improvements to those defenses.
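The saturation dynamic behind swarm attacks can be sketched with a back-of-envelope model. All numbers below are hypothetical, chosen only for illustration; this is a toy sketch, not a validated operational model. The point it makes is simple: once a swarm exceeds a defense's fixed engagement capacity, expected "leakers" climb sharply.

```python
# Toy saturation model with made-up parameters: each interceptor
# engages one drone and kills it with probability p_kill; drones
# beyond the defense's engagement capacity pass through unengaged.

def expected_leakers(n_drones: int, n_interceptors: int, p_kill: float) -> float:
    engaged = min(n_drones, n_interceptors)
    return n_drones - engaged * p_kill

# A hypothetical defense with 40 interceptors at an 80% kill probability:
for n in (10, 50, 100):
    print(f"{n:3d} drones -> {expected_leakers(n, 40, 0.8):5.1f} expected leakers")
```

Under these assumed numbers, a 10-drone raid is almost fully defeated, while a 100-drone swarm overwhelms the same defense, which is exactly why cheap mass is attractive to attackers.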
The nature of the application matters too. Offense-defense theorists argue nuclear weapons are the ultimate defensive weapon, because they ensure nuclear-armed states can retaliate with overwhelming destruction. If so, then any technology weakening that deterrent must favor the offense. Some researchers have argued that the creation of massive underwater sensor networks may render the ocean transparent, effectively eliminating undersea nuclear second-strike capabilities. Hypothetically, a mixture of unmanned undersea, surface, and aerial vehicles; sensors; and manned anti-submarine warfare systems would comb the ocean to find nuclear submarines. The argument goes that if an adversary can locate every nuclear submarine, they may be able to destroy them all in a single strike. The reality is likely more complex, given the challenge of processing and managing such a huge network and carrying out strikes against identified targets. Nonetheless, any meaningful risk to nuclear stability would certainly have outsized global effects.
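A one-line probability calculation illustrates why "locate every submarine" is such a demanding standard. The per-boat tracking probability below is an assumed, illustrative figure, and the sketch treats boats as independently tracked, which is itself a simplification.

```python
# Toy arithmetic with an assumed per-boat tracking probability: a
# disarming strike requires holding every submarine at risk at once,
# and that joint probability falls fast with fleet size (assuming
# each boat is tracked independently).

p_track_one = 0.9  # assumed chance the network holds any single boat at risk
for n_boats in (4, 8, 12):
    p_track_all = p_track_one ** n_boats
    print(f"{n_boats:2d} boats -> {p_track_all:.1%} chance all are tracked at once")
```

Even a network that tracks any single boat 90 percent of the time, under these assumptions, holds a twelve-boat fleet fully at risk less than a third of the time, and a disarming first strike cannot tolerate even one surviving boat.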
Autonomy and artificial intelligence can also play a support role for manned forces. Autonomous vehicles can provide logistical support by helping transport supplies and forces to the battlefield, which would favor offensive operations. But what happens if an autonomous convoy comes under attack? Limits on autonomous cognition may inhibit their response, making them more of a liability than an asset. At the same time, autonomous systems can help collect intelligence to identify targets, assess enemy defenses, and plan military actions, and artificial intelligence can help sift and process that information. This can help attackers identify vulnerabilities and plan attacks, while also granting defenders better situational awareness to monitor movements of attackers and plan ambushes.
Reliability of Autonomous Weapons
The reliability of autonomous weapons and platforms also affects the net impact of autonomy on offense and defense. Current machine vision systems are heavily dependent on training data. Machines require large amounts of data to distinguish between a car, a tank, a cruise ship, and a naval destroyer. Training data may exhibit structural biases that affect how and when the weapons are used. For example, an autonomous weapon might be able to recognize an unobstructed tank on a sunny day, but what about a foggy or snowy day, or if the tank is partially obscured by a tree or a building? Militaries cannot always anticipate the environmental characteristics of where the autonomous systems are deployed. Differences in climate, flora, architecture, and other factors may create reliability problems in deployed areas. For example, Russia deployed the unmanned combat vehicle Uran-9 in Syria, but it performed poorly: it struggled to spot enemies farther away than 1.25 miles, its sensors and weapons were useless while moving, the tracked suspension had unreliable parts, and the remote control system had a much shorter range than expected. For autonomous weapons to be meaningful in combat, these challenges need to be resolved.
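The occlusion problem can be illustrated with a toy matched-filter "detector" — a deliberately simplified stand-in for real machine vision, run on random synthetic data rather than imagery. Hiding part of the target visibly drops the recognition score, which is the same failure mode a fielded system faces when a tank sits behind a tree.

```python
import numpy as np

# Toy illustration, not a real vision system: score a scene by its
# normalized correlation with a stored target template, then hide
# half the target to mimic occlusion by a tree or building.

rng = np.random.default_rng(0)
template = rng.random((8, 8))          # synthetic "tank" signature

def match_score(scene: np.ndarray, tmpl: np.ndarray) -> float:
    s = (scene - scene.mean()) / scene.std()
    t = (tmpl - tmpl.mean()) / tmpl.std()
    return float((s * t).mean())       # 1.0 means a perfect match

clean = template.copy()
occluded = template.copy()
occluded[:, :4] = 0.0                  # half the target hidden

print(f"clean:    {match_score(clean, template):.2f}")
print(f"occluded: {match_score(occluded, template):.2f}")
```

Real detectors are far more sophisticated, but the underlying sensitivity to inputs that differ from what the system was built around is the same, and it is why training data rarely covers every climate, season, and backdrop a weapon will encounter.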
Enemy action also affects reliability. In the civilian domain, mere stickers on a stop sign have been enough to fool an autonomous car. Ideally, a military autonomous system would fuse multiple sensors to defeat such simple manipulation, but interference is still possible. In an extreme case, manipulation could cause autonomous weapons to fire on friendly forces. This unreliability may mean that autonomous weapons tend to favor the defense, as defending militaries can more readily control and shape the environment to create problems for attacking forces.
Effects on the Cost of War
Conversely, autonomous weapons may make wars easier to start, benefiting the offense. By removing humans from the battlefield, autonomous weapons reduce the immediate risk to human operators, potentially lowering the perceived cost of military conflict. If war is cheaper to start, then theoretically, war will happen more readily. A military composed of numerous autonomous and unmanned systems would exacerbate those concerns. A drone support force is one thing; an army of them is another entirely.
Affordability of Autonomous Weapons
The affordability of autonomous weapons may favor defense. In theory, autonomous weapons should be cheaper than manned systems. They can be more readily mass-produced since they do not need to support human life, and may be as simple as a small drone with a bomb strapped onto it. And defensive autonomous weapons are likely to be cheaper than offensive weapons, because defensive weapons do not require mobility and may be designed to control a known, expected battlefield. If so, autonomous weapons and platforms may be deployed in larger numbers by defending states, allowing them to impose greater costs on an attacking state.
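A back-of-envelope cost-exchange calculation shows why this asymmetry matters. All prices below are hypothetical, chosen only for illustration: even if cheap defensive systems trade unevenly against expensive attacking platforms, the attacker still pays far more in dollar terms.

```python
# Hypothetical prices for illustration only.
defender_unit_cost = 50_000        # a cheap autonomous interceptor
attacker_unit_cost = 2_000_000     # an expensive attacking platform

def cost_exchange_ratio(defenders_lost: int, attackers_lost: int) -> float:
    """Attacker losses in dollars per dollar of defender losses."""
    return (attackers_lost * attacker_unit_cost) / (defenders_lost * defender_unit_cost)

# Trading ten cheap defenders for two attacking platforms:
print(cost_exchange_ratio(10, 2))  # 8.0 -- the attacker pays 8x as much
```

Under these assumed prices, the defender can lose five systems for every attacker platform destroyed and still come out ahead financially, which is the logic behind fielding cheap defensive mass.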
Autonomous weapons and artificial intelligence are clearly growing features of global conflict, but what that means for global stability is unclear. Researchers and policy-makers need to better understand what weapons are used, how they are used, and by whom. That requires research and analysis at all levels of warfare, across all domains—land, sea, air, and space. States and civil society need to exploit the opportunities autonomy offers, while identifying and countering the risks. Global security depends on it.
Zachary Kallenborn is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), the Master Coordination Director for Project Exodus Relief helping evacuate high-risk Afghan refugees, an officially proclaimed U.S. Army “Mad Scientist,” and a national security consultant. His research on autonomous weapons, drone swarms, weapons of mass destruction (WMD), and WMD terrorism has been published in a wide range of peer-reviewed, wonky, and popular outlets, including the Brookings Institution, Foreign Policy, Slate, War on the Rocks, and the Nonproliferation Review. Journalists have written about and shared that research in the New York Times, NPR, Forbes, the New Scientist, and WIRED, among numerous others.
Image Credit: By Lt. Col. Leslie Pratt – afrc.af.mil, Public Domain.