Title: War, Artificial Intelligence, and the Future of Conflict
Artificial intelligence (AI) now influences every area of human life. The past decade has seen a drastic increase in the use of AI, including in facial recognition software, self-driving vehicles, search engines, and translation software. The growing acceptance of AI in modern society has coincided with its increased presence in modern warfare. The escalating weaponization of AI parallels the nuclear arms race of the Cold War, with nuclear weapons replaced by automated weapons systems. However, the international community, the United Nations, and international law have struggled to adapt to and regulate the use of automated weapons, which are rapidly changing the landscape of modern warfare.
The international community started to take notice of AI and its influence on modern warfare in 2012, with a series of documents outlining the use of automated weapons systems. These documents included a policy directive by the U.S. Department of Defense (DoD) on autonomy in weapons systems and a report from Human Rights Watch and Harvard Law School's International Human Rights Clinic (the 2012 HRW-IHRC report) calling for an outright ban on fully autonomous weapons.
The development and use of weapons that can perform autonomous functions during conflict is becoming a focus of both states and technology companies. In 2017, an open letter from the Future of Life Institute to the United Nations (UN), signed by 126 CEOs and founders of artificial intelligence and robotics companies, "implored" states to prevent an arms race in autonomous weapons systems (AWS). However, no international legal regulatory framework exists to address these concerns about the use of AI, particularly in the context of conflict. The closest existing legal provision, Article 17 of the International Covenant on Civil and Political Rights, addresses AI use only in relation to the right to privacy.
What is an Automated Weapon?
There are competing definitions of what constitutes AWS, although the UK Ministry of Defence (MoD) and the U.S. Department of Defense (DoD) have developed the two main ones. In 2011, the UK MoD defined AWS as "systems capable of understanding higher level intent and direction, namely of achieving the same level of situational understanding as a human and able to take appropriate action to bring about the desired state." The U.S. DoD took a different approach in 2023, defining AWS as weapon systems that, "once activated, can select and engage targets without further intervention" by a human operator. The 2012 HRW-IHRC report advanced a similar definition for the international community, defining AWS as "fully autonomous weapons that could select and engage targets without human intervention." The NATO Joint Air Power Competence Centre (JAPCC) extends the notion of automation even further, to "consciousness and self-determination." Examples of automated weapons include defensive systems like the Israeli Iron Dome and the German MANTIS, as well as active protection systems for vehicles like the Swedish LEDS-150. A new definition would also need to include automated weapons used in non-conflict situations, like the South Korean Super aEgis II, which serves as a peacetime surveillance device along the border between South and North Korea.
However, the real challenge lies in future-proofing definitions of AWS. Definitions must not only capture systems not already accounted for, such as the Super aEgis II, but also anticipate AWS that may emerge in the future. In particular, the international community must agree on a definition broad enough to encompass systems built on algorithms that model human cognition and therefore possess humanlike decision-making capabilities.
Despite this need, the international community has yet to agree on regulations for AWS. The UN Convention on Certain Conventional Weapons (CCW) convenes a Group of Governmental Experts (GGE), which meets annually to discuss the implementation of the CCW's protocols and related weapons issues. The latest GGE meeting, held in May 2023, ended without any substantial progress on AWS, as the GGE did not agree on any regulatory safeguards. Its draft report also failed to advance a legal framework, though it did introduce prohibitions centered on the need for human control of AWS as well as regulations on the development of AWS. Fifty-two states issued a joint statement of support for the draft report, but they also characterized it as a minimum standard and emphasized the need for a far more robust and ambitious legal framework. At the same May 2023 meeting, the GGE resolved to organize longer discussions on emerging lethal AWS technologies in March and August 2024. While those discussions may produce more meaningful developments, a legal framework to regulate the development and deployment of AWS has yet to emerge.
The Race for Killer Robots
Russian President Vladimir Putin has stated that the nation that leads in AI "will become the ruler of the world." The advancement of AI in modern warfare will forever alter the relationships among great powers like the United States, China, and Russia, as well as their relationships with the private technology industry. For this reason, China has committed 150 billion dollars to becoming the world leader in AI technology, compared to Russian spending of 181 million dollars from 2021 to 2023 and U.S. spending of 4.6 billion dollars. In 2019, Jane's estimated that more than 80,000 surveillance drones and almost 2,000 attack drones would be purchased around the world over the following decade. The UK operates missile-bearing drones and plans to spend 415 million pounds on Protector drones by 2024. Saudi Arabia, a newer entrant in the drone marketplace, should not be underestimated either: it spent 69 billion dollars on defense in 2023, 23 percent of its national budget. Additionally, Saudi Arabia plans to create a 40 billion dollar fund to invest in AI, which would make it the world's largest AI investor.
With spending on drone and AI development increasing so rapidly, advances in technology might eventually enable drones to make decisions instantaneously during conflict without human input. This could eliminate peaceful negotiation from conflict, as drones' reactions would consist purely of retaliatory violence. Drone technology has already advanced from NATO's use of drones to identify hidden Serbian strategic positions during the Kosovo War in 1999 to the United States' deployment of drones in the immediate aftermath of the September 11 terrorist attacks. After an intelligence, surveillance, and reconnaissance (ISR) drone successfully located Osama bin Laden, the U.S. military increasingly used drones and outfitted them with lethal payloads, carrying out 14,000 drone strikes in Afghanistan alone from 2010 to 2020.
The United States, the United Kingdom, and Israel remain the largest users of drones, and their arsenals continue to grow. The United States and the United Kingdom have used weaponized drones for over a decade, including the Predator and the Reaper, both made by the California-based company General Atomics. According to Drone Wars, over four years of conflict in Syria from 2014 to 2018, the United Kingdom flew Reaper drones on more than 2,400 strategic missions, the equivalent of nearly two per day. The Pentagon estimates that by 2035, remotely piloted aircraft will make up 70 percent of the U.S. Air Force. Meanwhile, Israel has been developing its own weaponized drones and has deployed them in Gaza to conduct surveillance, deliver explosives, and more.
Furthermore, drone technology is spreading rapidly to militaries around the world. Nearly every NATO member state now has the capability to use drones in conflict. In the last five years, both Turkey and Pakistan have also created drone manufacturing programs. China currently supplies several states with its Wing Loong and CH-series drones, including the UAE, Egypt, Saudi Arabia, Nigeria, and Iraq. Even non-state actors are using drones. Hezbollah has used Iranian-built reconnaissance drones to violate Israeli airspace, while Hamas has been using drones against Israel since October 2023.
AI use in warfare is also spreading rapidly. Reports suggest that Ukraine has equipped its long-range drones with AI that can autonomously identify terrain and military targets, using them to launch successful attacks against Russian refineries. Israel has also reportedly used the "Lavender" AI system in the conflict in Gaza to identify 37,000 Hamas targets. Accordingly, the current conflict between Israel and Hamas has been dubbed the first "AI war." However, no evidence indicates that an AWS, a system without significant human control, has yet been used in conflict.
As anxieties grow over the emergence of so-called "killer robots," AI use in warfare raises increasingly salient ethical and legal questions. In particular, drones may not be able to distinguish between combatants and civilians. Thankfully, many of these AI technologies are still in development. The "killer robots" image refers to unmanned aircraft that can operate fully autonomously; however, most current AI functions well only in a narrow, predetermined set of circumstances, with input from human operators.
Nonetheless, the increasing integration of AI into drones and other AWS creates a very real danger that conflict will be decided without meaningful human control. The use of violence during conflict may come to be determined by the instincts of machines incapable of navigating the moral ambiguities of war or making ethical decisions. It is impossible to predict whether the law can keep up with, let alone stop, such technological advances, but the current legal framework certainly lacks clarity and foresight.
Conclusion
It is uncertain whether the ethical and moral questions surrounding conflict driven by algorithms and machines without human intervention can ever be answered. The use of automated drones, which are not weapons themselves but rather platforms for delivering weapons, is not specifically regulated under international law. Although drones are governed by the same principles as all weapons under international law, namely the rules of distinction, proportionality, and the prohibition of indiscriminate attacks, the absence of any specific laws makes regulation exceedingly difficult. The international community must establish an international legal framework that ensures humans always retain meaningful control over AWS and that systems do not autonomously select military targets during conflict.
The GGE's latest report, published in 2023, emphasizes that legal measures must exist to restrict the use of "weapon systems based on emerging technologies in the area of lethal autonomous weapons systems that, once activated, are able to identify, select, track, and apply force to targets, without further human intervention." AWS are an unprecedented innovation in the weaponry of war and require a new international legal framework robust and flexible enough to keep up with the breakneck pace of technological progress. The GGE must therefore continue to push the UN to adopt a framework that restricts the development and use of AWS in modern warfare.
. . .
Kristian Humble is an Associate Professor of International Law in the School of Law and Criminology at the University of Greenwich, London. He is widely published on topics within international law, including human rights, artificial intelligence, the right to privacy, populism, modern warfare, and international relations. He is also a contributor to the House of Lords Select Committee on Artificial Intelligence in Weapon Systems.
Image credit: Lt. Col. Leslie Pratt, public domain, via Wikimedia Commons.