The article below is the second installment in a two-part series. The first installment can be found here. Artificial intelligence can benefit humanity, or it can be married with weapons to create fully autonomous killing machines. Are people prepared for this third revolution in warfare? What are the pros and cons of such weapons, and should they be banned from the battlefield?
Cons
Machine killings violate the most basic of rights—the right to life and the concept of human dignity. Machines are not inherently moral beings that have human traits, such as empathy and compassion, which are essential for complicated and ethical decision-making in war.
Fundamental principles of the laws of war include the ability to judge the proportionality of an attack, to distinguish between civilians and combatants during conflict, and accountability in battle. Proportionality refers to the ability to weigh the military advantage of an operation against its civilian cost. Distinction requires that at all times combatants must be able to distinguish between civilians and combatants, and between civilian objects and military objectives. Military operations should not be directed at civilians or civilian objects like hospitals or schools. Indiscriminate attacks, or operations using indiscriminate means and methods of war, are prohibited. Machines would have difficulty reacting to unexpected, changing circumstances in battle, which could result in violations of the laws of war.
This brings up another problem with killer robots—accountability. It is unclear who would be held responsible if an autonomous weapon were to commit a war crime—the manufacturer of the weapon, the machine’s programmer, the military commander who launched the weapon, or even the killer robot itself. The difficulty in assigning culpability would make it extremely difficult to ensure justice, especially for victims.
Those who are not particularly concerned about autonomous weapons making war on human beings, or on each other, seem to assume two things: that the algorithms determining how such weapons function would be free of bias, and that all the complexities and confusion of warfare could be programmed so that killer robots would respond appropriately in any situation.
However, no neutral “superior intellect” would program the machines; instead, humans, not one of whom is free of bias, would create the algorithms. There are already issues of bias and other ethical problems with facial recognition technologies, which would likely be used with some autonomous weapons. And this example is but one of many to consider when it comes to programming autonomous weapons.
Consider a conflict between two powers that possess autonomous weapons of all types. The conflict escalates and a commander decides to attack the enemy with a swarm of autonomous fighter jets, all armed with autonomous missiles, and the opposing commander responds in kind. Since neither side knows the programming of the other’s swarming jets or munitions, it would be almost impossible to predict the outcome upon interaction. On top of that, the weapons could be spoofed or hacked compounding the threat.
Such a precarious, unpredictable situation could easily spiral out of control perhaps even leading to the use of nuclear weapons. In 2019, Lt. Gen. Jack Shanahan described war with autonomous weapons systems: “We are going to be shocked by the speed, the chaos, the bloodiness and the friction of a future fight in which this will be playing out in microseconds at times.”
Replacing soldiers with machines could make going to war easier. Many drone pilots are deeply and negatively affected by their operations, leading to high turnover in the job. Four former drone pilots participated in an interview “to register their opposition to the ongoing reliance on the technology as the US military’s modern weaponry of choice.” One referred to an operation he participated in as “cowardly murder.” Another learned, after leaving the program, that he had participated in drone strikes that killed 1,626 people while he sat safely in a room thousands of miles away.
With fully autonomous drones, there would be no turnover of pilots disturbed by their work, and autonomous drones would not give interviews about their misgivings over killing from thousands of miles away. However, a world of killer robots unbothered by human emotions would help erase the emotional restraint against combat and killing, making it easier to go to war.
Finally, given the trend of providing law enforcement agencies with military-grade weapons, it is not inconceivable that fully autonomous weapons could end up being used outside of armed conflict. Police and border-control forces with autonomous weapons could use them to suppress protests and dissent anywhere, and authoritarian regimes could be particularly prone to doing so.
The Way Forward: A New Treaty Prohibiting Killer Robots
Some states, in particular those vigorously pursuing ever-greater autonomy in weapons—the United States, Israel, South Korea, Russia, and China—argue that it is premature to ban killer robots. They contend it is impossible to prohibit a weapon that cannot even be defined. They also contend that it is counterintuitive to stop the research and design of such weapons when technology will advance enough to render the various arguments against them obsolete. These diversionary arguments are part of what brought meaningful action on killer robots in the CCW to a halt.
But the most significant reason for the lack of meaningful action in Geneva is that the CCW operates by consensus: a single country opposed to a treaty can stop negotiations before they even start. As such, “consensus” will never lead to dealing with autonomous weapons in any meaningful way and will instead result in a new, destabilizing arms race of robotic weapons.
The Campaign to Stop Killer Robots, along with other organizations and governments, shares the myriad concerns about a possible future in which the world is awash in killer robots of various types—in the air, on land, and in the ocean. They recognize the specious arguments for what they are and continue their work to help bring about a treaty prohibiting weapons that can target and kill human beings on their own.
There are various steps that could put a brake on autonomous weapons pending the negotiation of a treaty prohibiting weapons with full autonomy. In the past, for example, states have enacted moratoriums on weapons while discussions about them took place. Some have even enacted domestic legislation banning weapons, which helped build the momentum that led to weapons-ban treaties.
Thirty countries, the Secretary-General of the UN, four thousand five hundred artificial intelligence experts, and twenty-six Nobel Peace laureates have called for a ban on killer robots. In April 2019, the European Parliament blocked the use of any money from the EU Defense Fund to pursue the development of killer robots. A recent international poll commissioned by the Campaign to Stop Killer Robots found that sixty-one percent of those interviewed support a ban on autonomous weapons.
All of these steps will help build momentum toward a treaty prohibiting autonomous weapons, but unless the CCW dramatically changes course, the killer robot treaty will end up being negotiated outside of the UN. Already, proponents of the ban believe that the CCW has run its course and that it is time to move on.
. . .
Jody Williams received the Nobel Peace Prize in 1997 for her work to ban antipersonnel landmines, which she shared with the International Campaign to Ban Landmines. She is a co-founder of the Campaign to Stop Killer Robots. Williams also chairs the Nobel Women’s Initiative, which brings together five women Nobel Peace Prize recipients to support women’s organizations working in conflict zones to bring about sustainable peace with justice and equality.