Killer Robots Pose Grave Threats to Civilian Safety and Ethical Norms

Members of the Campaign to Stop Killer Robots meet in 2013. | Campaign to Stop Killer Robots

The possibility of so-called “killer robots” poses significant threats to established ethical, diplomatic and legal norms, according to participants of a press briefing at the 2019 AAAS Annual Meeting. Countries, institutions and associations must work together to build regulatory frameworks and confront the prospective threat of these machines, the speakers said.

Lethal autonomous weapons, or military robots that can engage and kill targets without any human control, do not yet exist. However, autonomy and artificial intelligence capabilities in weapons are rapidly increasing, and countries such as the U.S., China and Israel are heavily investing in the development of autonomous weapon systems, the panelists said.

As weapons technology has advanced, researchers, industry leaders and other experts have sounded the alarm. In August of 2017, more than 100 experts signed an open letter to the United Nations, claiming “killer robots” represent a potential third revolution in warfare after gunpowder and nuclear weapons. The signatories claimed that killer robots could carry drastic and negative consequences for political stability and civilian safety and must be pre-emptively banned.

The briefing panelists fielded questions about the main opponents to banning killer robots, the current state of technological progress on autonomous weapons systems and the state of play of diplomatic talks on the issue.

When discussing killer robots, “we’re talking about much simpler technologies than ‘Terminator,’ which are at best a few years away,” said Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, Australia.

“You can see many of these under development today in every theater of war, such as autonomous drones in the air, autonomous ships and submarines in the sea and on land with autonomous vehicles and tanks,” he said.

Walsh stressed that although there are some arguments for autonomous weapons, such as the fact they might reduce collateral damage in war zones, those arguments don’t hold up to scrutiny, and there are many more valid counterarguments against autonomous weapons.

“Autonomous weapons will change the speed and duration of warfare and the already delicate geopolitical balance between countries,” he said. “Also, these weapons will cross a red line, because machines do not have a moral capability, today or ever, to be able to take these sorts of decisions.”

Mary Wareham, Peter Asaro and Toby Walsh spoke Thursday during a press briefing on killer robots at the 2019 AAAS Annual Meeting. | Adam Cohen/AAAS

Mary Wareham, coordinator of the Campaign to Stop Killer Robots at Human Rights Watch in Washington, D.C., argued that public and international support for banning killer robots has steadily increased in the past decade.

Wareham, who worked for the Vietnam Veterans of America Foundation in support of its Nobel prize-winning International Campaign to Ban Landmines, now coordinates the Campaign to Stop Killer Robots (CSKR), an international coalition of non-governmental organizations working to ban lethal autonomous weapon systems.

“Our campaign is now comprised of 93 non-governmental organizations and 53 countries; we’re growing very rapidly now around the world,” she said. “Our objective is to have true movement here [on this issue].”

She discussed results from an Ipsos poll released in January this year and commissioned by the CSKR showing that the public strongly supports a ban on killer robots and said that opposition is clearly rising. According to the poll results, 61% of respondents from 26 countries said they “totally oppose” the use of killer robots in war, compared to 56% two years ago.

Furthermore, many poll participants indicated the use of killer robots crosses a clear moral line, according to Wareham. Around 66% of those opposed to killer robots said their opposition was due to their belief that machines should not be allowed to kill and 54% said they opposed killer robots because the weapons would be unaccountable.

“Public opposition was strong across the board, and this opposition was strong for both men and women, although it was men who were more likely to favor these weapons systems,” she said.

A chief goal for the CSKR is to push countries to adopt a formal and legally binding ban on killer robots, Wareham said. However, progress at the Convention on Certain Conventional Weapons, a U.N. body that prohibits or restricts conventional weapons, has been mixed. At the convention’s latest meeting in Geneva in November of 2018, El Salvador and Morocco added their names to a list of 28 countries calling for a ban on fully autonomous weapons. However, Russia blocked the negotiations from continuing and reduced the time dedicated to talks in 2019.

“The diplomacy has been failing, so what we’ve been doing in the campaign is focusing our efforts on the national level,” she said. “We don’t know how the story will end, but we hope it will end with an international treaty that provides guidance stigmatizing the removal of human control.”

Peter Asaro, associate professor of media studies at The New School in New York, N.Y., drew upon his experiences as a participating member in U.N. discussions on autonomous weapons and his role as the co-founder of the International Committee for Robot Arms Control (ICRAC), an association that brings together experts in robotics, philosophy and human rights.

“What we’ve been lobbying for is a complete ban on some technical capabilities involved in autonomous weapons, but also a more general requirement that all weapons systems should have meaningful human control over the targeting,” he said.

Asaro cited the example of Project Maven, a Google project with the Pentagon to develop automatic visual object recognition within visual feeds for drones. Many concerned employees at Google reached out to ICRAC, which organized a letter from 1,400 scientists to support the workers. Facing intense media pressure, Google subsequently said it would not renew the contract after it expires in 2019 and issued a set of ethical principles for developing artificial intelligence.

“We’re also fearful autonomous weapons could constitute a new kind of weapon of mass destruction, to the extent that a small group could launch large numbers of autonomous weapons with devastating effects on population centers,” he added.

When asked who they see as their opponents, the participants identified several significant military powers such as the U.S. and Russia, which have blocked diplomatic proposals at the U.N. Asaro said these countries anticipate gaining tactical military advantages from these weapons over their competitors. However, he believes the advantages will be short-lived, and argued these nations haven’t fully considered how these systems could potentially destabilize relations and undermine existing forms of deterrence.