Jonathan Drake is a Senior Program Associate with the Geospatial Technologies Project, part of the Scientific Responsibility, Human Rights and Law Program at the American Association for the Advancement of Science.
“We completely understand the public’s concern about futuristic robots feeding on the human population, but that is not our mission.”
With these words, Cyclone Power Technologies CEO Harry Schoell attempted to extinguish the flames of what had become a major public relations disaster. On July 7, 2009, Schoell’s company announced that it had completed preliminary work on an experimental engine that would be used to power an autonomous robot developed for the Defense Advanced Research Projects Agency (DARPA). Envisioned as a platform capable of performing “long-range, long-endurance missions without the need for manual or conventional re-fueling,” the robot, assembled by Robotic Technology, Inc. (RTI), would accomplish these feats of perseverance through its patented ability to “find, ingest, and extract energy from biomass in the environment.” It was known as the “Energetically Autonomous Tactical Robot (EATR™)” – and it wielded a chainsaw. [1, 2]
Barely a week later, Schoell was in full damage-control mode. Over the previous few days, headlines savaging his company had blared from front pages in print and across the Internet. “Military Researchers Develop Corpse-Eating Robots,” announced Wired magazine. [3] “DARPA Stops Trying Not to Be Terrifying,” opined tech blog Gizmodo. [4] In the UK, The Register dispassionately reported “Robot land-steamers to consume all life on Earth as fuel.” [5] Cyclone and RTI’s joint press release, intended to quell this storm of bad publicity, did little to allay such fears. Insisting that “this robot is strictly vegetarian” (emphasis in original), the statement reminded readers that “the commercial applications alone for this earth-friendly energy solution are enormous,” and went on to reassure an anxious public that “desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI.” [1]
The whole incident may read like the opening scene of a dark science-fiction comedy, but it was all too real, and it highlights the difficulties that arise when technological research outpaces attention to the ethical implications of autonomous systems. From a strictly engineering standpoint, after all, the concept was sound; biomass such as wood and peat has been used as fuel for millennia. When that capability is paired with a robot making its own decisions in the real world, however, the potential for unintended consequences deserves far more attention than it received. In the case of EATR™, the failure to anticipate how the public would react to a chainsaw-brandishing, biomass-foraging automaton produced unintended consequences of its own for the company, and the project was abandoned.
Unintended consequences result from many new inventions. Autonomous systems, however, are unique in that the decisions that give rise to these consequences are, at least partially, made without human input. The most severe of these potential adverse outcomes arise from machines that are designed to cause harm from the outset, i.e., weapons systems. Long a staple of science fiction films, weapons incorporating varying degrees of autonomous functionality have in fact existed for some time, with landmines being one of the simplest, and most ethically problematic, examples of the technology. Today, however, the science of artificial intelligence has advanced to the point that the construction of sophisticated fully autonomous robots is a real possibility. In response, in 2012 the “Campaign to Stop Killer Robots” was launched by a coalition of NGOs seeking to ensure that life-or-death decisions remain firmly in human hands. [6]
Although not associated with the campaign, one organization philosophically aligned with maintaining human control over lethal force is the U.S. Department of Defense, which that same year issued Directive 3000.09, establishing its policy that fully autonomous weapon systems are only to “be used to apply non-lethal, non-kinetic force such as some forms of electronic attack.” [7] Lethal force, under current policy, requires human control. While organizations such as Human Rights Watch consider this policy a positive development from an ethics standpoint, they have nonetheless noted that the policy applies only until 2022, and can be waived by the Deputy Secretary of Defense in cases of “urgent operational need.” [8, 9] In the absence of a broad international consensus, some experts worry that such a need may become unavoidable if, in the future, adversaries are able to increase their battlefield advantage by progressively surrendering control to computers. The end state of such an arms race, according to futurist Michael LaTorra, could be that “after a few cycles of improvement, the race to develop ever more powerful military robots could cross a threshold in which the latest generation of autonomous military robots would be able to outfight any human-controlled military system.” [10]
Even with the cooperation of governments, however, the risks of destructive uses of autonomous technology remain. The same increase in capability and decrease in cost that has revolutionized the electronics industry over the last several decades has also resulted in the rapid proliferation of autonomous technology into the consumer space. This development has given non-state actors access to capabilities that, in the recent past, were available only to governments. Unmanned aerial vehicles (UAVs, colloquially known as “drones”), for instance, many of which can follow a defined flight path, perform tasks, and return without human intervention, were linked to a number of incidents surrounding nuclear facilities in France in 2014. [11] In the United States, a plot to attack the Pentagon with GPS-guided UAVs was disrupted by federal agents in 2011. [12] Ground-based autonomous systems are also well within the capabilities of non-state actors. One U.S. hobbyist, for example, demonstrated the construction of a functional “robot sentry” with autonomous target acquisition and tracking software. [13] While his creation was armed only with a paintball gun, the potential for a malevolent user to adapt the same functionality to lethal effect is clear.
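To see just how low this barrier to entry has become, consider a minimal sketch of the core of such a sentry’s software. This is not the hobbyist’s actual code, and the camera index and detection threshold are illustrative assumptions; it simply shows how a few dozen lines of Python and the freely available OpenCV library (version 4.x is assumed here) can acquire and track a moving target by frame differencing, producing the aim point that a physical turret would translate into pan and tilt commands.

    # Sketch of autonomous target acquisition via frame differencing.
    # Assumes OpenCV 4.x and a webcam at index 0 (both are assumptions).
    import cv2

    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no camera found")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pixels that changed since the previous frame are candidate targets.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            # Lock onto the largest moving region; its centroid is the
            # aim point a real turret would convert into servo angles.
            target = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(target)
            print("target centroid:", (x + w // 2, y + h // 2))
        prev_gray = gray

Everything difficult about the problem, in other words, now lives in commodity hardware and open-source software; only the choice of what to mount on the turret remains with the builder.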
Although less dramatic than weaponized robots, the development of autonomous technology in the private sector also has the potential to be highly disruptive, on both an individual and a societal level, by automating processes once performed by humans. As with autonomous weapons, concerns about workers being replaced by machines have existed in various forms since the industrial era. As in the military sector, however, the pace of change is accelerating at a remarkable rate. Since the end of the most recent recession, for example, manufacturing output in the United States has increased by over twenty percent, but employment in the same sector has risen by only a quarter as much. [14] This trend may be only the beginning. According to a 2013 study by researchers from Oxford University, as general-purpose robotic systems become increasingly capable and more easily programmable, an estimated 47 percent of U.S. jobs may be at high risk of automation. [15] The policy and ethical implications of such a development would be particularly acute since, as the study notes, many of the tasks most likely to be automated correspond to low-skilled jobs that today are disproportionately held by the working poor.
Whether or not autonomous systems end up replacing human labor to that extent, it is clear that in the future humans and robots will increasingly inhabit the same space, both at home and in the workplace. This itself poses significant ethical challenges, particularly with regard to ensuring human safety. Self-driving cars, for example, will be tasked with making rapid decisions in which human lives are at risk – decisions that any human would recognize as having a significant ethical component. Twentieth-century author Isaac Asimov attempted to resolve these issues with his famous “Three Laws of Robotics,” in which robots must place the protection of humans above all other directives. Asimov, however, also recognized the potential that even the most well-intentioned laws could result in outcomes that their creators never envisioned or intended. [16] Such “emergent behavior,” in which complex actions result from multiple robots following individually simple directives, is an active area of robotics research, and shows great promise for solving difficult problems; a toy illustration appears below. Ensuring that this emergent group behavior is ethical, however, may be even more difficult than ensuring the same for an individual robot. [17]
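A small simulation makes the idea concrete. In the sketch below (the rule gains and robot count are arbitrary illustrative assumptions, not drawn from any study cited here), each simulated robot follows three simple local rules: drift toward the group, avoid crowding its neighbors, and match its neighbors’ headings. No rule anywhere says “flock,” yet running the simulation typically shows the group’s spread shrinking markedly as coordinated, flock-like motion emerges.

    # Emergent flocking from three simple per-robot rules (a toy sketch;
    # all gains and counts are illustrative assumptions).
    import random

    N, STEPS = 20, 200

    class Robot:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def spread(robots):
        xs, ys = [r.x for r in robots], [r.y for r in robots]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def step(robots):
        for r in robots:
            others = [o for o in robots if o is not r]
            # Rule 1 (cohesion): drift toward the center of the others.
            cx = sum(o.x for o in others) / len(others)
            cy = sum(o.y for o in others) / len(others)
            r.vx += 0.01 * (cx - r.x)
            r.vy += 0.01 * (cy - r.y)
            # Rule 2 (separation): back away from any robot that is too close.
            for o in others:
                if abs(o.x - r.x) < 2 and abs(o.y - r.y) < 2:
                    r.vx -= 0.05 * (o.x - r.x)
                    r.vy -= 0.05 * (o.y - r.y)
            # Rule 3 (alignment): nudge velocity toward the group average.
            avx = sum(o.vx for o in others) / len(others)
            avy = sum(o.vy for o in others) / len(others)
            r.vx += 0.05 * (avx - r.vx)
            r.vy += 0.05 * (avy - r.vy)
        for r in robots:
            r.x += r.vx
            r.y += r.vy

    robots = [Robot() for _ in range(N)]
    print("initial spread:", round(spread(robots), 1))
    for _ in range(STEPS):
        step(robots)
    print("final spread:  ", round(spread(robots), 1))

No individual rule describes the group’s behavior; the flocking, and whatever harm or benefit follows from it, is a property of the system as a whole – which is precisely what makes its ethics hard to assign to any one machine or designer.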
Looking further into the future, it is possible that artificial intelligence may develop to such an extent that the safety and rights not just of humans, but of the robots themselves may need to be considered. Already, robots are exhibiting behaviors that are sufficiently animal-like that online videos have satirically condemned their testing process in the style of an ASPCA publicity campaign, and parody websites have emerged denouncing such practices as “robot abuse.” [18, 19] That such efforts may someday lose their tongue-in-cheek character is entirely plausible, according to Ryan Calo, a professor of law at the University of Washington. As robots become increasingly human-like in behavior, he suggested in a 2015 interview with California magazine, it may become correspondingly ethically problematic to treat them the way we do other machines. “Nobody cares when you dump your old TV on the street,” Calo said, but “how will they feel when you dump your old robot?” If such robots are ever imbued with significant levels of intelligence, he suggested, it would bring about “a fundamental sea change in the way we think about human rights.” [20]
At present, such far-reaching concerns remain a long way off. The need to deal with the unethical use of autonomous systems, however, is already a growing concern. In July of 2015, a UAV dropped drugs into a prison recreation yard in Ohio, and close calls involving drones intruding on airports and firefighting operations have become increasingly common. [21] So far, in the battle against misbehaving drones, the humans are decisively winning. One British company has developed a “death ray” that disables UAVs from up to a mile away with a concentrated burst of radio energy. [22] In the Netherlands, by contrast, the Dutch National Police are addressing the problem of errant drones in a decidedly low-tech way – by training eagles to seize them in their talons and drag them out of the sky. “These birds are used to meeting resistance from animals they hunt in the wild, and they don't seem to have much trouble with the drones,” claimed the founder of the company training the eagles, when interviewed by Reuters. “The real problem we have,” he went on, “is that they destroy a lot of drones. It's a major cost of testing.” [23]
[1] http://www.robotictechnologyinc.com/images/upload/file/Cyclone%20Power%20Press%20Release%20EATR%20Rumors%20Final%2016%20July%2009.pdf
[2] http://www.robotictechnologyinc.com/images/upload/file/Presentation%20EATR%20Brief%20Overview%206%20April%2009.pdf
[3] http://www.wired.com/2009/07/military-researchers-develop-corpse-eating-robots/
[4] http://gizmodo.com/5311824/darpa-stops-trying-not-to-be-terrifying-funds-chainsaw-wielding-flesh-eating-robot
[5] http://www.theregister.co.uk/2009/07/09/eatr_beta/
[6] https://www.stopkillerrobots.org/
[7] http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf
[8] https://www.hrw.org/news/2013/04/16/us-ban-fully-autonomous-weapons
[9] https://www.hrw.org/news/2013/04/15/review-2012-us-policy-autonomy-weapons-systems
[10] http://io9.gizmodo.com/10-horrifying-technologies-that-should-never-be-allowed-1635238363
[11] http://www.cbsnews.com/news/dozens-of-french-nuclear-plants-buzzed-by-mystery-drones/
[12] https://www.fbi.gov/boston/press-releases/2012/man-sentenced-in-boston-for-plotting-attack-on-pentagon-and-u.s.-capitol-and-attempting-to-provide-detonation-devices-to-terrorists
[13] http://www.tested.com/art/makers/448919-world-maker-faire-2012-project-sentry-gun/
[14] http://fivethirtyeight.com/features/manufacturing-jobs-are-never-coming-back/
[15] http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
[16] Asimov, Isaac. “The Evitable Conflict.” Astounding Science Fiction, June 1950, John W. Campbell, Jr., Ed., pp. 48-63.
[17] Lin, P.; Abney, K.; and Bekey, G., Eds. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press, 2012.
[18] https://www.youtube.com/watch?v=uXcatFp3REg
[19] http://stoprobotabuse.com/
[20] http://alumni.berkeley.edu/california-magazine/just-in/2015-06-08/good-bad-and-robot-experts-are-trying-make-machines-be-moral
[21] http://www.npr.org/sections/alltechconsidered/2015/07/24/425652212/in-the-heat-of-the-moment-drones-are-getting-in-the-way-of-firefighters
[22] https://www.theguardian.com/technology/2015/oct/07/drone-death-ray-device-liteye-auds
[23] http://www.reuters.com/article/us-dutch-police-drones-idUSKCN0VB136