
Science: Public's Moral Inconsistencies Create Dilemma for Programming Driverless Cars

Should self-driving cars be programmed to save more pedestrians, or more passengers? | Science/AAAS

People generally approve of driverless, or autonomous, cars programmed to sacrifice their passengers in order to save pedestrians, a new study published in the 23 June issue of Science reveals. Yet these same people are far less enthusiastic about riding in such autonomous vehicles (AVs) themselves.

In six online surveys of U.S. residents conducted between June and November 2015, researchers asked participants how they would want their AVs to behave. The scenarios varied in the number of pedestrian and passenger lives that could be saved, among other factors. For example, participants were asked whether it would be more moral for an AV to sacrifice one passenger than to kill 10 pedestrians.

Survey participants said that AVs should be programmed to be utilitarian and to minimize harm to pedestrians, a position that puts the safety of those outside the vehicle ahead of the driver's and passengers' own. The same respondents, however, said they would prefer to buy cars that protect them and their passengers, especially when family members are involved.

This suggests that if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in the latter — even though they would prefer others to do so.

The inconsistency, which illustrates an inherent ethical tension between the good of the individual and that of the public, persisted across the wide range of scenarios analyzed, according to the paper's authors.

"Our results highlight a real disconnect between people's moral preferences and their consumer preferences when it comes to autonomous vehicles," said lead author Jean-François Bonnefon, a psychological scientist at the Toulouse School of Economics. "What's also surprising is that the typical solution for such a case, central regulation, may actually do more harm than good here, according to our survey results, significantly delaying the adoption of autonomous cars."

The survey-driven insights of Bonnefon and colleagues highlight just how difficult it will be to make underlying programming decisions for autonomous cars — something that should be done well before the cars become a global commodity, they say.

Autonomous vehicles can sense their environment and navigate without human involvement. Though they might have sounded like science fiction a few years ago, they are fast becoming a reality: they are already being tested in several U.S. states, for example.

While AVs have the potential to eliminate up to 90% of traffic accidents, not all crashes will be avoided, and some crash scenarios will require underlying AV programming to make difficult ethical decisions. This raises important questions, including: How should these vehicles be programmed, and who should decide on the programming?
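The trade-off at the heart of these programming decisions can be made concrete with a toy model. The sketch below is purely illustrative: the names (CrashOption, choose_action, the policy labels) are hypothetical, and nothing here reflects how any real AV, or the study itself, implements such decisions. It simply contrasts a utilitarian policy, which minimizes total expected deaths, with a self-protective one, which minimizes passenger deaths first.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver in an unavoidable-crash scenario (hypothetical)."""
    name: str
    expected_passenger_deaths: float
    expected_pedestrian_deaths: float

    @property
    def total_deaths(self) -> float:
        return self.expected_passenger_deaths + self.expected_pedestrian_deaths

def choose_action(options: list[CrashOption], policy: str) -> CrashOption:
    """Pick a maneuver under a 'utilitarian' or 'self-protective' policy."""
    if policy == "utilitarian":
        # Minimize total expected deaths, regardless of who dies.
        return min(options, key=lambda o: o.total_deaths)
    if policy == "self-protective":
        # Minimize passenger deaths first; break ties on total deaths.
        return min(options, key=lambda o: (o.expected_passenger_deaths, o.total_deaths))
    raise ValueError(f"unknown policy: {policy}")

# The dilemma posed in the surveys: sacrifice one passenger, or kill 10 pedestrians.
options = [
    CrashOption("swerve into barrier", expected_passenger_deaths=1, expected_pedestrian_deaths=0),
    CrashOption("stay on course", expected_passenger_deaths=0, expected_pedestrian_deaths=10),
]

print(choose_action(options, "utilitarian").name)      # swerve into barrier
print(choose_action(options, "self-protective").name)  # stay on course
```

The two policies diverge in exactly the kind of scenario the surveys probed: the utilitarian car sacrifices its passenger, while the self-protective car does not, which is why respondents' moral and consumer preferences pull in opposite directions.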

"We believe knowing how people want cars to be programmed should be an important input into the legal discussion in this space," said study co-author Iyad Rahwan, the AT&T Career Development Professor at the MIT Media Lab.

Regulation would typically be the remedy in such a case. The authors' survey results suggest, however, that it could be counterproductive here, substantially delaying AV adoption.

"The fact that people would be unwilling to purchase cars regulated to enforce the utilitarian behaviors they want to see in cars around them is surprising since regulation has always been the solution to social dilemmas," said Rahwan. "Understanding the reasons why people are uncomfortable with regulation would be an important next step."

In a related Perspective, Joshua D. Greene, a professor in the department of psychology at Harvard University, highlights additional challenges around programming driverless cars. Manufacturers of utilitarian vehicles will be criticized for their willingness to kill their own passengers, he notes, while manufacturers of self-protective cars "will be criticized for devaluing the lives of others."

A shift to more autonomous vehicles on our roads would also have implications for who, or what, is liable for related accidents.

"This will ultimately depend on what legislation emerges for AVs," explained co-author Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, "but if it is the case that people still have a choice over the behavior of the car — for example, a self-protective versus utilitarian car — then they may retain liability for those decisions. If someone chooses a self-protective car, for instance, will the insurers hold the driver responsible for any collisions that emerged from its self-protective algorithm?"

Though determining just how to build ethical autonomous machines remains "one of the thorniest challenges in artificial intelligence today," according to Bonnefon and colleagues, they said their data-driven survey approach highlights the way the field of experimental ethics can provide key insights as more and more autonomous cars hit the road.

And even though the path to programming AVs faces numerous hurdles, Shariff contends the benefits of AVs are so broad that their widespread adoption is inevitable.

"I think that we will ultimately manage to transcend the social dilemma," he said, "probably through people becoming more comfortable with regulation. It may take longer than many people think, but that just gives the public more time to overcome the psychological barriers by gradually becoming more accustomed to the intermediary steps as cars reach full autonomy."

[Credit for associated image: Wikimedia Commons]