Facing a moral dilemma over driverless cars


Driverless cars pose a quandary when it comes to safety. These autonomous vehicles are programmed with a set of safety rules, and it is not hard to construct a scenario in which those rules come into conflict with each other. Suppose a driverless car must either hit a pedestrian or swerve in such a way that it crashes and harms its passengers. What should it be programmed to do?

An article in Science this week shows that people give contradictory responses to scenarios like these. Researchers found that people generally take a utilitarian approach to safety ethics: They would prefer autonomous vehicles to minimize casualties in situations of extreme danger. That would mean, say, having a car with one rider swerve off the road and crash to avoid a crowd of 10 pedestrians. At the same time, they would be much less likely to use a vehicle programmed that way.

Essentially, people want driverless cars that are as pedestrian-friendly as possible -- except for the vehicles they would be riding in. "Most people want to live in a world where cars will minimize casualties," says Iyad Rahwan, an associate professor in the MIT Media Lab. "But everybody wants their own car to protect them at all costs."

People were strongly opposed to the idea of the government regulating driverless cars to ensure they would be programmed with utilitarian principles. Respondents were only one-third as likely to purchase a vehicle regulated this way as an unregulated vehicle, which could presumably be programmed in any fashion.

"This is a challenge that should be on the mind of carmakers and regulators alike," the researchers said. Moreover, if autonomous vehicles actually turned out to be safer than regular cars, unease over the dilemmas of regulation "may paradoxically increase casualties by postponing the adoption of a safer technology."

The result is what the researchers call a "social dilemma," in which people could end up making conditions less safe for everyone by acting in their own self-interest: "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."


This article is published by Michael Cook and BioEdge under a Creative Commons licence.
