Driverless cars pose a “social dilemma” when it comes to how the vehicle is programmed to uphold safety, according to a new study.
One of the premises behind driverless vehicles is that after being programmed with a set of safety rules, they can dramatically reduce the number of traffic accidents. The problem is, these safety rules can conflict with each other.
One key example is posed in the study: when a car is on course to crash and kill several pedestrians, should the car kill the pedestrians, or swerve and kill the passenger?
The study primarily examined this question of whether the self-sacrifice of passengers should be employed to save the lives of others, including:
- the perceived morality of this self-sacrifice
- the willingness to see this self-sacrifice being legally enforced
- the expectations that AVs [autonomous vehicles] will be programmed to self-sacrifice, and
- the willingness to buy self-sacrificing AVs
The survey showed that people prefer for the vehicle to minimize casualties. So in a scenario involving a group of 10 or so pedestrians, the car would swerve off the road and kill the passenger rather than hit the pedestrians.
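The "minimize casualties" preference amounts to a simple utilitarian decision rule: among the available actions, take the one expected to kill the fewest people. As a minimal illustrative sketch (entirely hypothetical, not code from the study):

```python
# Hypothetical sketch of a utilitarian controller: pick the action
# with the fewest expected casualties. Action names and counts are
# illustrative only.

def choose_action(outcomes: dict) -> str:
    """Return the action whose outcome has the fewest casualties.

    `outcomes` maps an action name to the expected number of
    casualties if that action is taken.
    """
    return min(outcomes, key=outcomes.get)

# The article's scenario: stay on course (10 pedestrians die)
# or swerve (1 passenger dies).
print(choose_action({"stay_course": 10, "swerve": 1}))  # → swerve
```

The dilemma the study identifies is not in writing such a rule, but in whether buyers will accept a car whose `swerve` branch sacrifices them.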
But, in a direct contradiction, the respondents also said they would be reluctant to buy or use an autonomous car programmed to put the interests of pedestrians over those of the passenger.
“Most people want to live in a world where cars will minimize casualties,” says Iyad Rahwan, an MIT professor who co-authored the study. “But everybody wants their own car to protect them at all costs.”
The survey also showed that 76 percent of respondents believe it is moral for an autonomous vehicle to sacrifice the life of a single passenger, or the lives of a small number of passengers, over the lives of 10 pedestrians. But in similarly contradictory fashion, the rating dropped by a third when respondents considered the possibility of riding in such a car.
The results also showed that people were strongly opposed to governmental regulation of driverless cars to make sure such utilitarian principles were enforced in the car’s programming. Respondents were two-thirds less likely to purchase a vehicle regulated in this way.
“This is a challenge that should be on the mind of carmakers and regulators alike,” the scholars wrote. Moreover, if autonomous vehicles actually turned out to be safer than regular cars, unease over the dilemmas of regulation “may paradoxically increase casualties by postponing the adoption of a safer technology.”
The study concludes that people seem willing to embrace autonomous vehicles programmed to make utilitarian moral decisions. But though they wish for that to be the norm for other drivers, they are much less likely to buy such vehicles themselves.
“What we observe here is the classic signature of a social dilemma: People mostly agree on what should be done for the greater good of everyone, but it is in everybody’s self-interest not to do it themselves.”
“For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest,” the scholars said.
The authors note that the research is at an early stage, and the findings could change as the landscape of driverless cars evolves.
The paper, “The social dilemma of autonomous vehicles,” was published June 24 in the journal Science.