Driverless Cars: Automakers, Government Regulators Face Ethical Dilemma

A row of Google self-driving Lexus cars is parked at a Google event in Mountain View, Calif., on May 13, 2014. (AP Photo/Eric Risberg)
The Canadian Press
4/6/2016

OTTAWA—Questions about how so-called driverless cars should respond to impending collisions and other ethical dilemmas need to be answered by governments, because automakers lack the expertise, says a Canadian expert on the ethics of engineering.

Last month’s federal budget included $7.3 million over two years to improve motor vehicle safety, with part of that going toward developing regulations for the automated vehicles that major automakers and technology firms are racing to bring to market.

Advocates for the country’s high-tech and automotive sectors have urged Ottawa to tread lightly as it moves to create new rules for the autonomous vehicle industry.

But regulators need to come to grips with the complicated ethical and political questions that will emerge as the vehicles start rolling onto roadways in large numbers, says Jason Millar, an engineer who teaches philosophy at Carleton University.

“Ethical decision-making is not an engineering problem,” Millar said in an interview.

While many of the questions remain theoretical, they became more real in February when an autonomous car being tested by Google in California was partly blamed for an accident with a bus.

There were no injuries in what the company characterized as a minor fender-bender. But academics and engineering experts cite the incident as evidence that broader ethical issues are bound to arise, with more serious potential consequences.

In a statement to California’s Department of Motor Vehicles, Google acknowledged the role its adaptive programming played in the Feb. 14 accident and said it had fixed the problem.

“But government cannot assume that Google is an expert in solving ethical problems,” said Millar.

Should an autonomous vehicle, for example, be programmed to protect the vehicle’s occupants when swerving to avoid an accident, or to protect a nearby pedestrian? And who is responsible for an accident if a computer is controlling the vehicle when it happens?
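To see why such choices are policy questions rather than purely engineering ones, consider a minimal, purely hypothetical sketch of how a planner might rank evasive maneuvers by a hand-tuned harm score. Nothing here reflects any manufacturer’s actual logic; the maneuver names, risk numbers, and weights are illustrative assumptions, and choosing those weights is exactly the ethical decision Millar says automakers are not equipped to make.

```python
# Hypothetical sketch only: one naive way a collision-avoidance planner
# could rank candidate maneuvers by a weighted harm score. The risk
# estimates and weights below are made up for illustration; setting the
# weights encodes an ethical policy, not an engineering fact.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    occupant_risk: float    # assumed probability of harm to occupants (0-1)
    pedestrian_risk: float  # assumed probability of harm to pedestrians (0-1)


def harm_score(m: Maneuver,
               occupant_weight: float = 1.0,
               pedestrian_weight: float = 1.0) -> float:
    # Weighting pedestrian harm above occupant harm (or the reverse)
    # is the moral choice the article describes.
    return (occupant_weight * m.occupant_risk
            + pedestrian_weight * m.pedestrian_risk)


options = [
    Maneuver("brake_straight", occupant_risk=0.3, pedestrian_risk=0.1),
    Maneuver("swerve_left", occupant_risk=0.1, pedestrian_risk=0.4),
]

# With equal weights the planner brakes straight; raising occupant_weight
# enough would flip the decision toward swerving.
best = min(options, key=harm_score)
print(f"Chosen maneuver: {best.name}")
```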

One option being looked at in some jurisdictions, including California, is to cede control back to the human driver during treacherous conditions, so a machine is not required to make moral decisions—a notion that’s been roundly criticized by engineers, said Millar.

“That is just broadly considered a bad design choice,” he said.

“When you have a driver that is not paying attention to the road and you suddenly try to ask them to pay attention, things go horribly wrong.”

The option is also strongly denounced by groups including Mothers Against Drunk Driving, which have advocated for the wide use of autonomous vehicle technology to reduce impaired driving.

“That doesn’t work for impaired driving,” said MADD Canada CEO Andrew Murie.

“If you’re really impaired, and that’s the reason you’re using that (autonomous) vehicle to get home, you don’t want all of a sudden to have a situation where they have to take over (the vehicle).”

Industry observers say they expect Google, Mercedes, Volkswagen, and other major players to press ahead with driverless cars without waiting for ethical issues to be fully resolved. Automakers have already done so with some automated safety features, such as crash-avoidance technology and self-parking mechanisms.

From The Canadian Press