Can Self-Driving Cars Crash Into Each Other?

Earlier this week, two autonomous cars, operated by Google and Audi, had a close call on the streets of Silicon Valley, the first known near collision of its kind.
Jonathan Zhou
6/28/2015
Updated: 7/7/2015

At least there won’t be any road rage if self-driving cars hit each other.

After six years of driving in test lots and on California roads, Google’s self-driving cars have covered 1.8 million miles and racked up exactly a dozen accidents, according to the project’s monthly report for May 2015, the first report to include accident information.
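For scale, here is a quick back-of-the-envelope rate computed from those reported figures alone (a sketch; it makes no claim about the comparable rate for human drivers):

```python
# Accident rate implied by Google's own May 2015 figures:
# 12 accidents over 1.8 million miles of testing.
miles = 1_800_000
accidents = 12

rate_per_million_miles = accidents / (miles / 1_000_000)
print(f"{rate_per_million_miles:.1f} accidents per million miles")  # ~6.7
```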

Due to the rarity of self-driving cars, there had been little to no risk of one bumping into another autonomous vehicle—until now.

Last week, two autonomous cars, operated by Google and Audi, had a close call on the streets of Silicon Valley, according to Reuters, making it the first known near collision of its kind.

Audi maintains that the incident was not a close call at all, and one of the engineers on the Carnegie Mellon research team that powers the Audi vehicle said via email that what happened was simply the way self-driving cars yield to let other cars change lanes.

The incident naturally fosters speculation about a future where machine error could replace human misjudgment as the No. 1 killer on the streets. So just how safe will self-driving cars be?

One answer to the question is tautological: Self-driving navigational systems would likely only be approved for commercial use when—not if—they become as cautious and reliable as the average human driver.

Optimistic studies predict that mass production of self-driving cars will start in a mere 15 years, and that the adoption of the technology could cut down traffic accidents by as much as 90 percent and save hundreds of billions in health care costs in the United States. The savings—both in human life and the cost of dealing with injuries—would be even greater in countries like India and China, where the rate of fatal traffic accidents is more than three times the U.S. rate.

Even if self-driving cars perform less well than the average driver, they could still be used in selective scenarios: subbing for a truck driver who has already worked more than a dozen hours, or for a drunk driver returning home from a night out on the town.

A Bumpy Road Ahead

Still, the challenges of creating a self-driving navigational system that can perform well in all terrains—not just simple ones like highways—are numerous. For a car to drive as well as a human in all respects, it will need a high level of artificial intelligence to interpret certain visual cues, such as a policeman waving at a car to stop or go, that aren’t as simple as detecting an object on a radar map.
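To make that gap concrete, here is a minimal sketch (all names and thresholds are illustrative assumptions, not any real vehicle’s code): detecting an obstacle from radar returns is a geometry problem, while interpreting an officer’s wave requires a trained model that the stub below deliberately leaves out.

```python
# Sketch of the gap described above: radar-style detection yields positions,
# but deciding what a waving police officer *means* requires a separate,
# learned interpretation step. Everything here is an illustrative stub.
from enum import Enum, auto

class Command(Enum):
    STOP = auto()
    GO = auto()
    UNKNOWN = auto()

def detect_obstacle(radar_points: list[tuple[float, float]]) -> bool:
    """Geometry is easy: anything within 5 meters ahead counts as an obstacle."""
    return any(x**2 + y**2 < 25.0 for x, y in radar_points)

def interpret_gesture(camera_frame) -> Command:
    """Semantics are hard: a real system needs a trained vision model here.
    This stub stands in for that missing intelligence."""
    return Command.UNKNOWN  # placeholder: no model available

print(detect_obstacle([(2.0, 1.0)]))         # True: the solved part
print(interpret_gesture(camera_frame=None))  # Command.UNKNOWN: the open problem
```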

Then there are the basic problems that come with software. Older personal computers can experience frequent breakdowns, and having your call dropped while talking on a cellphone is a universal experience in the 21st century. Self-driving software systems could also be hijacked by hackers.

“The engineering challenges to development of software-based systems that can operate safely in the highly unpredictable traffic environment are formidable,” said Steven Shladover, a research engineer at UC Berkeley, via email.

In absolute terms, Shladover says, car accidents, especially fatal ones, are rare events: Car-related injuries occur once every 65,000 hours of driving, and fatal injuries once every 3 million hours. He expects fully autonomous cars to remain decades away from mass production, at the least.

“Compare these large numbers of hours between serious errors with the frequency of malfunctions on laptop computers or dropped calls on mobile phones to get a sense of how much more work needs to be done,” Shladover said.
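Shladover’s comparison can be made concrete with a little arithmetic. In the sketch below, the human-driver figures come from his numbers above, while the software mean time between failures is a purely illustrative assumption, not a measured value:

```python
# Back-of-the-envelope comparison of Shladover's human-driver failure rates
# with a hypothetical software mean time between failures (MTBF).
HOURS_PER_INJURY = 65_000       # human driving: one injury crash per 65,000 hours
HOURS_PER_FATALITY = 3_000_000  # human driving: one fatal crash per 3 million hours

ASSUMED_SOFTWARE_MTBF_HOURS = 200  # hypothetical: one serious fault per 200 hours

def failures_per_million_hours(mtbf_hours: float) -> float:
    """Convert a mean time between failures into a rate per million hours."""
    return 1_000_000 / mtbf_hours

print(f"Human injury crashes per million hours:    {failures_per_million_hours(HOURS_PER_INJURY):.1f}")
print(f"Human fatal crashes per million hours:     {failures_per_million_hours(HOURS_PER_FATALITY):.2f}")
print(f"Assumed software faults per million hours: {failures_per_million_hours(ASSUMED_SOFTWARE_MTBF_HOURS):,.0f}")
# A self-driving system's serious-fault rate would have to be pushed below
# the human figures above to meet Shladover's benchmark.
```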

On the other hand, Google boasts that over 1.8 million test miles, all 12 accidents sustained by its autonomous vehicles have been fender benders, with no injuries, and all caused by the other car—driven, in every case, by a human. But then again, Google is testing its vehicles in ideal settings, avoiding places with snow, for example, which would interfere with a vehicle’s sensors.

Many of the difficulties that self-driving cars face could disappear once there are enough other self-driving cars on the road: They could pool data on a common network and communicate with each other better than they could with human drivers. For example, barring a device malfunction, a self-driving car making a lane change would never forget to alert nearby cars of its intentions.
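A toy sketch of that kind of broadcast is below. The message fields and the Network class are illustrative assumptions; real vehicle-to-vehicle stacks, such as DSRC basic safety messages, are far richer:

```python
# Minimal sketch of the vehicle-to-vehicle broadcast described above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LaneChangeIntent:
    vehicle_id: str
    from_lane: int
    to_lane: int
    eta_seconds: float  # when the maneuver will begin

class Network:
    """Toy shared channel: every subscribed car hears every broadcast."""
    def __init__(self):
        self.subscribers: list[Callable[[LaneChangeIntent], None]] = []

    def subscribe(self, handler: Callable[[LaneChangeIntent], None]) -> None:
        self.subscribers.append(handler)

    def broadcast(self, msg: LaneChangeIntent) -> None:
        for handler in self.subscribers:
            handler(msg)

net = Network()
net.subscribe(lambda m: print(f"car B: yielding, {m.vehicle_id} moving to lane {m.to_lane}"))

# Car A announces its maneuver before changing lanes; barring a malfunction,
# this alert is never "forgotten" the way a human might skip a turn signal.
net.broadcast(LaneChangeIntent("car-A", from_lane=1, to_lane=2, eta_seconds=2.0))
```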

However safe self-driving cars may become, their behavior could still be manipulated by humans with bad intentions. If the driving patterns of autonomous vehicles are customizable, their owners could make them drive more aggressively than is safe.

“Autonomous vehicles could be (re-)programmed by their owners to make them behave more aggressively (merging into traffic at unsafe speed or with inadequate spacing) and sensors are prone to jamming (effectively blinding the vehicle),” commented Ryan Gerdes, a researcher at Utah State University who studies autonomous navigational systems, via email. “Each of these situations could result in accidents.”
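One way to read Gerdes’s first concern is as a parameter-clamping problem: if owners can tune driving parameters, the controller has to bound them to a vetted range. The sketch below is purely illustrative; every name and limit in it is a hypothetical assumption:

```python
# Illustrative sketch of the reprogramming risk Gerdes describes: if driving
# parameters are user-adjustable, safety depends on clamping them to a
# vetted range. All names and limits here are hypothetical.
MIN_FOLLOWING_GAP_S = 1.5   # hypothetical regulator-vetted minimum gap (seconds)
MAX_MERGE_SPEED_MPS = 25.0  # hypothetical maximum merge speed (meters/second)

class DrivingProfile:
    def __init__(self, following_gap_s: float, merge_speed_mps: float):
        # Clamp owner-supplied settings so an "aggressive" profile cannot
        # push the vehicle below safe spacing or above safe merge speed.
        self.following_gap_s = max(following_gap_s, MIN_FOLLOWING_GAP_S)
        self.merge_speed_mps = min(merge_speed_mps, MAX_MERGE_SPEED_MPS)

# An owner requests an unsafe profile; the clamp restores safe bounds.
aggressive = DrivingProfile(following_gap_s=0.3, merge_speed_mps=40.0)
print(aggressive.following_gap_s, aggressive.merge_speed_mps)  # 1.5 25.0
```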