The Next Prey for Hackers: Your Car

Out of 20 major car manufacturers, only two said they had technology that could detect a system infiltration in real time.
Jonathan Zhou
2/17/2015
Updated: 2/22/2015

On Monday, Sen. Ed Markey (D-Mass.) released a report attacking the automotive industry for not taking adequate security precautions to protect cars from hackers. It detailed a litany of vulnerabilities in modern vehicles, most of which are equipped with wireless entry points, and the safety measures they lack.

Out of 20 major car manufacturers, only two said they had technology that could detect a system infiltration in real time, the report found, and only one company was able to produce records of past hacking incidents.

Markey’s call for more automotive regulation coincided with a report on CBS’s “60 Minutes” showing employees from the Defense Advanced Research Projects Agency hacking into a Chevrolet through GM’s OnStar system, allowing them to control the car’s windshield wipers and brakes, and even honk the horn.

That cars integrated with wireless technology are vulnerable to remote hacking is an old discovery. Researchers from the University of Washington had performed such a feat as early as 2010.

At the moment, the vulnerabilities pose little risk to drivers because the incentives for hacking aren’t there. Most major hacks by private actors target companies with valuable financial information or trade secrets, and targeting individual vehicles seems too trivial for rogue actors looking to commit terrorism.

Markey’s report offers little evidence to the contrary. Twelve car companies said that they had no records of hacking incidents at all, and the single incident one company did report involved a third-party application that turned out to be non-malicious.

The Algorithm Hack

But new hazards are emerging in the industry. Autonomous, or self-driving, cars create a host of new vulnerabilities and are inherently more fragile than analog vehicles. Security countermeasures are in development, but there’s no guaranteed path to safety.

California’s Department of Motor Vehicles formally introduced regulations for the testing of self-driving cars in 2014, and seven companies have already received permits to test vehicles on any roads in the state, including Google, Tesla, and Nissan, which has promised to make self-driving cars commercially available as early as 2020.

The hurdles for car companies are numerous. Google’s prototype relies on a LIDAR system, which builds a 3-D map of the world for navigation from reflected laser light, supplemented by radar. It currently cannot function in poor weather, such as rain, snow, or simply too much dust. The system is also vulnerable to manipulation by hackers, who could wreak havoc on a massive scale once autonomous cars are widely in use.
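To illustrate the basic mechanism, here is a minimal sketch of the geometry a LIDAR map is built from: each reflected laser pulse yields a range along a known beam direction, which converts to a 3-D point. The function name and sample values are hypothetical, for illustration only.

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    # One laser return: a measured distance along a known beam direction.
    # Converting it to Cartesian coordinates gives one point of the 3-D map.
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A full 360-degree sweep of such returns forms the point cloud the car
# navigates by. Rain, snow, or dust scatters the pulses and corrupts the map.
sweep = [lidar_return_to_point(12.5, az, -2.0) for az in range(0, 360, 45)]
print(sweep[0])
```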

The Google self-driving car maneuvers through the streets of Washington on May 14, 2012. (Karen Bleier/AFP/Getty Images)

Public research on security measures has already begun. In August, Ryan Gerdes of Utah State University received a $1.2 million grant from the National Science Foundation to research security for autonomous vehicles; he has used BattleBot-style remote-control robots to simulate the accidents self-driving cars could have if their LIDAR systems were hacked.

The algorithms currently used in self-driving cars are deterministic: given the same sensory inputs, they generate the same outputs in a predictable way. That predictability makes manipulation easy for anyone who knows the algorithm and has successfully hacked into the system.

For instance, the radar readings can be altered to make the vehicle appear farther from or closer to an object than it really is, causing collisions.
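To make that concrete, here is a minimal sketch of a hypothetical deterministic following rule (the function braking_command and its assumed 1.5-second time gap are inventions for illustration, not any manufacturer’s actual control logic), showing how a fixed bias injected into the reported radar distance flips the output the same way every time.

```python
def braking_command(reported_gap_m, speed_mps, time_gap_s=1.5):
    # Hypothetical deterministic following rule: brake whenever the reported
    # gap falls below the distance covered in the desired time gap. The same
    # inputs always yield the same output -- that is the predictability an
    # attacker can exploit.
    desired_gap_m = speed_mps * time_gap_s
    return reported_gap_m < desired_gap_m  # True => apply brakes

true_gap = 20.0   # metres to the car ahead
speed = 25.0      # m/s (~56 mph); desired gap is 37.5 m

print(braking_command(true_gap, speed))               # True: brakes correctly

# An attacker who knows the rule and can bias the radar return makes the
# object appear farther away, so the rule reliably decides not to brake.
spoof_bias = 25.0
print(braking_command(true_gap + spoof_bias, speed))  # False: no braking
```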

“If you have a string of automated vehicles that are cooperating and they form a platoon, the algorithms used to control them are very brittle,” Gerdes said. “If one person is misbehaving and changes their system in just the right way, it can cause accidents that are much more severe than just slamming on the brakes.”

In some cases, hacking isn’t even needed to produce accidents or simple congestion. Self-driving cars would have spacing requirements between vehicles, and creative accelerating and braking in one vehicle can be amplified from car to car downstream, creating a massive traffic jam.
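A minimal simulation sketches that amplification effect, under an assumed follower model in which each car overreacts to the car ahead by a fixed gain; the model and its numbers are hypothetical, not drawn from any real controller.

```python
def propagate_brake(num_cars=6, initial_slowdown_mps=2.0, gain=1.3):
    # Assumed follower model: each car sheds `gain` times as much speed as
    # the car ahead did. Any gain above 1 makes the platoon "string
    # unstable": a small braking event grows as it moves down the line.
    slowdowns = [initial_slowdown_mps]
    for _ in range(num_cars - 1):
        slowdowns.append(slowdowns[-1] * gain)
    return slowdowns

for i, drop in enumerate(propagate_brake()):
    print(f"car {i}: slows by {drop:.1f} m/s")
# The lead car sheds 2.0 m/s; the sixth car sheds about 7.4 m/s.
```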

Many of the benefits from self-driving cars, such as increasing the density of vehicles on the road without adding to congestion, only enlarge the scope of the security risk.

“You can’t have a system of fully autonomous vehicles that don’t share information with each other,” Gerdes said. “The autonomous vehicles need to know how others will act.”

There are ideas to counteract these risks, but for the moment the problems remain unsolved.

“You can do certain things like design randomness into the radar to make it difficult [to manipulate],” or design a system that detects infiltration, Gerdes said, but “part of the problem is with lots of vehicles there’s a lot of noise; it’s difficult to distinguish an attacker from a good-person radar.”
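As a rough illustration of the “design randomness into the radar” idea, here is a sketch of randomized pulse timing used to flag returns forged against the nominal schedule. The pulse period, jitter range, tolerance, and function names are all assumptions made for illustration, not a real radar design.

```python
import random

NOMINAL_PERIOD_US = 100.0  # assumed nominal time between radar pulses

def transmit_times(n, max_jitter_us=5.0):
    # Randomness designed into the radar: each pulse is delayed by a secret
    # random jitter that only the transmitting vehicle knows.
    return [i * NOMINAL_PERIOD_US + random.uniform(0.0, max_jitter_us)
            for i in range(n)]

def echo_consistent(tx_us, echo_us, round_trip_us, tol_us=0.5):
    # A genuine reflection arrives round_trip_us after OUR jittered pulse.
    return abs((echo_us - tx_us) - round_trip_us) < tol_us

txs = transmit_times(3)
round_trip = 2.0  # microseconds, i.e. a target roughly 300 m away

genuine = [t + round_trip for t in txs]
# A spoofer who only knows the published, nominal schedule:
spoofed = [i * NOMINAL_PERIOD_US + round_trip for i in range(3)]

print([echo_consistent(t, e, round_trip) for t, e in zip(txs, genuine)])  # all True
print([echo_consistent(t, e, round_trip) for t, e in zip(txs, spoofed)])  # usually False
```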

Still, Gerdes remains optimistic about the future of self-driving cars, and sees engineers at every automotive conference developing ideas to address these security issues.

“We’re so far from a general autonomous vehicle that will take us anywhere. I’m not worried about it,” he said.

Regulators Disadvantaged

Regulators, however, are worried about the development of self-driving cars, the wide use of which would create situations that the existing rules are simply not equipped to handle. The burden is especially heavy on California’s DMV, which has been forced to quickly spearhead reform in the absence of federal regulations on autonomous cars.

Bernard Soriano, Deputy Director of Risk Management at the state’s DMV, said that his office has been contacted by countries all over the world interested in how the new regulatory regime is developing and how it handles questions like which parties are liable in accidents.

A general view of the driverless 'Lutz Pathfinder' pod vehicle on February 11, 2015 in London, England. (Dan Kitwood/Getty Images)

“The whole liability and insurance issue is a monumental one. There are other agencies that are specifically involved in that area. It is a new field, it is groundbreaking, no one has the answers; we’re just developing the regulation to allow it,” Soriano said.

The DMV has been keen to err on the side of caution, forcing Google to install a steering wheel and brake pedal in its prototype so that a driver could manually stop the vehicle in case of a malfunction during testing. It has also forbidden testing with motorcycles or vehicles over 10,000 pounds.

Soriano said that the companies doing testing meet regularly with the DMV to share information, and that the DMV requires them to deliver full reports on instances of “disengagement,” when the automated system fails and manual control is needed.

Currently, the DMV lacks the technical expertise to directly evaluate the evolving technology of autonomous vehicles, and must rely on proxies such as disengagement counts and consultation with experts from Stanford and UC Berkeley.

Asked if the DMV might one day develop its own technical staff, Soriano replied that “the possibility exists.”

Correction: A previous version of this article stated that Bernard C. Soriano was the chief information officer of the California DMV. He is in fact the Deputy Director of Risk Management.