NEW YORK—The deadly attacks in Paris may soon reopen the debate over whether—and how—tech companies should let the government sidestep the data scrambling that shields everyday commerce and daily digital life alike.
So far, there’s no hard evidence that the Paris extremists relied on encrypted communications—essentially, encoded digital messages that can’t be read without the proper digital “keys”—to plan the shooting and bombing attacks that left 129 dead on Friday. But it wouldn’t be much of a surprise if they did.
So-called end-to-end encryption technology is now widely used in many standard message systems, including Apple’s iMessage and Facebook’s WhatsApp. Similar technology also shields the contents of smartphones running the latest versions of Apple and Google operating software. Strong encryption is used to protect everything from corporate secrets to the credit-card numbers of online shoppers to intimate photos and secrets shared by lovers.
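The core idea behind all of these systems is simple: a message is mathematically scrambled so that only someone holding the right secret key can unscramble it. A toy sketch of that idea, using a one-time-pad-style XOR in Python (a deliberately simplified stand-in, not the actual cipher any of these products use):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte.
    # Applying the same key twice restores the original message.
    return bytes(b ^ k for b, k in zip(data, key))

key = secrets.token_bytes(32)            # the secret "digital key"
message = b"meet at the usual place"
ciphertext = xor_cipher(message, key)    # scrambled, unreadable as-is

recovered = xor_cipher(ciphertext, key)  # only the key holder can do this
```

In end-to-end systems like iMessage and WhatsApp, the keys live only on users' devices, which is precisely why the companies themselves cannot hand over readable messages.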
That widespread use of strong encryption, once largely confined to more powerful desktop and server computers, is exactly what worries members of the intelligence and law enforcement communities. Some are now using the occasion of the Paris attacks to once again argue for restrictions on encryption, saying it hampers their ability to track and disrupt plots like this one.
“I now think we’re going to have another public debate about encryption, and whether government should have the keys, and I think the result may be different this time as a result of what’s happened in Paris,” former CIA deputy director Michael Morell said Monday on CBS This Morning.
The last such debate followed the 2013 disclosures of government surveillance by former National Security Agency contractor Edward Snowden. Since then, tech companies seeking to reassure their users and protect their profits have adopted more sophisticated encryption techniques despite government opposition. Documents leaked by Snowden also revealed NSA efforts to break encryption technologies.
In response, law-enforcement and intelligence officials have argued that companies like Apple and Google should build “backdoors” into their encryption systems that would let investigators into otherwise locked-up devices during investigations. The Obama administration continues to encourage tech companies to include such backdoors, although it says it won’t ask Congress for legislation requiring them.
The trouble with that approach, as Apple CEO Tim Cook said in an interview last week with The Daily Telegraph, is that “any backdoor is a backdoor for everyone.” In short, any shortcut built for investigators could also be exploited by cybercriminals eager to hack major corporations—à la the devastating cyberattack on Sony late last year—or to prey on individuals through identity theft or extortion, as reportedly occurred following the disclosure of records from the infidelity dating site Ashley Madison.
A report on the state of digital security released this summer by MIT’s Computer Science and Artificial Intelligence Laboratory argued against opening a peephole for government agencies. The analysis by 15 professors, security experts and researchers likened the idea of giving the government a digital key to encrypted communications to leaving a house key under the front doormat.
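The doormat analogy can be made concrete. In a key-escrow scheme, every message is also encrypted to a single master key held for investigators; steal that one key and every message falls. A toy sketch of the failure mode, again using a simplified XOR cipher purely for illustration (the keys, messages and vault structure here are hypothetical):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with the key; same operation decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

escrow_key = secrets.token_bytes(64)  # the single "key under the doormat"

messages = [b"order #4521 shipped", b"wire $900 to acct", b"see you at 8"]
vault = []
for text in messages:
    user_key = secrets.token_bytes(64)  # each user has their own key...
    vault.append({
        "for_recipient": xor_cipher(text, user_key),
        "for_escrow": xor_cipher(text, escrow_key),  # ...plus a backdoor copy
    })

# An attacker who obtains only escrow_key can read every message at once.
leaked = [xor_cipher(item["for_escrow"], escrow_key) for item in vault]
```

That single point of failure, the MIT group argued, is exactly what makes mandated access riskier than it sounds: the backdoor key becomes the most valuable target on the internet.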
“The Snowden revelation showed that backdoors can be destructive, particularly when they’re done in secrecy without transparency,” says Will Ackerly, a former NSA security researcher and the co-founder of Virtru, which provides encryption technology for both companies and individual people.
Steven Bellovin, a Columbia University professor and computer security researcher, said he wasn’t surprised by the effort to revive discussion of encryption backdoors. But he said it is far too early to tie it to the Paris attacks.
“We don’t know how these people were communicating and with whom,” he said. “If they were communicating with homegrown software and there’s some indications of that, then a mandatory back door is not going to do any good.”