New Technology Could Threaten Human Survival, Says Stephen Hawking

Theoretical physicist Stephen Hawking poses for a picture ahead of a gala screening of the documentary "Hawking," a film about the scientist's life, at the opening night of the Cambridge Film Festival in Cambridge, England, on Sept. 19, 2013. Andrew Cowie/AFP/Getty Images
Jonathan Zhou

Physicist Stephen Hawking has warned that new technologies will likely bring about “new ways things can go wrong” for human survival.

When asked how the world will end—“naturally” or whether man would destroy it first—Hawking said that increasingly, most of the threats humanity faces come from progress made in science and technology. They include nuclear war, catastrophic global warming, and genetically engineered viruses, he said.

Hawking made the comments while recording the BBC’s annual Reith Lectures on Jan. 7. His lecture, on the nature of black holes, was split into two parts and will be broadcast on radio on Jan. 26 and Feb. 2.

The University of Cambridge professor said that a disaster on Earth—a “near certainty” in the next 1,000 to 10,000 years—will not spell the end of humanity because by that time humans are likely to have spread out into space and to other stars.


“However, we will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” he joked, provoking laughter from the audience.

“We are not going to stop making progress, or reverse it, so we have to recognize the dangers and control them. I’m an optimist, and I believe we can,” he added.

One of Hawking’s recurring causes is encouraging the development of beneficial artificial intelligence (AI). In July, Hawking signed a letter along with Elon Musk and Steve Wozniak calling for a ban on autonomous weapons, and in October he conducted a Q&A on Reddit devoted solely to the question of AI risk.

“We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence,” Hawking wrote. “It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.”

The worry, Hawking said, wasn’t that an evil scientist would manufacture an immoral machine in a lab, but that an AI designed for normal purposes might malfunction and end up harming humanity indirectly.

“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble,” Hawking wrote. “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

The Associated Press contributed to this report.

Jonathan Zhou is a tech reporter who has written about drones, artificial intelligence, and space exploration.