Stephen Hawking: We Need to Start Working on AI Safety Right Now

An autonomous military weapons robot performs demonstrations for spectators at the Memorial Service on the Intrepid on May 28, 2012. (Benjamin Chasteen/Epoch Times)
Jonathan Zhou
10/8/2015
Updated: 10/12/2015

Stephen Hawking continues his crusade for artificial intelligence (AI) safety in a long-awaited Ask Me Anything thread on Reddit, stating that the groundwork for such protocols needs to start not some time in the distant future, but right now.

“We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence,” the famed physicist wrote. “It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.”

Hawking has leveraged his prominence to bolster the AI safety movement since 2014, when he penned an editorial with other prominent AI researchers warning about the existential risks advanced machines could pose to humanity. Other public figures in the science and technology sphere, such as Elon Musk and Steve Wozniak, have since joined Hawking in trying to raise public awareness about AI risk, and earlier this year the three were among the signatories of an open letter calling for a ban on autonomous weapons, or armed robots.

The belief that humanity is on the verge of creating artificial intelligence greater than itself dates back to the 1950s. These expectations have repeatedly been foiled, but most AI researchers still believe that human-level machine intelligence will emerge within the next century.

For Hawking, the way advanced AI is depicted in popular culture misrepresents the actual threat posed by machines. Movies like “The Terminator” envision diabolical killer robots bent on destroying humanity, ascribing to machines motives that won’t exist in real life and making existential AI risk seem trivially easy to avoid (just don’t build killer robots).

“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble,” Hawking wrote. “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. ”

The doomsday scenarios imagined by prominent AI philosophers like Nick Bostrom instead involve well-intentioned programs that take a wrong turn while carrying out a seemingly benign task, such as deciding to eliminate humankind when ordered to “protect the environment.”

With help from deep-pocketed backers like Musk, who donated $10 million to AI safety research earlier this year, work on forestalling a Skynet-like scenario is well underway in academia, but Hawking wants a culture-wide transformation in how we see AI.

“Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use,” Hawking replied to a teacher who teaches an AI class. “When [the emergence of super-human level AI] eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”