Stephen Hawking continues his crusade for artificial intelligence (AI) safety in a long-awaited Ask Me Anything thread on Reddit, stating that the groundwork for AI safety protocols needs to be laid not sometime in the distant future, but right now.
“We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence,” the famed physicist wrote. “It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.”
Hawking has leveraged his prominence to bolster the AI safety movement since 2014, when he penned an editorial with several leading AI researchers warning of the existential risks that advanced machines could pose to humanity. Other public figures in science and technology, including Elon Musk and Steve Wozniak, have since joined Hawking in raising public awareness of AI risk, and earlier this year the three were among the signatories of an open letter calling for a ban on autonomous weapons, or armed robots.
The belief that humanity is on the verge of creating an artificial intelligence greater than its own dates back to the 1950s. Those expectations have repeatedly gone unfulfilled, but most AI researchers still believe that human-level machine intelligence will emerge within the next century.