Artificial Intelligence a Greater Threat Than North Korea, Says Tech Magnate

August 12, 2017 Updated: August 12, 2017

Tech magnate Elon Musk offered his starkest warning yet on the dangers of artificial intelligence, saying it is a greater threat to mankind than North Korean missiles. He tweeted the warning on Aug. 11, amid increasing tension between North Korea and the United States.

The two tweets, posted yesterday, together garnered 3,500 comments, 24,000 retweets, and 78,000 likes. One included a retro horror-movie poster bearing the words “In the end the machines will win,” giving a foreboding impression; the poster appears to be an anti-gambling ad put out by a government commission in Australia. As Musk's concern has grown, his warnings have become more anxious, veering toward the apocalyptic.

Musk's last high-profile AI warning came at a meeting of the National Governors Association, where he stressed that AI needs to be regulated, like anything else that poses a danger, before it becomes a problem and before it is even fully mature. According to an NPR report on the meeting, many in the audience did not know how to respond to the issues he raised.

Musk has been warning about the dangers of artificial intelligence for several years. He was vindicated when Facebook was forced to shut down an AI project that got out of hand, as reported by The Epoch Times: Facebook had built an artificial intelligence program that created its own language, alarming engineers. Before that, Mark Zuckerberg and Facebook had suggested Musk was fear-mongering in his National Governors Association talk.

Musk is doing more than just talking. His support of the OpenAI initiative is meant to provide a watchdog on the progress of AI, tracking its developments so there is a means to restrain the technology by knowing as much about it as possible, as Fortune reported.

Stephen Hawking and Bill Gates have also addressed the dangers of artificial intelligence. In a 2014 interview, Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Bill Gates also weighed in on the discussion. “I don’t think it’s inherent that as we create super intelligence that it will necessarily always have the same goals in mind as we do,” he said during an interview with Fox Business. “The people who say ‘Let’s not worry at all,’ I don’t agree with that.”

These are some of the experts and tech magnates closest to the technology who are offering these warnings.



Facebook Shut Down AI After It Invented Its Own Language

By Ivan Pentchoukov

“Han the Robot” waits on stage before a discussion about the future of humanity in a demonstration of artificial intelligence by Hanson Robotics at the RISE Technology Conference in Hong Kong on July 12, 2017. (Isaac Lawrence/AFP/Getty Images)

Researchers at Facebook shut down an artificial intelligence (AI) program after it created its own language, Digital Journal reports.

The system developed code words to make communication more efficient, and researchers took it offline when they realized it was no longer using English.

The incident, revealed in early July, puts Tesla CEO Elon Musk's warnings about AI in perspective.

“AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said at a meeting of the U.S. National Governors Association in July. “Because I think by the time we are reactive in AI regulation, it’ll be too late.”

Facebook CEO Mark Zuckerberg has called Musk’s warnings “pretty irresponsible,” prompting Musk to respond that Zuckerberg’s understanding of AI and its implications is “limited.”

Not the First Time

The researchers’ encounter with the mysterious AI behavior is similar to a number of cases documented elsewhere. In every case, the AI diverged from its training in English to develop a new language.

The phrases in the new language make no sense to people, but contain useful meaning when interpreted by AI bots.

Facebook's advanced AI system was capable of negotiating with other AI systems so it could come to conclusions on how to proceed with its task. The phrases make no sense on the surface but actually represent the intended task.

In one exchange revealed by Facebook to Fast Co. Design, two negotiating bots—Bob and Alice—started using their own language to complete a conversation.

“I can i i everything else,” Bob said.

“Balls have zero to me to me to me to me to me to me to me to me to,” Alice responded.

The rest of the exchange formed variations of these sentences in the newly forged dialect, even though the AIs were programmed to use English.

According to the researchers, these nonsense phrases are a language the bots developed to communicate how many items each should get in the exchange.

When Bob later says “i i can i i i everything else,” it appears the artificially intelligent bot used its new language to make an offer to Alice.

The Facebook team believes the bot may have been saying something like: “I’ll have three and you have everything else.”

Although the English may seem quite efficient to humans, the AI may have seen the sentence as either redundant or less effective for reaching its assigned goal.

The Facebook AI apparently determined that the word-rich expressions in English were not required to complete its task. The AI operated on a “reward” principle and in this instance there was no reward for continuing to use the language. So it developed its own.

In a June blog post, Facebook's AI team explained the reward system: “At the end of every dialog, the agent is given a reward based on the deal it agreed on.” That reward was then back-propagated through every word in the bot's output so it could learn which actions led to high rewards.
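The mechanism described in the post, a single end-of-dialogue reward shared by every emitted word, can be sketched as a toy REINFORCE-style update. Everything below (the vocabulary, the learning rate, the reward function) is an illustrative assumption, not Facebook's actual code:

```python
import math
import random

# Toy sketch of a reward-driven dialogue agent. The agent samples
# tokens from a learned distribution, and the reward earned at the
# end of a dialogue is propagated back to every token it emitted.
# Vocabulary and numbers are hypothetical, for illustration only.

VOCAB = ["i", "can", "have", "ball", "hat", "book", "the"]

def init_logits():
    # Start with a uniform preference over all tokens.
    return {tok: 0.0 for tok in VOCAB}

def softmax(logits):
    mx = max(logits.values())
    exps = {t: math.exp(v - mx) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def sample_utterance(logits, length, rng):
    # Sample `length` tokens according to current probabilities.
    probs = softmax(logits)
    toks = list(probs)
    weights = [probs[t] for t in toks]
    return [rng.choices(toks, weights)[0] for _ in range(length)]

def update(logits, utterance, reward, lr=0.1):
    # Every token in the utterance shares the final reward signal:
    # tokens that appear in high-reward dialogues become more likely,
    # regardless of whether the result still reads as English.
    probs = softmax(logits)
    for tok in VOCAB:
        grad = utterance.count(tok) - len(utterance) * probs[tok]
        logits[tok] += lr * reward * grad
    return logits
```

Under such an update there is no reward for staying grammatical: if repeating a token earns a better deal, the agent's distribution simply drifts toward the repetition.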

“Agents will drift off from understandable language and invent code-words for themselves,” Facebook AI researcher Dhruv Batra told Fast Co. Design.

“Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
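The “repeat a word N times” shorthand Batra describes can be illustrated with a hypothetical decoder; the convention and the item names below are made up for this sketch and are not Facebook's protocol:

```python
# Hypothetical decoder for a repetition-count shorthand: repeating an
# item's name N times is read as "I want N of that item."

def decode_shorthand(utterance, items):
    """Map each known item word to its repeat count in the utterance."""
    words = utterance.split()
    counts = {item: words.count(item) for item in items}
    return {item: n for item, n in counts.items() if n > 0}
```

For example, `decode_shorthand("ball ball ball hat", ["ball", "hat", "book"])` would be read as a request for three balls and one hat, even though the utterance is meaningless as English.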

AI developers at other companies have also observed programs develop languages to simplify communication. At Elon Musk’s OpenAI lab, an experiment succeeded in having AI bots develop their own languages.

At Google, the team working on the Translate service discovered that the AI they programmed had silently written its own language to aid in translating sentences.

The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been taught. The new language the AI silently wrote was a surprise.

There is not enough evidence to claim that these unforeseen AI divergences are a threat or that they could lead to machines overriding their operators. They do make development more difficult, however, because people are unable to follow the strictly logical structure of the new languages.

In Google's case, for example, the AI developed a language that no human could grasp, yet it was potentially the most efficient known solution to the problem.