Facebook Shut Down AI That Developed Its Own Language: Is This What Experts Have Warned About?

Participants at Intel's Artificial Intelligence (AI) Day stand in front of a poster during the event in the Indian city of Bangalore on April 4, 2017. (Manjunath Kiran/AFP/Getty Images)
Petr Svab
7/31/2017
Updated: 10/5/2018

Concerns about the future of artificial intelligence (AI) were reignited this week when researchers at Facebook shut down an AI program after it created its own language.

While the incident itself doesn’t necessarily signify any danger, the surprising rate of AI development in recent years has experts warning about potential risks. Some even say the dangers are inevitable.

AI has been advancing by leaps and bounds, allowing Apple's Siri to write our text messages, Tesla's Autopilot to cruise our highways, and Google Translate to reach a level of sophistication that's no longer a punchline.

Yet leading experts in the field share a concern that AI has the potential to cause harm, or even to pose an existential threat.

More than 8,000 people, including top AI experts, have signed an open letter urging research into ways to ensure that AI helps, rather than harms, humankind.

Here are the major AI pitfalls they warn about, as well as some misconceptions the public has about AI.

Loss of Privacy

AI would be able to create increasingly detailed and precise profiles of people based on their personal data, dramatically accelerating the loss of privacy.


People Intentionally Making AI Do Evil

This is a legitimate and serious concern. Experts are generally opposed to the development of AI-powered weapons. Even though robots might spare some human bloodshed by being destroyed on the battlefield in place of soldiers, experts worry that weaponized AI could set off a global arms race. The problem is that even if the civilized world bans lethal AI, terrorists and unscrupulous regimes, like those of China and North Korea, may still develop it secretly. It may also be difficult to convince the public that AI weapons can be made impervious to hacking.

AI Causing Harm Unintentionally

This is a rather widespread concern. An AI may misinterpret what we want from it and cause damage as a result. Even if it understands an order correctly in the literal sense, carrying it out could have unforeseen consequences. For example, an AI ordered to protect trees might simply exterminate beavers.

AI Causing Harm Indirectly

Many experts believe an AI matching human intelligence could be a mere 30-40 years away, and that there’s no reason to believe it couldn’t surpass our intelligence.

“If you assume any rate of advancement in AI, we will be left behind by a lot,” said Elon Musk, founder of Tesla and SpaceX, at Recode’s Code Conference in 2016.

“We would be so far below them in intelligence that we would be like a pet,” he said. “We'll be like the house cat.”

Once AI reaches a level where it is less fallible than the average human, we may defer more and more decisions to it, gradually losing the ability to perform those tasks and make those decisions ourselves.

Many experts also worry about what would happen to people whose roles in society are displaced by AI.

Computers Turning Malevolent

AI professionals actually aren’t worried about this one. However intelligent computers may get, experts don’t expect to see them getting angry, becoming resentful, or otherwise displaying genuine emotion.

Robots Taking Over the Earth

While AI-powered robots are definitely being developed, with some appearing intimidating, experts are much more concerned about AI wreaking havoc on our civilization through the internet than through a mechanoid invasion.


Solution

Hundreds of experts have agreed on a core set of principles that should govern AI development to prevent it from going awry. The principles, known as the Asilomar AI Principles, were developed in conjunction with the 2017 Asilomar Conference on Beneficial AI.

However, some of the principles may be hard, or even impossible, to put into practice.

A key principle, for example, demands that AI be developed in accordance with “human values.” But who decides what those values should be, and which values get higher or lower priority? Or will AI follow only the values everybody agrees upon?

Another principle stipulates that the “economic prosperity created by AI should be shared broadly, to benefit all of humanity.” But what does that mean? Is it a call for government confiscation of profits from AI? Or a pledge by AI developers and investors to donate their profits?

AI experts acknowledge they don’t have answers to many such questions. They do, however, seem to agree on the questions that need to be answered before we flip the switch on advanced AI.