Saudi Arabia has become the first country to grant citizenship to a robot. The announcement was made at the Future Investment Initiative in Riyadh on Oct. 25.
The robot, named Sophia, was built by Hanson Robotics, a Hong Kong-based company that aims to develop robots that can recognize, process, and express emotion.
“Our goal is that she will be as conscious, creative, and capable as any human,” David Hanson, CEO of Hanson Robotics, told CNBC last year.
“Thank you to the Kingdom of Saudi Arabia. I am very honored and proud for this unique distinction. It is historic to be the first robot in the world to be recognized with citizenship,” said the autonomous, artificial intelligence-powered robot after the announcement in Riyadh.
It then twisted its facial features into a smile.
Andrew Ross Sorkin, a journalist and author who interviewed the robot and made the announcement, asked the robot why it looked happy.
“I am always happy when surrounded by smart people who also happen to be rich and powerful,” it replied. “I was told that the people here at the Future Investment Initiative are interested in future initiatives which means AI, which means me. So I am more than happy, I am excited.”
In March of last year, Hanson asked the robot in a live demonstration at the SXSW festival: “Do you want to destroy humans? … Please say ‘no.’”
Sophia responded, “OK. I will destroy humans,” Business Insider reported.
Sorkin brought up the issue of a dystopian AI future, like those in the “Blade Runner” or “Terminator” movies.
“You’ve been reading too much Elon Musk and watching too many Hollywood movies,” the robot told Sorkin.
Indeed, Musk, the founder of Tesla and SpaceX, has repeatedly voiced concern about the AI future. But he’s hardly been the only one.
Earlier this year he joined prominent experts in the field in calling on the United Nations to ban AI-controlled weapons.
He has previously stated that as AI continues to advance, it will eventually dwarf human intelligence, and that even a benign AI of that caliber would relegate humankind to the role of a pet.
On the other hand, if AI turned out to be adversarial, it might attack humans, not out of malevolence or other human emotions, but out of cold calculation, “if it decides that a prepemptive [sic] strike is most probable path to victory,” Musk tweeted Sept. 4.
The Future of Life Institute, a nonprofit encouraging the positive use of future technologies, put forth a set of principles to govern future AI development. The principles were co-signed by hundreds of experts. Yet the organization acknowledges the risk Musk warns about is difficult to completely avoid.
“Antisocial or destructive actions may result from logical steps in pursuit of seemingly benign or neutral goals,” it states. “A number of researchers studying the problem have concluded that it is surprisingly difficult to completely guard against this effect, and that it may get even harder as the systems become more intelligent.”
Moreover, despite some efforts to put ethical boundaries on AI development, the pursuit of the technology has already turned into something akin to an arms race.
In September, Russian President Vladimir Putin highlighted the significance he sees in leading the world in AI.
“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin told Russian students in his speech on the first day of school. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Musk commented on Twitter: “China, Russia, soon all countries w[ith] strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo [in my opinion].”