World War III won’t break out over a nuke launch from North Korea, but rather over competition for the most advanced artificial intelligence (AI), according to Elon Musk, CEO of SpaceX and Tesla.
While North Korea rattles its nuclear and ballistic-missile sabers, it would be suicidal for its regime to actually launch a nuclear missile at another country, Musk opined, as South Korea, the United States, and China would invade.
While in the 1950s the Chinese regime propped up its communist comrade in the Korean War against the United States, today’s China has been signaling it would at least stay neutral in a potential conflict.
The AI race, on the other hand, has already begun, Musk said, noting the Sept. 1 remarks of Russian President Vladimir Putin that world dominance will belong to the leader in AI.
“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin told Russian students in his speech on the first day of school. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Musk responded on Twitter: “China, Russia, soon all countries w[ith] strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo [in my opinion].”
Musk has been vocal about the risks of AI, joining other experts in the field in calling on the United Nations to ban AI-controlled weapons.
He previously stated that if AI continues to advance, it will eventually dwarf human intelligence, and that even a benign superintelligence would relegate humankind to the role of a pet.
If AI turned out to be adversarial, however, it might attack humans, not out of malevolence or any other human emotion, but out of cold calculation, "if it decides that a prepemptive (sic) strike is most probable path to victory," Musk tweeted on Sept. 4.
The Future of Life Institute, a nonprofit that encourages the beneficial use of future technologies, has put forth a set of principles to govern AI development, co-signed by hundreds of experts. Yet the organization acknowledges that the risk Musk warns about is difficult to avoid completely.
“Antisocial or destructive actions may result from logical steps in pursuit of seemingly benign or neutral goals,” it states. “A number of researchers studying the problem have concluded that it is surprisingly difficult to completely guard against this effect, and that it may get even harder as the systems become more intelligent. They might, for example, consider our efforts to control them as being impediments to attaining their goals.”