Elon Musk in Interview With Tucker Carlson Warns AI Could Cause ‘Civilizational Destruction’

Elon Musk speaks at the 2020 Satellite Conference and Exhibition in Washington on March 9, 2020. (Win McNamee/Getty Images)
Samantha Flom
4/14/2023
Updated:
4/23/2023

Tech billionaire Elon Musk is sounding the alarm about the risks of artificial intelligence (AI)—specifically, its potential for “civilizational destruction.”

In an April 14 preview of his interview with Fox News’ Tucker Carlson, Musk stresses that the ramifications of such technology could be disastrous for humanity.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential—however small one may regard that probability, but it is non-trivial—it has the potential of civilizational destruction.”

The CEO of Tesla, SpaceX, and Twitter should know, given that he also co-founded OpenAI—the nonprofit lab that created ChatGPT—in 2015.

‘Profound Risks’

An interactive chatbot, ChatGPT was released as a prototype in November 2022 to much fanfare and has since attracted more than 100 million users. But not all of the feedback has been positive.

A growing sense of unease about AI and its implications has begun to give many, like Musk, pause.

Last month, the billionaire, who is no longer associated with OpenAI, joined dozens of other industry experts and executives in signing a March 22 open letter calling on all AI labs to pause, for at least six months, the training of systems more powerful than OpenAI’s GPT-4. The letter has since garnered more than 25,000 signatures.

Holding that AI can pose “profound risks to society and humanity,” the AI experts asserted: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

Calling out OpenAI in particular, the signatories also noted that the organization itself had recently acknowledged that, “at some point,” it might be necessary to impose limitations on such systems’ rate of growth.

“We agree,” they wrote. “That point is now.”

However, while participating in a Massachusetts Institute of Technology discussion on April 13, OpenAI CEO Sam Altman said he felt the letter was missing “most technical nuance” about where and how any pause should apply.

While agreeing that safety is a legitimate concern, he noted that the lab is not currently training GPT-5.

“We are not and won’t for some time,” Altman said. “So, in that sense, it was sort of silly.”

Musk’s “Tucker Carlson Tonight” interview is set to air in two parts on April 17 and April 18 at 8 p.m. New York time. Other topics he reportedly will address include his Twitter takeover and future plans for the social media platform.

Samantha Flom is a reporter for The Epoch Times covering U.S. politics and news. A graduate of Syracuse University, she has a background in journalism and nonprofit communications. Contact her at [email protected].