ChatGPT Co-Creator Says the World May Not Be ‘That Far Away From Potentially Scary’ AI

Screens display the logos of OpenAI and ChatGPT in Toulouse, France, on Jan. 23, 2023. (Lionel Bonaventure/AFP via Getty Images)
Bryan Jung
2/20/2023
Updated: 2/20/2023

The co-creator of ChatGPT warned that the world may not be “that far away from potentially scary” artificial intelligence (AI).

Sam Altman, the CEO of ChatGPT creator OpenAI, said in a series of tweets on Feb. 18 that it will be “critical” to regulate AI until it can be better understood. He said he believes society needs time to adapt to “something so big” as AI.

“We also need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out. Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” Altman tweeted.

Altman further said that the path to an AI-enhanced future is “mostly good, and can happen somewhat fast,” comparing it to the transition from the “pre-smartphone world to post-smartphone world.”

He said that one issue regarding society’s adoption of AI chatbot technology is “people coming away unsettled from talking to a chatbot, even if they know what’s really going on.”

Altman had written about regulating AI in his blog back in March 2015: “The U.S. government, and all other governments, should regulate the development of SMI,” referring to superhuman machine intelligence.
“In an ideal world, regulation would slow down the bad guys and speed up the good guys. It seems like what happens with the first SMI to be developed will be very important.”

Microsoft’s ChatGPT-Powered Bing Faces Criticism for ‘Woke’ Responses to Users

Meanwhile, there have been well-publicized problems with Microsoft’s ChatGPT-powered Bing search engine in the past week.

Bing has reportedly given controversial responses to queries, ranging from “woke”-style rhetoric and deranged threats to emotional arguments with users.

Microsoft noted in a blog post last week that certain user engagements can “confuse the model,” which may lead the software to “reflect the tone in which it is being asked to provide responses that can lead to a style we didn’t intend.”

According to a blog post on Feb. 17, Microsoft will now limit the number of exchanges users can have with the bot to “50 chat turns per day and five chat turns per session” until the issues are addressed by its programmers.

Musk Calls for AI Regulation at Dubai Summit

Industrialist Elon Musk, a co-founder and former board member of OpenAI, has also advocated for proactive regulation of AI technology.

The current owner of Twitter once claimed that the technology has the potential to be more dangerous than nuclear weapons and that Google’s DeepMind AI project could one day effectively take over the world.

According to CNBC, Musk told attendees at the 2023 World Government Summit in Dubai last week that “we need to regulate AI safety” and that AI is “I think, actually a bigger risk to society than cars or planes or medicine.”

However, Musk still thinks that OpenAI has “great, great promise” and capabilities, both positive and negative, but said it needs regulation.

He was also critical of OpenAI’s direction in a tweet on Feb. 17.

Musk said he helped found it with Altman as an open source nonprofit company to serve as a counterweight to Google’s DeepMind AI project, “but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

Musk announced his resignation from OpenAI’s board of directors in 2018 to “eliminate a potential future conflict” with Tesla’s self-driving car program.
He later wrote in a tweet in 2019 that “Tesla was competing for some of same people as OpenAI and I didn’t agree with some of what OpenAI team wanted to do.”

Others involved in the project have also called for regulation. Mira Murati, OpenAI’s chief technology officer, told Time on Feb. 5 that ChatGPT should be regulated to avoid misuse and that it was “not too early” to regulate the technology.

Bryan S. Jung is a native and resident of New York City with a background in politics and the legal industry. He graduated from Binghamton University.