ChatGPT Makers Say AI Technology Could Surpass ‘Expert’ Human Skill Levels Within 10 Years, Flag ‘Existential Risk’
Sam Altman, the CEO of OpenAI, testifies at a Senate hearing on May 16, 2023. (Senate Judiciary Committee/Screenshot via EET)
Katabella Roberts
5/23/2023

The makers of ChatGPT have warned that Artificial Intelligence (AI) could “exceed expert skill level” across most domains within the next 10 years as “superintelligence” becomes more powerful than any “other technologies humanity has had to contend with.”

OpenAI executives, including CEO Sam Altman, President and Co-Founder Greg Brockman, and Co-Founder and Chief Scientist Ilya Sutskever, made the comments in a blog post published on May 22.

The executives noted that it is “conceivable” that within the next ten years, AI systems could be as productive as one of today’s largest corporations.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” they wrote. “We can have a dramatically more prosperous future but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”

The executives went on to lay out three proposals to address the risks associated with the increasingly widespread use and advancement of AI, including coordination across AI development leaders to ensure that the technology grows “in a manner that allows us to both maintain safety and help smooth integration of these systems with society.”

As part of this effort, major governments around the world could agree to a limit on how quickly AI capability at the frontier is allowed to grow each year, the executives said.

The executives also noted that individual companies should be held to an “extremely high standard” when it comes to acting responsibly.

Elsewhere, the OpenAI executives suggested the creation of an agency like the International Atomic Energy Agency (IAEA) to oversee superintelligence efforts associated with the advancement of AI “above a certain capability.”

Such advanced AI would be subject to audits, inspections, and testing to ensure compliance with safety standards.

The ChatGPT artificial intelligence software, which generates human-like conversation, on Feb. 3, 2023. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

‘Tremendous Upsides’ to Advanced AI

The agency would also place restrictions on degrees of deployment and levels of security, the executives said. This could start on a voluntary basis before being implemented by individual countries, they noted.

“It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say,” the executives wrote.

Finally, they stated that technical capability is needed to ensure that superintelligence is safe.

“This is an open research question that we and others are putting a lot of effort into,” they wrote in the blog post.

Altman and his colleagues noted, however, that it would be “unintuitively risky and difficult” to stop the creation of superintelligence, citing the “tremendous upsides” of such technology and the fact that halting its development would require something akin to a “global surveillance regime” that might not even work.

They also stressed the importance of allowing companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation they recommend in their blog post.

“Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate,” they wrote. “By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.”

Electric car maker Tesla CEO Elon Musk attends the 6th edition of the "Choose France" Summit at the Chateau de Versailles, outside Paris on May 15, 2023. (Ludovic Marin/AFP via Getty Images)

Musk, Gates Warn of AI Capabilities

The comments from the senior executives come shortly after Altman appeared before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law to address concerns regarding the risks posed by advanced artificial intelligence.
During that testimony (pdf), Altman noted that a number of companies—including financial firms and platforms for legal professionals—are already using ChatGPT for everything from improving customer support operations and answering questions about technical documentation to detecting fraud and researching legal issues.

Despite listing the various benefits of ChatGPT, Altman also told lawmakers that rapidly advancing AI technology will ultimately lead to widespread layoffs across various sectors.

“There will be an impact on jobs,” he said. “We try to be very clear about that.”
Multiple experts, including businessman Elon Musk and Apple co-founder Steve Wozniak, have called for a pause of at least six months in the training of systems more powerful than GPT-4, citing concerns over the technology’s risks to society and humanity.
In a May 16 interview with CNBC, Musk again warned that AI has the potential to “destroy humanity” if it goes wrong.

On Monday, Microsoft co-founder Bill Gates also warned that AI developers are currently competing to develop the first personalized AI agents, which could render retail giants such as Amazon obsolete as they change the way consumers do everything from using search engines to shopping online.

“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” Gates said, CNBC reports.

Gates noted that while personalized digital agents are still in the early stages of development, they have the power to impact the employment landscape for both white-collar and blue-collar workers as more companies opt to use such advanced technology over human workers.