AI Firm Cannot Go Public as ‘Strange’ Decisions May Need to Be Taken, Claims CEO

Sam Altman, the CEO of OpenAI, testifies at a Senate hearing on May 16, 2023. (Senate Judiciary Committee/Screenshot via The Epoch Times)
Naveen Athrappully
6/7/2023

Sam Altman, the chief executive of OpenAI, says there is no intention of taking the organization public, as he wants to retain complete control in order to make "strange" decisions when necessary.

“When we develop superintelligence, we’re likely to make some decisions that public market investors would view very strangely,” Altman said during an event in Abu Dhabi on Tuesday, according to Bloomberg. “And I think the chance that we have to make a very strange decision someday is non-trivial.” When asked about having no equity in the company, Altman said that he likes to remain “non-conflicted.”

The OpenAI chief executive also expressed his willingness to work with regulators in developing frameworks for regulating artificial intelligence (AI). Altman has been touring the world in recent weeks, meeting lawmakers who are considering developing rules for AI use.

Altman’s comments in Abu Dhabi come just three weeks after he testified before the U.S. Senate Committee on the Judiciary in May, where he appealed to lawmakers to create regulations for AI.

“I do think some regulation would be quite wise on this topic,” he said. “People need to know if they’re talking to an AI, if content they’re looking at might be generated or might not.”

Altman expressed concerns about AI becoming a potential threat to democracy by spreading misinformation during elections. “My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways,” he said.

Altman proposed that independent audits be conducted on organizations like OpenAI and that a new agency be created that would be focused on licensing AI firms. This agency would be tasked with ensuring that AI companies operate while complying with ethical standards.

The Threat of AI

Altman’s comments about having to make strange decisions come amid growing concern among experts worldwide about the rapid advancement of AI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the Center for AI Safety said in a statement that attracted signatures from AI scientists from around the world.

In March, an open letter from the Future of Life Institute (FLI) warned that AI advancements should be “planned for and managed with commensurate care and resources.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.” The letter has attracted over 31,800 signatures.

In a May 30 post, FLI called for developing and instituting international agreements to limit the proliferation of “particularly high-risk AI,” akin to the Biological Weapons Convention (BWC) and the Nuclear Non-Proliferation Treaty (NPT).

It also suggested setting up intergovernmental organizations to promote peaceful uses of AI while ensuring guardrails are enforced and risks are mitigated, giving the example of the International Atomic Energy Agency (IAEA).

US AI Regulations

In the United States, attempts are being made to understand the risks posed by various AI technologies and then frame regulations to keep them in check.

A bipartisan group of senators has planned three summer hearings on the potential dangers of AI technologies. The three meetings will cover the following issues: a) where AI is today; b) what the frontier of AI is and how America can maintain leadership; and c) how AI is being used by the U.S. intelligence community and the Department of Defense, and how America’s adversaries are using AI.

Back in May, Sen. Richard Blumenthal (D-Conn.) called for ensuring that an AI-dominated future aligns with societal values. “Congress failed to meet the moment on social media,” he said. “Now [we] have the obligation to do it on AI before the threats and the risks become real.”

On April 11, the National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, said it would spend 60 days examining options to ease public anxiety surrounding AI technologies.