‘There’s the Possibility That AIs Will Run Out of Control’: Bill Gates

Microsoft founder-turned-philanthropist Bill Gates smiles during the Global Investment Summit at the Science Museum in London on Oct. 19, 2021. (Leon Neal/POOL/AFP via Getty Images)
Naveen Athrappully
3/30/2023

Bill Gates recently praised the evolution of artificial intelligence, discussed his relationship with OpenAI, and offered a brief warning about risks that other subject experts, including Elon Musk, portray in starker terms.

The Microsoft co-founder started out his March 21 GatesNotes post on AI in a hopeful tone: “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”

Gates said that AI can help with several progressive agendas, including climate change and economic inequities, but that the technology is “disruptive,” and will “make people uneasy.”

“AIs also make factual mistakes and experience hallucinations,” Gates wrote. AI hallucinations are confident responses by an AI that are not grounded in its training data. Frequent hallucinations are considered a major issue with large language models like ChatGPT.

“In addition, advances in AI will enable the creation of a personal agent. Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with,” Gates said.

These personal assistants will be part of company meetings and, in the health care industry, take care of administrative tasks like “filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit.” At a later stage, “they’ll be able to predict side effects and figure out dosing levels.”

Regarding the education sector, Gates said: “It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.”

The Other Side of AI

Gates opened this section by noting that AI does not understand “context for a human’s request,” leading to “strange results.” For example, “when you ask for advice about a trip you want to take, it may suggest hotels that don’t exist.”

Although Gates expects such technical issues to be resolved, he said some problems pose a greater danger.

“For example, there’s the threat posed by humans armed with AI. Like most inventions, artificial intelligence can be used for good purposes or malign ones.”

He then added, “Then there’s the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us?”

Gates then turned to superintelligent AIs—a learning algorithm that runs at the speed of a computer—which may be “a decade away or a century away.”

“These ‘strong’ AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests?”

Gates mentioned his relationship with OpenAI—the company behind ChatGPT—going back to 2016. At the end of January, OpenAI and Microsoft announced an extension of their partnership and a new investment.

OpenAI makes use of Microsoft’s Azure cloud platform. Microsoft is investing $10 billion into OpenAI, building on previous investments made in 2019 and 2021. OpenAI and Microsoft have a complicated partnership structure: the AI platform remains a “capped-profit” company whose operations are governed by the OpenAI nonprofit organization.

However, not everyone shares Gates’ positive take on AI, or on his relationship with OpenAI.

Elon Musk’s Not-So-Bullish Response

Elon Musk said in a tweet on March 27, “I remember the early meetings with Gates. His understanding of AI was limited. Still is.”

Musk’s relationship with OpenAI began in 2015, when he launched the project along with other industry veterans, including Y Combinator’s Sam Altman and Ilya Sutskever, then a research scientist at Google. Musk was one of the original funders of OpenAI. He left the organization in 2018, possibly due to a conflict of interest with Tesla’s AI division.

Generally, Musk has not put forward a rosy picture of AI, unlike Gates. He said in a tweet back in 2014, “We need to be super careful with AI. Potentially more dangerous than nukes.”
In a December tweet, he said, “ChatGPT is scary good. We are not far from dangerously strong AI.”
After the recent popularity and explosive growth of ChatGPT, Musk, along with other experts in the field—the signatories now number 1,377—signed an open letter titled “Pause Giant AI Experiments,” calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

GPT-4, released in March, is OpenAI’s latest large language model and the most advanced version available through ChatGPT.

The letter started off by saying that AI should be “planned for and managed with commensurate care and resources” but that it is not happening.

As AI becomes “human-competitive at general tasks,” the letter asks: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders.”

The letter calls for “safety protocols” in building such technology, with AI developers working in tandem with policymakers. These protocols must ensure that AI systems are “safe beyond a reasonable doubt.”

“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall,” concluded the letter.