US Government ‘Decades Behind’ Understanding AI: Former Presidential Candidate Andrew Yang

Democrat presidential candidate Andrew Yang speaks during a campaign event in Keene, N.H., on Feb. 5, 2020. (Justin Sullivan/Getty Images)
Naveen Athrappully
4/24/2023
Updated: 4/24/2023

Former presidential candidate Andrew Yang believes the American government is decades behind in understanding the consequences of artificial intelligence and warns that science fiction scenarios are now becoming real.

“It can be a force for civilizational progress, but it can also destroy us at the high end. So, you would want a tailored approach. You'd want an AI-dedicated body that actually understands the pluses and minuses and the different use cases of AI,” Yang, who is the co-chair of the Forward Party, said in an interview with Fox Business on April 24. “And that’s something we don’t have because our government is decades behind this curve.”

Yang said the United States has long gotten away with having a government that lags behind on technology, but with the advances in AI, that position is becoming increasingly dangerous.

“I was talking to my friend about this and she said, ‘Hey, what’s the worst that could happen?’ And I said, ‘Well, unwarranted military conflict, mass identity theft, spoofing of people by voices of their loved ones giving them a call,’” Yang explained. “All of these things are now on the table. Science fiction-type scenarios are here with us.”

According to Yang, the ongoing AI race incentivizes tech firms to go “as fast as possible” in developing artificial intelligence. “And in that kind of context, bad things are likely to happen.”

Supporting Musk

Yang also pointed out that Elon Musk was “spot on” in calling for a pause in AI development, referring to a March 22 open letter, signed by more than 1,000 tech industry leaders and researchers, that asked AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

GPT-4 is an artificial intelligence model developed by OpenAI, which the organization claims can “solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.”

The letter warned that AI systems are becoming competitive with humans at general tasks, posing profound risks to society.

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?” it asked.

The letter argued that powerful AI systems should only be developed once the world is confident that the effects of such AI will be positive and the risks arising from these systems “manageable.”

Other signatories of the letter include Apple co-founder Steve Wozniak, Stability AI founder and CEO Emad Mostaque, and engineers from Meta and Google.

“I think Elon’s spot on in calling for a pause, that’s why I joined him in that letter,” Yang said.

Public Fear of AI

The proliferation of artificial intelligence is also raising concerns among the American public.
A recent survey by Morning Consult found that 61 percent of adults see AI as “posing an existential threat to humans,” a figure that jumped to 69 percent among weekly AI users.

Fifty-six percent of adults surveyed also support instituting a “pause on the development of advanced AI.” That figure rose to 64 percent among weekly AI users.

Roughly 3 in 4 weekly AI users support an international agreement on the use of AI as well as the creation of shared safety protocols, and 4 in 5 believe current AI tools like ChatGPT are capable of acting without human input.

A YouGov poll of over 20,000 U.S. adults conducted in early April asked respondents how concerned they are about the possibility of AI causing the end of the human race on Earth. Twenty-seven percent said they were “somewhat concerned,” and 19 percent were “very concerned.”