EXCLUSIVE: Lawmaker With Doctorate in AI Warns About Technology’s Real Danger—It’s Not Killer Robots

Rep. Jay Obernolte (R-Calif.). (Illustration by The Epoch Times/Getty Images)
Joseph Lord
4/21/2023
Updated: 4/25/2023

The only member of Congress with an advanced degree in artificial intelligence (AI) is urging caution as other lawmakers and industry leaders rush to regulate the technology.

Rep. Jay Obernolte (R-Calif.) is one of only four computer programmers in Congress and the only member with a doctorate in artificial intelligence, and he says the rush to regulate is misguided. His larger concerns about AI, he said, center on the potentially “Orwellian” uses of the technology by the state.

Recently, a coalition of technology leaders, including Twitter owner Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a temporary pause in advanced AI development. The letter followed the release of GPT-4, an extremely powerful artificial intelligence model that has, among other milestones, scored in the 90th percentile on the bar exam and scored highly on the SAT.

The release of GPT-4, easily the most powerful consumer AI on the market, prompted fears that AI was becoming far more capable far more quickly than expected. In their letter, the tech leaders called for a six-month pause on the development of more powerful systems and urged governments to regulate the technology.

Obernolte sat down with The Epoch Times to discuss AI, saying that regulation without more knowledge is ill-advised and based on a fundamental misunderstanding of AI technology.

“I’m not standing up and saying we shouldn’t regulate,” Obernolte said. “I think that regulation will ultimately be necessary.”

The ChatGPT logo at an office in Washington on March 15, 2023. (Stefani Reynolds/AFP via Getty Images)

But he said lawmakers need to ensure that they “understand what the dangers are that [they’re] trying to protect consumers against.”

“If we don’t have that understanding, then it’s impossible for us to create a regulatory framework that will guard against those dangers, which is the whole point of regulating in the first place,” Obernolte said. “Right now, it’s very clear that we do not have a good understanding of what the dangers are.”

Others in Congress have called for prompt action on the issue.

“This is something that is going to sneak up on us, and we'll get to the point where we’re in too deep to really make meaningful changes before it’s too late,” Rep. Lance Gooden (R-Texas) told Fox News.

He and others from both parties have raised concerns over the potential for AI to take over jobs previously done by humans. Others worry about the so-called singularity, a hypothesized point in AI development at which AI would surpass human intelligence and capabilities.

AI Not Likely to Take Over the World

Obernolte said the letter from tech leaders “is helpful in calling attention to the emergence of AI and the impacts it’s going to have on our society.” But he observed that for laymen, the greatest fears about AI are like those displayed in the Terminator movies, in which AI takes control of human computer networks and destroys the world in a nuclear apocalypse.

“The layman probably thinks that the largest danger in AI is an army of evil robots rising up to take over the world,” he said. “And that’s not what keeps thinkers in AI up at night. It certainly doesn’t keep me up at night.”

Before Congress can even consider regulating, Obernolte said, it needs to define “danger” in the context of AI.

“What are we afraid might happen? We need to answer that question to answer the question [of how to regulate],” he said.

A screenshot of the letter signed by innovator Elon Musk and others warning against the dangers of rushing artificial intelligence development. (Screenshot by The Epoch Times)

One fear Obernolte cited is the development of “emergent capabilities” in AI: abilities that a system acquires without having been explicitly trained for them. But he said this isn’t as big an issue as some claim, because it follows trends similar to those observed in primate brains.

“That’s very frightening and alarming to people,” he said. “But if you think about it, it shouldn’t be that alarming, because these are neural nets. Our brain is a neural net. And that’s the way our brain works. If you look at primate brain sizes, as you grow the brain size, all of a sudden things like language begin to emerge ... and we’re discovering the same things about AI. So I don’t find that alarming.”

Obernolte said the way GPT-4 actually works only bolsters his position.

“If you look closely at ChatGPT 4, it reinforces the veracity of what I’m saying,” Obernolte said. “AI is a tremendously powerful and efficient pattern recognizer.

“ChatGPT 4 is designed to take in this enormous amount of language, images, and prose in order to synthesize answers to questions. If you think about what has alarmed [AI critics], in the context of all of the data it’s been trained to recognize patterns in, it becomes a lot less alarming.”
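As a deliberately crude illustration of what “pattern recognition” means here, the Python sketch below builds a bigram model: it counts which word follows which in a small training text and generates continuations from those counts. GPT-4 is incomparably larger and uses deep neural networks rather than word counts, but the underlying task, predicting the next token from patterns in training data, is the same. The corpus and function names are invented for this toy example.

```python
import random
from collections import Counter, defaultdict

# Toy "pattern recognizer": count which word tends to follow which in a
# training text, then predict continuations from those counts. Real systems
# like GPT-4 learn far richer patterns, but the task is the same shape:
# predict the next token from patterns in the training data.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # record the observed pattern: nxt followed prev

def next_word(prev: str) -> str:
    # Sample the next word in proportion to how often each continuation
    # followed `prev` in the training text.
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a seed word.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output is fluent-looking recombination of the training text, not reasoning, which is the distinction Obernolte is drawing.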

AI Can’t Think or Reason

Another important aspect of AI that Obernolte pointed to is its inability to pass the Turing test or reason independently.

Proposed in 1950 by British mathematician and World War II codebreaker Alan Turing, the Turing test gauges whether a machine’s conversation can be distinguished from a human’s. For an AI to “pass” the Turing test, people speaking with it through a text interface shouldn’t be able to tell that they’re talking to a machine. Turing proposed the test as a practical stand-in for the question of whether machines can think.
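In code terms, the test is a simple protocol: a judge exchanges messages with a hidden respondent and then guesses whether it was human. The Python sketch below is purely illustrative; `machine_reply` is a hypothetical placeholder, not any real chatbot API.

```python
import random

# Minimal sketch of the Turing test protocol (Turing's "imitation game"):
# a judge exchanges text with a hidden respondent and must guess whether
# it is a human or a machine.

def machine_reply(prompt: str) -> str:
    # Hypothetical stand-in for a chatbot; a real test would connect an AI here.
    return "That's an interesting question. What do you think?"

def human_reply(prompt: str) -> str:
    # A human confederate types the answer.
    return input(f"(human, please answer) {prompt}\n> ")

def run_test(questions: list[str]) -> None:
    # Secretly choose a respondent; the judge sees only the text replies.
    is_machine = random.choice([True, False])
    respond = machine_reply if is_machine else human_reply
    for q in questions:
        print(f"Judge: {q}")
        print(f"Reply: {respond(q)}")
    guess = input("Judge: was that a machine? (y/n) ").strip() == "y"
    # The machine "passes" to the extent that judges cannot reliably tell.
    actual = "a machine" if is_machine else "a human"
    verdict = "right" if guess == is_machine else "wrong"
    print(f"The respondent was {actual}; the judge was {verdict}.")

run_test(["What did you have for breakfast?",
          "Why is a raven like a writing desk?"])
```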

An AI robot titled "Alter 3: Offloaded Agency" is pictured during a photo call to promote the exhibition entitled "AI: More than Human" at the Barbican Centre in London on May 15, 2019. (Ben Stansall/AFP via Getty Images)

Many see the Turing test as the gold standard for measuring AI intelligence. Thus far, no AI has definitively passed it.

This is important because if AI can’t reason like a human being or act independently, it poses little risk to humans. Almost all apocalyptic fears about AI involve a system becoming independent of its creators and working against human interests.

Obernolte opined that even if in the future an AI could pass the Turing test, that wouldn’t necessarily mean that it’s a “thinking, reasoning entity.”

It’s a matter of philosophical debate whether AI could ever have motives or carry out independent actions in the same sense as human beings can. And for at least the foreseeable future, there’s no reason to worry, he said.

“Certainly ChatGPT 4 cannot pass the Turing test,” Obernolte said. “It may be the case that ChatGPT 6 or 7 can. You could sit for an hour, talking back and forth, and not be able to determine whether it’s a person or a computer. That still will not mean that we have created a thinking, reasoning entity.”

Halting AI Development Would Empower US Foes

Obernolte said shutting down U.S. research into AI technology would only serve to empower enemies of the United States.

“In the most draconian case, let’s say that tomorrow I introduced a bill that required everyone in the United States of America to stop development of AI that was anything beyond the capabilities of GPT 4,” he said. “But we would still have bad actors in the United States who saw financial gain in continuing development of advanced AI that would continue to do it and flout the law. We would still have foreign adversaries using it.”

Soldiers of the People's Liberation Army's Honor Guard Battalion march outside the Forbidden City, near Tiananmen Square, on May 20, 2020, in Beijing. (Kevin Frayer/Getty Images)

Thus, AI development would still occur—it would just occur in the black market and among U.S. adversaries, according to Obernolte.

“It’s undeniable that we would put our country at greater risk of attack from advanced AI if we stopped our development of it,” he said. “Because when we resume it, our AI is not going to be as advanced as those of the people that didn’t stop. So it’s just not realistic to say, ‘Everyone stop what you’re doing.’

“Let’s talk about this. I’m glad that we’re talking about it.”

‘Orwellian’ Uses

Obernolte said he isn’t afraid of AI becoming independent and destroying humanity but that he is concerned about the “Orwellian” uses the technology could have.

“I do worry about some other very real dangers that, in their own way, are just as consequential and hazardous as robots taking over the world, but in different ways,” he said.

For one, Obernolte cited AI’s “uncanny ability to pierce through personal digital privacy.”

The result could be to help government or corporate entities predict and control behavior, he said.

Obernolte said AI could put formerly disaggregated personal information together “and use it to form behavioral models that make eerily accurate predictions about future human behavior.” And then it could “give people clues on how to influence that future human behavior.”

“It’s already being done,” he said, pointing to the social media companies whose whole business model revolves around the collection and sale of personal data.
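A toy sketch, using entirely invented data, of the aggregation-then-prediction pattern Obernolte describes: separately collected records are joined on a shared identifier and fed to a predictor. Real behavioral models are learned from vast datasets; the hand-written rule below is only a stand-in.

```python
# Hypothetical data illustrating how formerly disaggregated records can be
# merged into one behavioral profile. Real systems use far richer data and
# learned models; this sketch shows only the aggregation-then-predict shape.
purchases = {"user42": ["running shoes", "protein powder"]}
locations = {"user42": ["gym", "park", "gym"]}
searches = {"user42": ["marathon training plan"]}

def build_profile(user: str) -> dict:
    # Merge separate data streams keyed on the same identifier.
    return {
        "purchases": purchases.get(user, []),
        "locations": locations.get(user, []),
        "searches": searches.get(user, []),
    }

def predict_interest(profile: dict) -> str:
    # A crude hand-written rule standing in for a learned behavioral model.
    signals = [s for values in profile.values() for s in values]
    fitness_terms = ("gym", "running", "marathon", "protein")
    if any(term in signal for signal in signals for term in fitness_terms):
        return "likely receptive to fitness-related advertising"
    return "no strong signal"

print(predict_interest(build_profile("user42")))
```

The point of the example is that each data stream is innocuous on its own; the predictive power, and the privacy risk, comes from combining them.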

On a corporate level, Obernolte said this could mean a few major players form a “monopoly” over data with effectively insurmountable barriers to entry.

But the effects could be far worse if a state got hold of the technology, he predicted.

“I worry about the way that AI can empower a nation-state to create, essentially, a surveillance state, which is what China is doing with it,” Obernolte said. “They’ve created, essentially, the world’s largest surveillance state. They use that information to make predictive scores of people’s loyalty to the government. And they use those loyalty scores to award privileges. That’s pretty Orwellian.

“So this is a disruptive way that government can use it. And as we have learned to our misfortune in the history of our country, we need to put guardrails around government as well as industry.”