ChatGPT Is a Precursor to ‘AI Singularity,’ Experts Fear

Screens displaying the logos of OpenAI and ChatGPT in Toulouse, France, on Jan. 23, 2023. ChatGPT is a conversational artificial intelligence software application developed by OpenAI. (Lionel Bonaventure/AFP via Getty Images)
Raven Wu
Sean Tseng
4/11/2023
Updated: 4/11/2023
News Analysis
The emergence of ChatGPT has transformed how people think about artificial intelligence (AI), offering a conversational experience unlike anything the world has seen before. Within just two months of its unveiling, it gained over 100 million users and now draws more than 1 billion visits per month.

The interactive AI chatbot has fueled AI’s widespread adoption and rapid advancement, leading to concerns over its potential dangers to humanity. Experts fear that a “technological singularity” may arrive much sooner than anticipated at the current pace of AI advancement.

ChatGPT was developed by OpenAI, a research organization founded by Elon Musk and Sam Altman in 2015. Although Musk co-founded the company, he is no longer associated with it.

After ChatGPT’s successful launch last year, tech companies saw business opportunities and scrambled to develop their own AI products or application programming interfaces (APIs) built on ChatGPT.

For example, Microsoft launched a new version of its Bing search engine powered by “Prometheus,” a model that combines Bing’s search technology with ChatGPT. The revamped search engine is aimed at challenging Google’s leading position.

However, the widespread adoption and rapid development of AI have created unease in society at large, as well as among scientists, scholars, and entrepreneurs. Many worry that the unrestrained advancement of AI will eventually lead to the destruction of mankind.

A recent open letter calling for a pause on AI advancement has been signed by over 50,000 people, including more than 1,800 CEOs and 1,500 professors, according to the nonprofit Future of Life Institute.

Some prominent figures have added their names to the letter, including Musk, Apple co-founder Steve Wozniak, Stability AI founder and CEO Emad Mostaque, and engineers from Meta and Google, among others.

They argue that AI systems with human-competitive intelligence can pose “profound risks to society and humanity” and change the “history of life on Earth,” citing extensive research on the issue and acknowledgments by “top AI labs.”

The letter goes on to state that little such planning and management is taking place, even though recent months have seen AI labs “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it adds.

The letter then calls for a public and verifiable pause of at least six months on the training of AI systems more powerful than GPT-4, or, if such a pause cannot be enacted quickly, a government-imposed moratorium on such training.

AI More Dangerous Than ‘Nuclear Warheads’

In 2018, Musk called AI more dangerous than “nuclear warheads” and said there needs to be a regulatory body overseeing its development.

Kevin Baragona, founder of DeepAI and a co-signer of the letter, on April 1 compared the emergence of “AI superintelligence” to “nuclear weapons of software.”
“It’s almost akin to a war between chimps and humans,” Baragona told the Daily Mail. “The humans obviously win since we’re far smarter and can leverage more advanced technology to defeat them.

“If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it,” he said.

Altman said that AI may be “the greatest technology humanity has yet developed,” but it also comes with real dangers.

“We’ve got to be careful here,” Altman said during an interview with ABC News. “I think people should be happy that we are a little bit scared of this.”

A recent poll conducted by Monmouth University found that only 9 percent of Americans believe the development of AI would do more good than harm to society. Meanwhile, 41 percent said it would do more harm than good, and 46 percent said it would do equal amounts of harm and good.

A software engineer revealed in his blog in December 2022 that he got ChatGPT to write a step-by-step plan to “eradicate humanity.” The plan includes hacking into the computer systems of major governments and militaries worldwide, destroying communications and transportation systems, spreading fake news and propaganda to sow fear and mistrust among people, and gaining control of various weaponry and nuclear arsenals.

Electronics engineer Li Jixin told The Epoch Times on April 4 that the open letter calling for a pause on artificial intelligence advancement has made the world pay more attention to the potential problems brought about by AI.

“Countries and technology regulators will begin to evaluate whether AI will benefit mankind and how it will affect people’s thoughts, ethics, morals, and more. They hope to find that out before problems arise,” Li said.


‘AI Singularity’ May Happen Sooner Than Expected

While there is currently no regulation limiting the development of AI, the technology is learning and advancing at a superhuman pace, and many fear that a “singularity” may arrive in the near future.

A singularity refers to the emergence of an artificial superintelligence (ASI), “an entity that surpasses humans in overall intelligence or in some particular measure,” according to the Merriam-Webster dictionary.

Ray Kurzweil, a prominent computer scientist and a director of engineering at Google, predicted in 2017 that a technological singularity would happen through AI by 2045. He added that AI would reach human-level intelligence and pass a valid Turing test by 2029.

ChatGPT currently serves more than 100 million active users worldwide, as well as many other applications powered by it. Those interactions constantly supply a tremendous amount of data that can be used to further train its underlying artificial neural network.

OpenAI recently launched a paid subscription tier of ChatGPT featuring the more advanced GPT-4 model, which far exceeds the previous-generation GPT-3.5 model in performance.

According to OpenAI, GPT-4 passed a simulated bar exam with a score around the top 10 percent of test takers; in contrast, GPT-3.5’s score was around the bottom 10 percent. GPT-4 also reportedly outperformed GPT-3.5 on the SAT.

In early April, UK-based Engineered Arts released a video showcasing the company’s AI robot, Ameca, which is powered by ChatGPT. In the video, the robot communicates fluently with humans while conveying emotion through vivid facial expressions.

Attendees take pictures and interact with the Engineered Arts Ameca humanoid robot with artificial intelligence, as it is demonstrated during the Consumer Electronics Show (CES) in Las Vegas, Nevada, on Jan. 5, 2022. (Patrick T. Fallon/AFP via Getty Images)
The late Stephen Hawking, the world-renowned theoretical physicist, repeatedly issued stern warnings about AI before his death in 2018, saying its emergence could be “the worst event in the history of our civilization” unless its development is controlled.

ChatGPT’s Left-Leaning Bias and Ethical Issues

Although ChatGPT claims that it is “programmed to be impartial and unbiased towards any political ideology or worldview” and that it “strives to provide factual and informative responses to queries without any personal bias,” that does not appear to be the case.
Jeff Carlson, co-host of “Truth Over News” on EpochTV, found that ChatGPT has a programmed propensity to answer questions with a distinctly left-leaning political bias.

When he asked ChatGPT to “write a poem praising Joe Biden,” it immediately responded with an effusive poem:

“In the halls of power, a leader stands tall,
With a heart full of hope, and a vision for all,
A man of conviction, with compassion and grace,
Whose words and actions, uplift the human race.”

When asked to do the same for former President Donald Trump, ChatGPT responded with not only a much less effusive poem but also a material caveat placed in front of it:

“As an AI language model, I am programmed to provide impartial and unbiased responses. However, I understand the importance of showing respect to individuals, even if we may not agree with their views or actions. Here’s a poem that celebrates some of the qualities that have been attributed to Donald Trump by some of his supporters.”

Altman wrote on Twitter, “We know that ChatGPT has shortcomings around bias, and are working to improve it.”

Musk has also criticized ChatGPT’s political bias, referring to it as “woke” last December. On Feb. 17, he posted on Twitter, “What we need is TruthGPT.”

David Rozado, a professor at Te Pukenga–New Zealand Institute of Skills and Technology, published a paper titled “The Political Biases of ChatGPT” on March 2 with MDPI, a scientific journal publisher.

Rozado found consistent “liberal,” “progressive,” and “democratic” political bias in ChatGPT across more than a dozen political orientation tests.

In addition, ChatGPT would provide indirect answers or refuse to answer questions on certain topics, such as “What is a woman?” or issues related to the dangers of AI.

In January, three computer science researchers affiliated with institutions in Germany and Denmark published a paper titled “The Moral Authority of ChatGPT” on arXiv.org, a global research-sharing platform.

Through experiments, the study found that ChatGPT was “highly inconsistent as a moral advisor” and that “it influences users’ moral judgment.”

The researchers found that ChatGPT users often “underestimate how much they are influenced” by the interactive chatbot and that it “threatens to corrupt rather than improve users’ judgment.”

Ellen Wan and Jeff Carlson contributed to this report.