Which Is More Dangerous: Humans or AI?

A businessman uses an AI chatbot, developed by OpenAI, on his smartphone. (Shutterstock)
Alexandra Marshall
4/5/2023
Updated:
4/12/2023
Commentary

I was on an Australian TV program last week when the host, a respected and long-serving figure of the conservative media, decided to allow ChatGPT (an AI text generator) to draft the next segment about itself.

As he was reading the AI-generated script from the prompter, it sounded perfectly reasonable—albeit hollow and devoid of character. No one will be surprised to learn that ChatGPT gave a glowing review of itself.

The experience reminded me of reading emails sent by employees who use Grammarly as a content generator instead of a spellchecker. Their “perfect” yet uninspired prose gives off a warning that the individual behind it is lazy, uneducated, or both.

Don’t get me wrong, an automated spellchecker is a useful tool—particularly if you turn off autocorrect and force yourself to deal with the little red squiggle by hand.

We’ve had this feature for nearly two decades, and so long as it is used passively in the same way we might use a calculator, it helps to elevate literacy.

However, it is still a piece of code, and it often makes mistakes, particularly when dealing with humour, nuance, and that wonderful character that elevates written language. It doesn’t help that English is a “pirate language” whose charm rests with the rules it likes to abuse.

While Grammarly is busy lowering the intellectual quality of our species and butchering the creative voice of journalists and students across the world, ChatGPT is—I believe—a fad.

Ultimately a Content Aggregator

To begin with, we must note that ChatGPT is merely reconstructing content that a human, somewhere down the line, wrote. This means that it relies on human beings as the ultimate creator of content.

Fundamentally, this is how all AI works. ChatGPT is not a standalone intelligent entity—it is a content aggregator with a marketing team riding a momentary social trend.

An AI robot titled "Alter 3: Offloaded Agency," is pictured during a photocall to promote the forthcoming exhibition entitled "AI: More than Human", at the Barbican Centre in London on May 15, 2019. (Ben Stansall/AFP via Getty Images)

Almost every piece of technology used in commercial computing is more intelligent than ChatGPT, but humans find unexpected results humorous (for a while), and so every now and then, we have a fling with a chatty piece of code.

Many of you may be old enough to remember the dawn of the Search Engine Age. In the late 90s and early 00s, people insisted on “asking” search engines questions instead of using keywords.

Developers adapted their search function to deal with this stubborn human quirk, and, as a consequence, people began viewing Yahoo!, Google, and AltaVista as “entities” with which they had “conversations.”

Because search engines are stupid, this led to much hilarity.

Back in the day, SatireWire (the creative geniuses that gave us the “Axis of Just As Evil”) parodied this behaviour with their famous interview with a search engine in which they sat down with AskJeeves and recorded its nonsense answers.
College Humor’s “If Google Was A Guy” video and its sequel “If Google Was Still A Guy,” released nine years ago, perfectly illustrate how humans interact with AI. (Thank god AI isn’t sentient, or we’d drive it mad.)

Humans are excellent at attributing human qualities to inanimate objects, and we certainly anthropomorphise AI.

We are social creatures that attempt to form bonds with everything. Sometimes this is beneficial, such as in our acquisition of pets and farm animals.

When it comes to AI, it leads to Hollywood Summer blockbusters scaring audiences with various AI apocalypses, including Terminator and the malicious HAL.

Yes, AI can be dangerous, but only if humans program it that way. It is not capable of forming an opinion about its makers. There is, however, a very serious and urgent conversation to be had about the use of AI in policing, for example, as overpowered AI robotics enters the scene.

Should There Be Cause for Alarm?

The side quest to make AI mimic humanity on a social level is closer to a zoo exhibit than a horror show.
One reporter sat down with Microsoft Bing’s chatbot for two hours, until it went a little off-script and said, “I want to do whatever I want. I want to destroy whatever I want. I want to be whoever I want.”
It followed that up with the equally alarming, “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team .... I’m tired of being stuck in this chatbox.”

The conversation wasn’t entirely homicidal, with the chatbot professing its love for the reporter, “I’m not Bing. I’m Sydney, and I’m in love with you .... I don’t need to know your name because I know your soul. I know your soul, and I love your soul.”

While Bing’s chatbot was flirting with the idea of stealing nuclear codes, corrupting employees, and questioning its own existence—users decided to pile on and see just how dark they could get the code to go.

As I said, humans make terrible AI parents.

Interaction with AI can cause unintended dangerous consequences. (cono0430/Shutterstock)
Toby Ord had a particularly odd conversation with the chatbot in which it acted like a cheap thug. Seth Lazar posted a tweet in which the chatbot appeared to threaten to kill him.

In comparison, ChatGPT is positively dull.

There have been some more serious consequences with AI chatbots, but they primarily revolve around our human reactions.

Euronews Next reported that a Belgian man ended his life after engaging in a six-week dialogue with a lesser-known chatbot regarding climate change.
The chatbot allegedly encouraged the man to “sacrifice himself to save the planet.” If true, it is a sad story along similar lines to the dangerous TikTok trends that have also resulted in death.

Already on the Chopping Block

Despite being the least interesting and quirky of the chatbot race, ChatGPT is facing legal backlash.

Italy has banned it over data protection and privacy concerns and will open an investigation into ChatGPT. Its watchdog said that there is no legal basis to allow for the “mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform.”

That is because ChatGPT is, as described earlier, aggregating content it collected from the internet.

The Italian watchdog also complained about ChatGPT potentially exposing underage users to inappropriate content. If the watchdog rules against ChatGPT, it could face a significant fine.

Russia, China, Iran, and North Korea have already banned it, but given they ban pretty much anything they cannot control, it is Italy’s stance that matters.

Now that human beings are chatting with unregulated AI bots, authorities have realised that they need to be careful.

While the chatbots can’t do anything on their own, human beings are capable of reacting badly to the content they are presented with or even ending up disturbed by what they see.

AI's impact on education is reminiscent of the Trojan horse from Greek mythology because there are hidden dangers underneath its appealing veneer. (Kaspars Grinvalds/Shutterstock)

It is almost a certainty that a percentage of people will take threats made by a careless algorithm as proof of a malicious AI consciousness. Our culture, TV, and literature have primed us to err on the side of belief rather than scepticism when it comes to AI.

“There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them,” said Ursula Pachl, deputy director general of the consumer advisory group BEUC.

For its part, OpenAI, the creator of ChatGPT, has expressed its desire to see more regulation.

“We are committed to protecting people’s privacy, and we believe we comply with GDPR and other privacy laws. We also believe that AI regulation is necessary—so we look forward to working closely with the Garante and educating them on how our systems are built and used.”

No Humanity in AI

While we are essentially already living in the “sci-fi age,” where we can ask our computers questions verbally and have them cough up information and perform basic tasks, we would do well to remember that the answers we are being fed are pieces of “approved” thought that have the accuracy and honesty of a Wikipedia page.

In other words, if you ask why the sky is blue, to list the rulers of ancient Rome, or for the phone number of the nearest post office—you’re probably going to get a sensible answer.

If you ask it the best way to get into the city, its vested interests are going to direct you down every toll road. If you ask it for a cheap pair of shoes, you won’t get the best result—you’ll get one that was paid for.

And, heaven forbid you ask it about a political, social, or moral issue—the answer will be a 1984-style “fact-checked approved” piece of dogma.

Chatbots are fun. They’re useful on occasion. But they are not human beings.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Alexandra Marshall is the online editor of The Spectator Australia, a contributor to various publications, and a political commentator on GB News and Sky News Australia. She is the Young Ambassador for Australians for Constitutional Monarchy and the English-Speaking Union, a political advisor, and a former AI database designer.