AI for the Win: A Dissenting Perspective

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken on Feb. 23, 2023. (Dado Ruvic/Reuters)
Michael Ryall
Siri Terjesen
6/13/2023
Updated: 6/14/2023
Commentary
The explosive adoption of the AI language model ChatGPT has been accompanied by a proportionate increase in alarmist assessments of the dangers of artificial intelligence. The alarm is sounding across the political spectrum, from The New York Times to Steve Bannon’s War Room, with concerns ranging from widespread job losses to more effective disinformation campaigns to facilitating oppression by authoritarian regimes to the very destruction of the human race.

These concerns aren’t shared evenly across the political spectrum. The left primarily focuses on the disinformation angle; the potential for oppression is the right’s primary worry. Everyone seems nervous that humanity may have opened a Pandora’s box that will ultimately lead to its own annihilation.

We agree that ChatGPT, built on GPT-3.5 and now GPT-4, represents a stunning leap forward in AI capabilities, which will certainly result in massive disruptions in ways both imaginable and unimaginable. However, a sober assessment of the state of AI indicates that these technologies are nowhere near the point of achieving "human-like" general intelligence, much less surpassing it. More important, a panicking left should always prompt reflection on possible opportunities to advance the cause of liberty.

The state of the art in AI is machine learning, which has developed sophisticated algorithms for discovering subtle correlations in data. A familiar example is Netflix, which uses its huge database to keep track of what subscribers watch by theme, genre, director, actor, and so forth. Machine learning works by "training" the machine to predict a subscriber's movie choices based on correlations with the past viewing behavior of "similar" subscribers. With tons of data (which Netflix has), the predictions become quite good. Google, Amazon, and Facebook all use this technology to nudge users toward additional content or advertising.
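To make the idea concrete, here is a minimal sketch, with invented numbers and no connection to Netflix's actual system, of how correlation-based recommendation works: score an unseen title for a subscriber by weighting what "similar" subscribers watched.

```python
# A minimal sketch (not any real recommender) of correlation-based
# recommendation: predict whether a subscriber will like a title from
# the viewing patterns of "similar" subscribers.
import numpy as np

# Hypothetical watch matrix: rows are subscribers, columns are titles
# (1 = watched, 0 = not watched).
watched = np.array([
    [1, 1, 0, 1],   # subscriber A
    [1, 1, 0, 0],   # subscriber B
    [0, 0, 1, 1],   # subscriber C
    [1, 1, 1, 0],   # subscriber D (target: will they watch title 3?)
])

target = 3           # index of subscriber D
candidate_title = 3  # a title D has not watched yet

# Similarity of D to every other subscriber = fraction of titles they agree on.
agreement = (watched == watched[target]).mean(axis=1)
others = [i for i in range(len(watched)) if i != target]

# Weight each other subscriber's choice on the candidate title by similarity.
score = sum(agreement[i] * watched[i, candidate_title] for i in others)
score /= sum(agreement[i] for i in others)
print(f"Predicted interest in title {candidate_title}: {score:.2f}")
```

The point of the toy is only that the prediction comes entirely from correlations in past behavior; with enough rows and columns, such predictions become very accurate.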

As remarkable as ChatGPT's capabilities are, the chatbot is an application of the same correlation-finding technology used by Netflix and others to predict your viewing and buying preferences. Loosely, ChatGPT works by predicting the most likely next word in a "chat," word by word, based upon an enormous corpus of text data (including such sources as Wikipedia, a massive number of web pages, books, and other texts). GPT chats are so incredibly convincing precisely because the dataset used to train the model was unfathomably large.
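As a toy illustration of the principle, and nothing like the neural networks that actually power ChatGPT, the sketch below predicts the most likely next word from simple bigram counts over a few sentences; the generate-one-word-at-a-time loop is the part the two approaches share.

```python
# A toy "predict the most likely next word" model built from bigram counts
# over a tiny invented corpus. Real systems such as ChatGPT use large neural
# networks trained on vastly more text; only the word-by-word loop is analogous.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return next_counts[word].most_common(1)[0][0]

# Generate a short continuation one word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```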

While this demonstrates the incredible accuracy with which a machine can identify correlations given enough data, it doesn’t demonstrate actual intelligence. For example, ChatGPT occasionally “hallucinates” when its predict-the-next-word algorithm goes off the rails. ChatGPT doesn’t know any better. Moreover, it can’t distinguish causality from correlation. For example, the data show that ice cream consumption and crime are positively correlated. Yet even a 12-year-old knows that trying to reduce crime by banning ice cream is a silly idea.
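The ice cream example can be reproduced with a few lines of invented data: two series that share a common cause (hot weather, in this toy model) correlate strongly even though neither causes the other, and the correlation is all a correlation-finding machine can see.

```python
# A minimal sketch of the ice-cream-and-crime point. The numbers are
# invented: both series are driven by temperature plus independent noise,
# so they correlate strongly without either causing the other.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(0, 35, size=365)          # daily temperature, deg C

ice_cream_sales = 10 + 3.0 * temperature + rng.normal(0, 5, 365)
crime_reports   = 20 + 0.8 * temperature + rng.normal(0, 3, 365)

r = np.corrcoef(ice_cream_sales, crime_reports)[0, 1]
print(f"Correlation between ice cream sales and crime: {r:.2f}")
# High correlation, yet banning ice cream would change nothing: the machine
# sees the correlation, not the common cause.
```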

Relatedly, humans have a remarkable capacity to make decisions in novel circumstances after seeing at most a single example, a capability computer scientists call "one-shot" learning. The AI field has made very little progress on this problem.

Finally, machines can't, even in principle, grasp the meaning of abstract concepts. For example, a human mathematician understands the Pythagorean theorem not only as a set of formulas and rules but also as a fundamental truth about the relationship among the sides of a right triangle. His insight into the nature of the theorem extends beyond its mere formulaic expression.

The remaining issue is the adoption of these technologies for the purpose of political oppression. In his recent speech at The Heritage Foundation, Tucker Carlson noted that we’re no longer in a world in which everyone wants what’s good for the nation but disagrees on how best to get there. Rather, Carlson went on, his political opponents are powerful foes bent on the promotion of ugliness, disorder, and destruction for their own sake. In other words, evil. When asked what has changed the most to affect the lives of everyday Americans, he answered: the centralization and control of information that has left millions of Americans unaware of essential facts about what’s happening around them.

Many conservatives will agree that this is an accurate summary of the unnerving situation in which we find ourselves. Hence, conservatives are right to be concerned that AI, like everything else, will be ruthlessly turned by the powerful against their political opponents. Yet here we pause. If AI provides such wonderful opportunities to the left, why the panic? And there's panic: Ireland, Australia, and Canada all passed legislation giving their governments essentially carte blanche control over the information viewed by their citizens. The U.S. "TikTok" bill, if passed, would give the U.S. government similar powers.

The “color revolution” is a method that was honed by national security services to destabilize and replace undesirable foreign governments. The process involves fanning and exploiting existing fears and grievances within the local population through propaganda and disinformation, creating economic instability, and manipulating elections to undermine and ultimately overthrow the existing regime. These techniques have been successfully turned against the U.S. population itself.

For example, fear of COVID-19 was amplified by organs of the state and by private members of the information cartel to concentrate power, stripping civil rights from a frightened public willing to surrender them. Now, we hear incessantly about the scary dangers of AI, accompanied by the familiar proposal to solve the "problem" by investing the government with more powers of control.

Why AI, and why now? The answer lies in what the left says it fears: "disinformation," by which the left means any information that disrupts its narrative (i.e., saying true things). What the powerful actually fear is the potential for recent advances in AI to be picked up by dissidents and used against them.

To see the problem from their perspective, note that one amazing ChatGPT capability is writing code. We have colleagues who, knowing nothing about the underlying programming language, use ChatGPT to write programs that, for example, act as automated teaching assistants, and they implement these programs in a matter of days.

Self-writing programs are inherently democratizing. ChatGPT removes the need for massive staffs of software engineers to accomplish sophisticated outcomes. This technology, and others like it, have the potential to empower a dissident Army of Davids with the ability to resist and, in time, break the power cartel's lock on information.

The recently introduced, President Reagan-inspired GIPPR chatbot and the censorship-free TUSK browser are promising developments along these lines. Indeed, their danger to the power monopoly extends beyond information wars, since AI can be pitted against AI in an offensive fashion to disrupt other systems of oppression.

Summing up, advances in AI are real and will be disruptive. However, we aren't on the brink of being surpassed by machines with superhuman general intelligence. What we do have, as dissidents against a brute power monopoly, is a table-turning tool with the potential to level the playing field.

Views expressed in this article are opinions of the authors and do not necessarily reflect the views of The Epoch Times.
Michael Ryall is professor of strategic management and director of the Executive Virtue Development Lab at the University of Toronto.