Is Your AI Woke?

A live demonstration uses artificial intelligence and facial recognition in dense crowd spatial-temporal technology at an exhibit in Las Vegas on Jan. 10, 2019. (David McNew/AFP via Getty Images)
Mark Stamp
9/8/2021
Updated: 9/12/2021
Commentary

At its core, artificial intelligence (AI) involves statistical discrimination; that is, AI algorithms extract decision-making insights from statistical information. In contrast to run-of-the-mill statistical discrimination techniques, AI algorithms are able to “learn” through a process that involves training on data.

The key to the success of AI is that the algorithms can generalize from the training data, as opposed to simply memorizing information.
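
To see what that distinction means in practice, here is a minimal sketch in Python (scikit-learn, synthetic data; the dataset and model choices are illustrative, not drawn from any real system). A one-nearest-neighbor classifier effectively memorizes its training set, but the measure that matters is its accuracy on data it has never seen:

```python
# Minimal sketch: memorization vs. generalization (synthetic data, scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Illustrative synthetic classification problem.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A 1-nearest-neighbor model stores the training data outright ...
model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# ... so it scores perfectly on the data it has memorized,
print("accuracy on memorized training data:", model.score(X_train, y_train))
# but its real worth is measured on data it has never seen.
print("accuracy on unseen test data:", round(model.score(X_test, y_test), 2))
```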

The quest for AI isn’t new, as the original work regarding artificial neurons dates to the 1940s, with some basic models developed in the 1950s still finding use today. However, the widespread deployment of AI is a recent trend—a trend that’s likely to increase exponentially in the coming years.

Since AI has such a long history, why has it only recently burst onto the scene? With apologies to Andrea True, the answer is “more, more, more.” Specifically, we have more computing power and more data, which enables us to build models with more layers of artificial neurons. This “deep learning” approach has produced much more powerful and useful models than were previously possible.

Claims of AI being biased are fairly common today. In response, there’s a movement afoot in academia and industry to root out some types of biases by building AI systems that incorporate various concepts of fairness. In my opinion, such efforts are likely to hobble AI in many application domains and, at worst, could turn AI into little more than a pseudo-science.

The old adage of “garbage in, garbage out” certainly applies to AI. If the data used to train an AI model is biased, the resulting model will learn to faithfully reproduce that bias.

In contrast to the data used for training, an AI algorithm has no inherent bias, as it pursues the same learning strategy regardless of the training data. So it might seem obvious that remedying charges of AI unfairness would boil down to obtaining better data on which to train our AI models.
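
A small illustration of that point, sketched in Python with synthetic data (the "hiring" scenario, the feature names, and the numbers are all illustrative assumptions): the same learning algorithm, run on unbiased labels and then on biased labels, simply reproduces whatever pattern the data contains.

```python
# Minimal sketch: the same algorithm reproduces whatever bias is in the labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
height = rng.normal(170, 10, n)   # cm (illustrative)
skill = rng.normal(0, 1, n)       # some job-relevant score (illustrative)
X = np.column_stack([(height - height.mean()) / height.std(), skill])

# Unbiased labels: past decisions depended only on skill.
y_fair = (skill > 0).astype(int)
# Biased labels: past decisions also favored taller candidates.
y_biased = (skill + 0.5 * (height - 170) / 10 > 0).astype(int)

for name, y in [("unbiased labels", y_fair), ("biased labels", y_biased)]:
    model = LogisticRegression().fit(X, y)
    # The learned weight on height reflects the data, not the algorithm.
    print(f"{name}: learned weight on height = {model.coef_[0][0]:+.2f}")
```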

But this isn’t the case, as current research into fairness largely focuses on constructing models that won’t produce specified outcomes, regardless of the training data. That can involve either modification to the training data or tinkering with the inner workings of AI training algorithms.

In either case, the goal is to prevent the resulting model from producing certain undesirable results, regardless of what the data might be telling the model.

Suppose that we collect data consisting of various statistics for a large number of people (height, weight, shoe size, etc.). Further, suppose that we want to be sure that our model doesn’t discriminate against tall people. We could simply ignore the height feature in our data and thereby prevent our AI model from directly using height as a discriminating feature.

Yet shoe size and weight might indirectly indicate height, resulting in tallness still being a factor in the AI decision-making process. So, the crude act of simply blotting out a feature from the training data might not be sufficient to prevent a specified bias from leaking into a trained AI model.
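
Here is a rough sketch of that leakage, in Python with synthetic data (the feature relationships and numbers are invented for illustration): even after the height column is removed, a simple model can reconstruct height from the features that remain.

```python
# Minimal sketch: a removed feature leaking back in through correlated features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
height = rng.normal(170, 10, n)                    # cm (illustrative)
weight = 0.9 * height - 90 + rng.normal(0, 5, n)   # kg, correlated with height
shoe = 0.2 * height - 10 + rng.normal(0, 1, n)     # shoe size, correlated with height

# "Fair" training data: the height column has been blotted out.
X_without_height = np.column_stack([weight, shoe])

# Yet height is still recoverable from what is left, so a downstream
# model can still discriminate on it indirectly.
proxy = LinearRegression().fit(X_without_height, height)
print(f"R^2 of height reconstructed from weight and shoe size: "
      f"{proxy.score(X_without_height, height):.2f}")
```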

An alternative is to modify the training process itself, which is at the heart of any AI algorithm. Depending on the particulars of a specific AI technique, there are a variety of ways to modify the training algorithm so that it won’t discriminate based on, say, the height of the subjects in our training sample.
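
One common way to do this, sketched below in Python with PyTorch and synthetic data, is to add a penalty term to the training loss that pushes the model’s average prediction for tall and not-tall subjects toward equality. The penalty weight, the group definition, and all numbers here are illustrative assumptions, not a specific published method.

```python
# Minimal sketch: adding a fairness penalty to the training objective (PyTorch).
import torch

torch.manual_seed(0)
n = 1000
height = torch.normal(170.0, 10.0, (n,))
weight = 0.9 * height - 90 + torch.normal(0.0, 5.0, (n,))
shoe = 0.2 * height - 10 + torch.normal(0.0, 1.0, (n,))
X = torch.stack([weight, shoe], dim=1)
X = (X - X.mean(0)) / X.std(0)                               # standardize features
y = (height + torch.normal(0.0, 5.0, (n,)) > 175).float()    # outcome tied to height
tall = (height > 180).float()                                # protected group (illustrative)

model = torch.nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
lam = 5.0  # strength of the fairness penalty (illustrative)

for _ in range(200):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    task_loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    # Gap in average predicted score between the tall and not-tall groups;
    # penalizing it steers training away from height-based discrimination.
    p = torch.sigmoid(logits)
    gap = (p * tall).sum() / tall.sum() - (p * (1 - tall)).sum() / (1 - tall).sum()
    loss = task_loss + lam * gap.abs()
    loss.backward()
    opt.step()

print(f"final gap in mean predicted score between groups: {gap.abs().item():.3f}")
```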

Whether we modify the data or the algorithm, we’ve artificially limited the information available to our AI models. Modifying the algorithm is likely to be the more direct and effective approach.

The deep learning algorithms that dominate AI today are notoriously opaque, in the sense that it’s difficult to understand how the models are making their decisions. By fundamentally altering those models to eliminate supposedly undesirable results, we open the door to manipulations that will—unintentionally or otherwise—introduce their own sets of biases.

Statistical discrimination is at the core of AI, and our models will still discriminate based on some aspects of the training data. And since those models are opaque, it may be well-nigh impossible to ferret out the source of an introduced bias after the fact.

Using such fairness principles, it isn’t hard to imagine AI models designed to detect ill-defined concepts, such as “fake news” or “hate speech,” being constructed in ways that are biased toward one side of the political spectrum. Such models would add a sheen of scientific respectability to their (biased) results, and it would be difficult to unearth the source and degree of any built-in bias.

While the goals may be laudable and the research problems are indeed interesting, fairness in AI dramatically increases the scope for mischief-making. Ultimately, those types of manipulations would threaten to weaken trust in AI.

“Everyone is entitled to his own opinion, but not his own facts,” the late New York Sen. Daniel Patrick Moynihan famously said.

AI models that include “fairness” hold out the prospect of entangling facts and opinions at a level that could make it virtually impossible to separate the two. In such cases, the opinions of the AI developer might be promoted to the level of objective “scientific” fact, at least by those who agree with the developer’s opinions.

In contrast, those who disagree with the developer’s opinions would be justified in their suspicion that the AI was rigged so as to produce a predetermined outcome.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Mark Stamp is a professor of computer science at San Jose State University. His teaching and research are focused on information security and machine learning. He has published more than 125 research articles on various topics involving information security and machine learning, and he has written well-regarded textbooks, including “Information Security: Principles and Practices” (Wiley) and “Introduction to Machine Learning with Applications in Information Security” (Chapman and Hall/CRC).