Despite Stock Woes, Big Tech Toes the Line of Political Correctness

Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill on April 11, 2018. (Chip Somodevilla/Getty Images)
Petr Svab
12/24/2018 | Updated: 12/28/2018

Giant tech corporations like Google, Facebook, and Twitter have faced government scrutiny and losses in the stock market in 2018. Yet more uncertainty awaits as they finish the year committed to policing their users’ behavior ever more strongly—up to a standard that may prove to be untenable.

After having their executives grilled in three congressional hearings over bias, privacy, and “Russian bots,” the companies finished the year with their stocks in a slump. Twitter and Google were down about 20 percent compared to their 2018 peaks, a decline mirroring the broader market. Facebook, on the other hand, crashed by some 40 percent, wiping out the gains of nearly two years.

The legacy media and Democratic lawmakers have largely focused on the companies’ struggles to purge their platforms of foreign actors (mainly Russian) meddling in U.S. elections, and to stop user data from being hacked or improperly shared.

But conservatives, including President Donald Trump, have repeatedly accused the companies of political bias as a string of influential right-leaning users has been booted from the platforms.

Under pressure from both sides, the companies have pressed ahead with ever-tighter restrictions on content they deem harmful, aligning broadly with the ideology of political correctness, with a particular focus on hate speech.

While the crackdown on hate speech is waged under the banner of protecting people from being disparaged based on their identity, several experts have warned that the definition of hate speech is too vague and broad to be enforced consistently and evenly.

Deciding Hate

Hate speech usually refers to derogatory statements based on someone’s “protected characteristics,” such as race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, serious disease, or disability. The trouble is, no two people seem to agree on what exactly should or shouldn’t qualify as hate speech.

Facebook has committed to “removing hate speech any time we become aware of it,” touting the removal of nearly 300,000 posts a month for violations. But the company also acknowledged, in the same blog post, that “there is no universally accepted answer for when something crosses the line.”

Derogatory speech, no matter who is targeted, is protected from government regulation under the First Amendment, unless it directly threatens or incites violence. But the platforms have taken it upon themselves to police statements that may have been meant as mere provocation, generalization, or a personal insult.

Facebook, for instance, prohibits “expressions of contempt” such as “I hate,” “I don’t like,” or “X are the worst” that “target a person or group of people who share any of the [protected] characteristics.”

Yet the company acknowledges that many expressions that may technically fit the bill are part of legitimate discourse.

“To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith,” it stated. “To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh.”

Ultimately, Facebook acknowledges that its content police, whose ranks have tripled since last year to some 30,000, have to make a judgment call in each case.

Former senior Facebook engineer Brian Amerige described Facebook’s company climate as a political monoculture, in which “Facebook’s community standards are chaotically, almost randomly, enforced, with escalations and occasional reversals happening when the screw-ups are prominent enough to cause media attention.”

He tried to change the culture from within and even received attention from the leadership, but eventually reached an impasse on the issue of hate speech.

“Hate speech can’t be defined consistently and it can’t be implemented reliably, so it ends up being a series of one-off ‘pragmatic’ decisions,” he said. “I think it’s a serious strategic misstep for a company whose product’s primary value is as a tool for free expression.”

What’s the Harm

Proponents of hate speech regulations argue that such statements are harmful, because they may make people feel threatened.

“We must continue to make improvements to our service so that everyone feels safe participating in the public conversation—whether they are speaking or simply listening,” Twitter CEO Jack Dorsey said in written testimony to Congress on Sept. 5.

But what makes people feel threatened can differ from an actual threat.

“These companies do have to kind of figure out whether they’re trying to protect people from feeling physically threatened, or whether they’re trying to protect users from feeling threatened in other ways, emotionally or psychologically,” said Thomas Healy, a law professor at Seton Hall University and an expert on free speech.

Another argument for hate speech regulation is that it may eventually create an environment where some people may feel encouraged to commit actual violence. Indeed, oppressive regimes have commonly eliminated their enemies after subjecting them to a barrage of dehumanizing propaganda.

Yet trying to prevent physical harm that may or may not materialize in the future by policing the speech of individuals carries its own pernicious consequences.

“This has already moved us in a dystopic direction, into the realms of ‘Nineteen Eighty-Four,’ ‘Brave New World,’ and ‘Minority Report’—the world of thought crimes and precrimes, with people afraid to state their views because they have no idea what will bring them under suspicion, cut them off from modern modes of communication, or put them in jail,” said William McGowan, a veteran journalist, Epoch Times contributor, and author of several books on the decline and corruption of media.

Some European countries indeed have laws on the books imposing penalties for hate speech, pushing the online platforms to remove such content or risk facing liability themselves.

Heckler’s Veto

A majority of Americans (52 percent) are against the country becoming more politically correct and are “upset that there are too many things people can’t say anymore,” according to a Nov. 28–Dec. 4 NPR/PBS/Marist poll.

But views split along party lines: 55 percent of Democrats want more political correctness, compared with a mere 14 percent of Republicans.

Since online platforms commonly rely on user reports to alert them of violations, the mechanism appears doomed to produce political bias.

Not only are left-leaning Americans more in favor of political correctness, but some on the far-left, progressive end of the spectrum have formed groups that specialize in getting right-leaning voices barred from online platforms.

That leads to a situation in which a relatively small, organized group can scour someone’s social media for anything that approximates hate speech and inundate a tech company with complaints—an online variant of the “heckler’s veto” phenomenon.

Conservative commentator Ben Shapiro has used the phrase “heckler’s veto” to describe the situation he and others have faced at college campuses, where some individuals feel the need to not only protest or contest certain invited speakers, but also try to prevent them from speaking in the first place.

Nearly two-thirds of the attempts to disinvite a speaker have come from the political left, based on 379 incidents over the past 15 years collected by the Foundation for Individual Rights in Education (FIRE).

And the ratio has skewed further left recently. Since 2017, FIRE recorded 45 disinvitation attempts, with only six originating from the right.

The pro-free speech civil rights advocacy group stated that the attempts have not only increased in frequency, but are also increasingly successful.
Opponents were able to cancel invitations to 25 speakers over the past two years, while in 2000–2001, only 10 attempts were recorded by FIRE, with just one being successful.

Belief in Free Speech

In a 2016 interview, Healy pointed out that while private companies are not bound by the First Amendment, the principles of free speech have been an important value of the nation.

“We rely on counter speech,” he said. “If the speech is dangerous or false, we rely on counter speech to correct that.”

While the mechanism is not perfect, he acknowledged, it’s based on Americans’ faith “that it’s superior to government regulation or regulation by Facebook or whatever other entity we’re talking about.”

While online platforms, in their rhetoric, have uniformly pledged to support free speech, in practice they’ve been moving away from it, according to an internal Google research document dated March 2018 and later leaked to Breitbart.

The document, titled “The Good Censor,” argues that “tech firms have gradually shifted away from unmediated free speech and towards censorship and moderation.”

Several competitors, such as Gab and Minds, have emerged, capitalizing on their commitment to free speech and privacy. Minds has attracted about 200,000 monthly active users, while Gab has released data only on registered users, counting some 800,000. Twitter, by comparison, has almost 330 million monthly active users, a figure itself dwarfed by Facebook’s nearly 2.3 billion.

Correction: A previous version of this article incorrectly identified the Foundation for Individual Rights in Education as right-leaning. Though the organization has accepted funding from right-leaning donors, it presents itself as non-partisan and has defended the rights of individuals tied to both right- and left-leaning causes. The Epoch Times regrets the error.