Facebook’s Grand Deception
A woman holds a smartphone with the Facebook logo in front of a display of Facebook's new rebrand logo, Meta, in this illustration picture taken on Oct. 28, 2021. (Dado Ruvic/Illustration/Reuters)
Dinesh D’Souza
12/20/2021
Commentary

Most people are by now aware that the mainstream digital platforms—Facebook, YouTube, and Twitter—don’t respect freedom of speech and have become venues for comprehensive restriction, suppression, and censorship. This censorship is pervasive, multilayered, and involves an increasingly wide range of topics: election fraud, COVID-19, issues of transsexuals and sexual orientation, Black Lives Matter, climate change, and so on.

Many people have their own personal story of how they, or people they know, were arbitrarily restricted. One guy gets flagged for saying that men can’t be women. Another is penalized for describing what he personally witnessed during the 2020 election. A third is deplatformed for saying something negative about Black Lives Matter. This, in some respects, is the Sovietization of America. Genuine debate and conversations go underground, because they can’t take place in what is now our shared public square.

Still, for those who haven’t experienced the arbitrariness, the Orwellian reversals, and the sheer dishonesty of digital censorship, it’s helpful to see how it operates through specific examples. Here I’ll offer two that both involve Facebook. The first is my own case, in which I was demonetized and flagged, and had my distribution restricted, for speaking out on behalf of Kyle Rittenhouse. The second is the case of TV journalist John Stossel, who was flagged and restricted by Facebook for making “partly false” or “out of context” remarks about climate change. Both cases reveal more about Facebook than they do about either of us.

In the days leading up to the Rittenhouse verdict, I appeared on the “Ingraham Angle” on the Fox News Channel, where I defended Rittenhouse and even praised him for coming to the defense of his community—a community in which his dad and other relatives lived—while the police and the adult males in that community stayed away or shut themselves up in their houses as looters, rioters, and arsonists destroyed their small town.

When I posted a clip of this appearance on Facebook, Facebook zinged me for violating their “Community Standards on Dangerous Individuals and Organizations.” Since I hadn’t referenced any organizations, the “dangerous individual” in question was obviously Kyle Rittenhouse. Facebook seemed to take the view that to defend Rittenhouse was to defend a violent white supremacist vigilante. This, of course, was the leftist narrative, which Facebook presumed was unquestionable fact.

Never mind that the trial showed Rittenhouse defended himself when he was violently attacked, that there was no evidence he was a white supremacist, and that the jury acquitted him on all counts. At this point, Facebook—presumably chastened by the findings of an actual trial—acknowledged that a defense of Rittenhouse didn’t by itself violate their “dangerous individuals and organizations” policy.

So I appealed Facebook’s strike against me—which came with the dire warning that “your page is at risk of being unpublished,” a penalty that would cause me to lose my 2 million-plus followers—and received this bizarre response. In Facebook’s own words, “This post was correctly removed according to our policies at the time. Since the verdict, we rolled back the restrictions. … Since this was posted and removed before the verdict/policy change, the content will not be restored.”

In sum, I’m still being tagged and restricted even though my violation is no longer a violation under Facebook’s current policy. Rittenhouse isn’t dangerous, he never was dangerous, and he was found by the jury to have acted legitimately. Facebook admits this, but since Facebook previously considered Rittenhouse dangerous, and I was found in violation during that period, Facebook insists that I’m still in violation and refuses to revise its decision in light of new information and its own revised policy. I find this maddening.

TV journalist John Stossel got dinged twice by Facebook. The first time was when he posted a video saying that California’s wildfires were mostly caused by poor government management. Facebook tagged that as “missing context,” linking to a post by a group called Science Feedback that interpreted Stossel as saying, “Forest fires are caused by poor management. Not by climate change.”

Stossel, however, didn’t say that. He didn’t deny climate change. He merely said that the main cause of the forest fires was government ineptitude. Facebook’s fact-checking partner evidently invented a straw man and then fact-checked that, pronouncing it “misleading.” Stossel’s second violation was a video that said climate change is real but human societies can adapt to it. Even though it’s hard to see what could conceivably be false about this claim, Facebook flagged it and shut down the distribution of this video. So Stossel sued Facebook’s parent company, since renamed Meta.

This is where the plot gets really interesting. In its legal defense, Facebook insisted that its fact check labels aren’t fact checks at all. Rather, they are themselves statements of opinion. Referencing the findings of Science Feedback and an adjunct group called Climate Feedback—the two groups recruited by Facebook to be part of its fact-checking process—Facebook claimed that “the challenged statements on those pages are ... neither false nor defamatory.” Facebook went on to say that its “missing context” label, affixed to one of Stossel’s videos, was “protected opinion” because the very term “is necessarily a judgment call, one that is ‘not capable of verification or refutation by means of objective proof.’”

Wow! How revealing! Let’s take stock of the situation. Facebook has been assuring the media and the public for a while now that it’s engaged in a socially responsible campaign of fighting “hate” and “misinformation.” Facebook’s regime of fact checks, which has its counterparts at YouTube and Twitter, is supposedly part of keeping false and misleading statements from gaining wide circulation and a following, with potentially hazardous outcomes for society.

But now Facebook basically comes out and says that this whole public posture is a lie. Facebook’s fact checks are merely Facebook’s point of view. In other words, Facebook has an opinion, which basically aligns with the opinion of the political left. Facebook doesn’t like rival or alternative opinions on its platform, and so under the pretense of checking facts, Facebook actually tags, restricts, suppresses, and censors opinions that run afoul of Facebook’s own ideology.

Clearly, Facebook is a publisher and not a neutral platform, which means that it doesn’t deserve its Section 230 protection. Moreover, Facebook is a public deceiver, in that it purports to restrict falsehoods while in fact it restricts arguments that conflict with its own preferred ideology. Our ultimate refuge is in building and cultivating rival platforms, which I hope will someday be larger than Facebook, YouTube, and Twitter. Censorship is evil, and there’s no better way to punish it than for people around the world to realize that they’d rather have their real debates and conversations someplace else.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Dinesh D’Souza is an author, filmmaker, and daily host of the Dinesh D’Souza podcast.