Big Tech, AI, and the Fight Against Mass Violence

Mourners attend a memorial service in the Oregon District to recognize the victims of an early-morning mass shooting in the popular nightspot on Aug. 4, 2019, in Dayton, Ohio. (Scott Olson/Getty Images)
Aug. 26, 2019
Commentary

Where is Big Tech in the pursuit of ending mass violence? It’s a question that more and more public policymakers are asking.

The premise is simple: Facebook, Twitter, and YouTube host speech on their platforms and, at the same time, use algorithms and artificial intelligence (AI) to filter, censor, and even harvest that speech. They therefore already have the technological power to identify, and report to law enforcement authorities, individuals who display signs of a propensity to violence.
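To make that premise concrete, here is a minimal, purely hypothetical sketch of the kind of screening being described: score public posts for threat language and queue high-scoring ones for human review. The phrase list, scoring rule, and threshold are illustrative assumptions, not any platform’s actual system, which would rely on trained classifiers rather than keyword matching.

```python
# Hypothetical sketch: flag public posts containing threat language
# for human review. Phrase list and threshold are illustrative only;
# a real platform would use a trained text classifier.
from dataclasses import dataclass

THREAT_PHRASES = ("kill", "shoot up", "bomb", "massacre")

@dataclass
class Post:
    author: str
    text: str

def threat_score(post: Post) -> float:
    """Crude score: fraction of indicator phrases present in the post."""
    text = post.text.lower()
    hits = sum(1 for phrase in THREAT_PHRASES if phrase in text)
    return hits / len(THREAT_PHRASES)

def flag_for_review(posts: list[Post], threshold: float = 0.25) -> list[Post]:
    """Return posts whose score meets the human-review threshold."""
    return [p for p in posts if threat_score(p) >= threshold]

if __name__ == "__main__":
    sample = [
        Post("user1", "I am going to shoot up the school tomorrow"),
        Post("user2", "Great concert in the Oregon District last night!"),
    ]
    for p in flag_for_review(sample):
        print(f"Review queue: @{p.author}: {p.text!r}")
```

Even a toy like this illustrates the point: the hard part is not detection but the policy decision about what to do with a flagged post.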

Inanimate objects don’t kill people; people kill people.

Public Platform or Content Publisher?

Start with the premise that Big Tech can act; the censorship of content from Prager University by YouTube, Jesse Kelly by Twitter, and many more shows as much. Why, then, do post-incident reports keep citing social media manifestos, posts, and commentary that were known before the crimes took place, while Big Tech remains silent on the sidelines?

When Big Tech elites can conspire with their preferred social-action mobs, such as the abortion activists who got them to ban “pro-life ads ahead of an important abortion vote in Ireland last year,” according to Life News, it raises the question: Why isn’t Big Tech stepping up to identify threats to society with simple reporting?

Pipe bomber Cesar Sayoc is a case in point. Sayoc’s social media profiles included conspiracy theories, specific threats, and even graphic images, yet all of it was discovered only after the fact, once he had made mail-bombing attempts against critics of President Donald Trump and key Democratic Party political figures in October 2018.

Then there’s the more recent case of Connor Betts, the alleged Dayton, Ohio, mass shooter. “A Twitter account that appears to belong to Betts retweeted extreme left-wing and anti-police posts as well as tweets supporting Antifa, or anti-fascist, protesters,” CNN reported. Betts is said to have “often simulated shooting other students and threatened to kill himself and others on several occasions.”

At the other end of the spectrum, CNN reported, “a tip from a colleague led police to arrest Rodolfo Montoya. … ‘Suspect Montoya had clear plans, intent, and the means to carry out an act of violence that may have resulted in a mass-casualty incident,’ [Police Chief Robert] Luna said.” Preemptive action stopped a massacre from happening.

How Far Has Big Tech Gone to Block and Harvest Speech?

Social media companies already make the kind of judgment at the heart of most “Red Flag” law proposals: deciding when an individual may pose a risk to himself or others. As Fox Business reported, “Social media platforms have their own, expansive versions of community standards, breach of which can lead to one being blocked from using the site. No one likes to be put in ‘Facebook jail,’ but such standards are necessary for the continued sustainability of online platforms.” Providing advance warning to law enforcement agencies would be a natural extension of that practice.

In September 2018, The Telegraph reported: “In a letter to U.S. senators, Susan Molinari, Google’s vice president for public policy in the Americas, admitted that it lets app developers access the inboxes of millions of users—even though Google itself stopped looking in 2017. In some cases, human employees have manually read thousands of emails in order to help train AI systems, which perform the same task.” 

Another way to read this: Google can sift through data to find what it is looking for at a commercial level, albeit surreptitiously, inside email accounts. It’s reasonable to assume, then, that Google also has the capability (think AI) to operate at a higher level, outside individual emails but still within the sea of data that individuals freely post on the internet, divorced from privacy expectations.
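As a hypothetical illustration of what operating “at a higher level” could look like, the sketch below aggregates indicator hits per account across public posts only, escalating when a sustained pattern emerges rather than on any single post. The indicator list, threshold, and escalation hook are assumptions made for illustration; nothing here describes Google’s actual systems.

```python
# Hypothetical sketch: aggregate threat indicators per author across
# PUBLIC posts only, escalating on a sustained pattern, not one post.
from collections import defaultdict

def escalate(author: str, flagged: list[str]) -> None:
    # Assumed hook: in practice this might open a case for a human
    # trust-and-safety reviewer, not contact police automatically.
    print(f"Pattern detected for @{author}: {len(flagged)} flagged posts")

def scan_public_posts(posts: list[tuple[str, str]],
                      indicators: tuple[str, ...] = ("kill", "bomb"),
                      min_hits: int = 3) -> None:
    """Count indicator hits per author; escalate repeat patterns."""
    hits: dict[str, list[str]] = defaultdict(list)
    for author, text in posts:
        if any(word in text.lower() for word in indicators):
            hits[author].append(text)
    for author, flagged in hits.items():
        if len(flagged) >= min_hits:
            escalate(author, flagged)
```

The design choice matters: aggregating over public posts avoids reading anyone’s private correspondence, which is precisely the distinction the Telegraph report raises.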

Tail Wagging the Dog

It seems odd, indeed, that instead of warning civil society’s protectors before the fact, we hear that we should ban and confiscate guns. Notification is the low-hanging fruit on the tree of solutions to a problem that is rooted not in the weapon, but in the heart and mind of the person engaging in acts of violence.

Inaction and silence create a perverse incentive, even a motive: as violence happens, more and more calls come for confiscating the devices of personal protection from those who have done nothing wrong.

The Witness’s Dilemma

Silence in the face of predictable violence, whatever the weapon, be it a hammer, bomb, gun, or knife, is as great a threat to civil society as the violence itself.

It’s one thing for the government to track and spy on citizens; it’s quite another for private citizens and private-sector service providers, the moderators, to alert our government to threats expressed in public, whether in person or on public forums.

The Department of Homeland Security’s “See Something, Say Something” campaign is intended to prevent violence. So how about it, Big Tech: Do you want to pitch in and solve the problem on the front end?

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.