Deepfakes and the National Effort to Stop Them

A woman in Washington, DC, views a manipulated video on Jan. 24, 2019. (Rob Lever/AFP/Getty Images)
Ronald J. Rychlak
9/18/2019
Commentary
Earlier this year, the technology firm SafeGuard Cyber uncovered evidence that almost 7,000 computer users linked to the Russian government posted enough content on social media to reach more than 240 million internet users in the European Union.
That social media content, which was, in actuality, state-sponsored disinformation, could have influenced as many as half of all European voters in EU elections that took place in May. SafeGuard Cyber suggested that the EU fund a “center of excellence staffed with analysts” equipped to respond to the emerging threat posed by disinformation.

Similar charges have been made about the influence of Russian meddlers in the 2016 U.S. elections. The claim is that they pushed dubious content over platforms such as Facebook and Twitter in order to influence the outcome. Many observers are concerned about Chinese influence in the coming 2020 elections.

Officials have been working for some time on ways to prevent hackers from spreading false information on social media with the intent of influencing U.S. elections.

The U.S. Department of Defense is now taking an experimental step. The project, called Semantic Forensics (SemaFor), will be carried out by the Defense Advanced Research Projects Agency (DARPA), an agency within the department.

DARPA will test custom software designed to unearth fakes planted among legitimate stories, photos, video, and audio clips on the internet. During the tests, DARPA hopes the software will be able to scan more than 500,000 news and social media posts and identify the 5,000 fabricated items hidden among them.

The task won’t be easy. Common forensic tools in use today can’t detect sophisticated image, video, and audio edits, and they usually address only some aspects of media authentication. The aim of SemaFor is to develop an effective, end-to-end platform that performs a complete, automated forensic analysis of still images, video, and audio recordings.
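Some simple arithmetic shows why. The short Python sketch below is purely illustrative; only the 500,000-post, 5,000-fake split comes from the test described above, and the detector accuracy rates are assumptions. It estimates what fraction of flagged posts would actually be fake:

```python
# Illustrative only: why finding 5,000 fakes among 500,000 posts is hard.
# The 95% detection rates below are assumptions, not DARPA figures.

def alarm_precision(total: int, fakes: int, tpr: float, tnr: float) -> float:
    """Return the fraction of flagged items that are actually fake."""
    reals = total - fakes
    true_alarms = fakes * tpr            # fakes correctly flagged
    false_alarms = reals * (1.0 - tnr)   # real items wrongly flagged
    return true_alarms / (true_alarms + false_alarms)

if __name__ == "__main__":
    # Scale from the SemaFor test; a detector assumed to be right 95% of
    # the time on both fake and real posts.
    p = alarm_precision(total=500_000, fakes=5_000, tpr=0.95, tnr=0.95)
    print(f"Share of flagged posts that are actually fake: {p:.0%}")  # ~16%
```

At those assumed rates, roughly five of every six flagged posts would be genuine, so the real hurdle at this scale is precision against a tiny base rate of fakes, not raw accuracy.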

“There is a difference between manipulations that alter media for entertainment or artistic purposes and those that alter media to generate a negative real-world impact. The algorithms developed on the SemaFor program will help analysts automatically identify and understand media that was falsified for malicious purposes,” DARPA program manager Dr. Matt Turek said in a statement.

Deepfakes

Perhaps the most worrisome fake news today comes in the form of “deepfakes,” which are falsified videos or audio recordings that look and sound so legitimate that they are very hard to distinguish from the real thing.
Not long ago, only Hollywood movie studios and governmental intelligence agencies could produce such sophisticated fabrications. (Remember actor Gary Sinise’s portrayal of the amputee Lt. Dan in “Forrest Gump”?) Today, anyone with a computer can download software that will let them create convincing fake videos. There’s even a Chinese app that lets users insert their faces into television programs.

Thus far, most deepfakes have been put together by amateurs having fun by making celebrities say funny things or putting them in compromising sexual positions. However, it wouldn’t be hard to create a deepfake depicting the president announcing an attack, a police officer making a racist remark, or a corporate head plotting to profit at the expense of the environment. Such a deepfake could destroy someone’s reputation or disrupt a close election.

Fake news and disinformation today have even greater effects than they did in the past because of the ease with which they can be created and disseminated via social media. Just this summer, Twitter and Facebook suspended hundreds of accounts that they believed were linked to a state-backed misinformation campaign against pro-democracy protesters in Hong Kong.
In a speech to the Heritage Foundation last summer, former presidential candidate Sen. Marco Rubio (R-Fla.) called deepfakes a threat to national security:
“In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.”
That’s why the SemaFor project is so important. According to DARPA, “A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies.”
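To make that asymmetry concrete, here is a minimal Python sketch of the idea: a suite of independent checks in which a single detected inconsistency is enough to flag a piece of media. The individual check functions are hypothetical placeholders, not actual SemaFor components:

```python
# Hypothetical sketch of a "suite of semantic inconsistency detectors."
# Media is flagged if ANY single check finds a problem.

from typing import Callable, List

Check = Callable[[dict], bool]  # returns True if an inconsistency is found

def lighting_inconsistent(media: dict) -> bool:
    return media.get("lighting_mismatch", False)

def audio_out_of_sync(media: dict) -> bool:
    return media.get("lip_sync_error", False)

def metadata_contradicts_content(media: dict) -> bool:
    return media.get("metadata_conflict", False)

DETECTORS: List[Check] = [
    lighting_inconsistent,
    audio_out_of_sync,
    metadata_contradicts_content,
]

def is_suspect(media: dict) -> bool:
    # A falsifier must pass every check; a defender needs only one hit.
    return any(check(media) for check in DETECTORS)

if __name__ == "__main__":
    clip = {"lip_sync_error": True}  # one slip is enough to be flagged
    print(is_suspect(clip))  # True
```

The falsifier has to get every semantic detail right to slip past such a suite, while the defender needs only one check to fire. That is the burden shift DARPA describes.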
DARPA’s algorithm testing will be conducted in three phases over four years. In the first phase, the project will cover online news and social media, while the second and third phases will focus on analyses of technical propaganda and week-long “hackathons.”
If the test is successful, the system could be employed widely to prevent viral fake news from affecting the U.S. government and elections.

Dealing With Disinformation

DARPA is the correct U.S. agency to deal with disinformation. Its founding dates back to the launch of Sputnik in 1957. It was created as part of a commitment by the United States to “be the initiator and not the victim of strategic technological surprises.” Over the years, DARPA has developed many precision weapons and much of the military’s stealth technology, as well as non-military items such as key operating protocols for the internet, automated voice recognition and language translation, and Global Positioning System receivers small enough to be placed in consumer devices.

DARPA can’t keep all fake news and malicious stories off the internet, but with this new effort to uncover disinformation through automated algorithmic checks, the goal is to stop fake items before they go viral.

The agency is also working on a research program called Media Forensics, which has the goal of creating an advanced image-authentication system. If successful, that system will not only detect manipulations in images but also provide information about how those manipulations were done. Presumably, that will also help investigators determine who did them. That could be important in a wide range of matters, from military to criminal.
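DARPA hasn’t published the internals of that system, but a classic open-source heuristic called error level analysis hints at how image authentication can work: recompress a JPEG and look for regions that respond differently, since spliced or edited areas often recompress unevenly. The sketch below is an illustration of that heuristic using the Pillow library, not Media Forensics code:

```python
# Error level analysis (ELA), a classic image-forensics heuristic.
# Illustrative only; requires Pillow (pip install Pillow).

import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Recompress a JPEG and return the amplified per-pixel difference.

    Edited or spliced regions often recompress differently from the rest
    of the image, so they tend to stand out in the difference map.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # The raw differences are faint; stretch them so they are visible.
    max_channel = max(high for _, high in diff.getextrema())
    scale = 255.0 / max(max_channel, 1)
    return diff.point(lambda px: int(px * scale))

if __name__ == "__main__":
    error_level_analysis("photo.jpg").save("photo_ela.png")  # hypothetical file
```

A heuristic like this can suggest that something in an image was altered; the harder goals of Media Forensics, explaining how a manipulation was done and helping trace who did it, require much richer analysis.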
Of course, despite all of the work by DARPA and the Department of Defense to counter fake news, the perpetrators haven’t slowed their efforts. According to recent reports, China has begun applying the tactics it has long used to control social media within its borders to audiences outside the country. Innovations in detection drive innovation by those with malicious intent, and vice versa. That “cat and mouse” game is likely to continue for the foreseeable future.

Despite U.S. government efforts to uncover fake news, citizens still need to be cautious. For all of the good that technology has done, it can be exploited by people and governments that have malicious intent. To defeat them, be skeptical. Check the footnotes. Trust but verify stories that seem incredible.

Disinformation is most effective against those who want to believe it. Be a smart consumer of information.

Ronald J. Rychlak is the Jamie L. Whitten chair in law and government at the University of Mississippi. He is the author of several books, including “Hitler, the War, and the Pope,” “Disinformation” (co-authored with Ion Mihai Pacepa), and “The Persecution and Genocide of Christians in the Middle East” (co-edited with Jane Adolphe).
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.