Senators Want DHS to Study Deepfake Videos

Sen. Cory Gardner (R-Colo.) in the Capitol in Washington on March 28, 2019. (Zach Gibson/Getty Images)
Mark Tapscott
7/1/2019 | Updated: 7/1/2019

WASHINGTON—A bipartisan group of seven senators and four members of the House of Representatives introduced legislation on June 28 directing the Department of Homeland Security (DHS) to conduct an annual study of so-called deepfake videos and to recommend the statutory and regulatory reforms needed to deal with them.

“Deepfakes” are videos of actual events that have been altered with artificial intelligence (AI) tools so that they appear to communicate or depict something other than what actually occurred.

“Artificial intelligence presents enormous opportunities for improving the world around us but also poses serious challenges,” Sen. Cory Gardner (R-Colo.) said in a joint statement with three Senate colleagues, in announcing the introduction of the Deepfakes Report Act of 2019.

“Deepfakes can be used to manipulate reality and spread misinformation quickly. In an era where we have more information available at our fingertips than ever, we have to be vigilant about making sure that information is reliable and true in whichever form it takes,” Gardner said.

Joining Gardner as Senate co-sponsors are Sens. Rob Portman (R-Ohio), Martin Heinrich (D-N.M.), Brian Schatz (D-Hawaii), Joni Ernst (R-Iowa), Gary Peters (D-Mich.), and Mike Rounds (R-S.D.).

The four House co-sponsors are Reps. Derek Kilmer (D-Wash.), Peter King (R-N.Y.), Will Hurd (R-Texas), and Stephanie Murphy (D-Fla.).

“Deepfake technology is an example of how AI can be used in ways that can be damaging to our society and our democracy,” Heinrich said in the statement.

“Any policy response needs to distinguish carefully between legitimate, protected speech and content that is intended to spread disinformation. This legislation will help increase awareness of deepfake technology and is a necessary first step toward determining how to address this growing threat.”

Portman also acknowledged that countering deepfakes can involve conflicts with civil liberties, saying in the statement, “Addressing the challenges posed by deepfakes will require policymakers to grapple with important questions related to civil liberties and privacy.

“This bill prepares our country to answer those questions and address concerns by ensuring we have a sound understanding of this issue. As concerns with deepfakes grow by the day, I urge my colleagues to swiftly pass this bipartisan legislation.”

Gardner, Portman, and Heinrich are co-founders of the Senate Artificial Intelligence Caucus, while Ernst, Schatz, Peters, and Rounds are members.

Among the House co-sponsors, Murphy, a former Department of Defense national security specialist, said, “Deepfake technology has the potential to be used by bad actors to sow chaos in our society and undermine our democratic process.

“That’s why Congress needs to be properly informed about the national security threats posed by this emerging technology, and the best way to stop them. We cannot allow our enemies to use these tools to threaten our nation’s security and democracy.”

Federal officials haven’t been ignoring the emerging threat of deepfake videos specifically or of AI more generally.

The Defense Advanced Research Projects Agency (DARPA) has been working for more than a year on its “Media Forensics” (MediFor) program.

The MediFor effort “brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.”

DARPA said that if the MediFor program succeeds, it will “automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”

MediFor is overseen by Dr. Matt Turek, a program manager in DARPA’s Information Innovation Office and a co-inventor on 14 patents.

How much DARPA has spent on the MediFor program overall is classified, but it’s known to be part of the Information Analytics portion of the agency’s Tactical Technology research initiative.

The MediFor program received nearly $21 million in 2018 and $17.5 million in 2019, and is projected to receive $5.3 million in 2020 under President Donald Trump’s proposed budget.

The decrease is explained by DARPA as “the result of development work ramping down, and the focus shifting to testing media integrity assessment techniques and platforms in collaboration with transition partners.”

Deepfakes also pose a serious problem for news organizations, which need to be able to tell when a source has handed them an altered video while claiming it is legitimate.

“In an effort to prevent questionable clips from duping reporters, Reuters created its own deepfakes as a training exercise to see if journalists could tell they weren’t real,” Politico reported June 25.

“The Wall Street Journal’s ethics & standards, and research & development teams launched a committee last fall to tackle the problem of doctored video, studying forensic technologies for identifying fakes, and asking journalists to flag suspicious content,” Politico added.

Contact Mark Tapscott at [email protected]
Mark Tapscott is an award-winning investigative editor and reporter who covers Congress, national politics, and policy for The Epoch Times. Mark was admitted to the National Freedom of Information Act (FOIA) Hall of Fame in 2006 and he was named Journalist of the Year by CPAC in 2008. He was a consulting editor on the Colorado Springs Gazette’s Pulitzer Prize-winning series “Other Than Honorable” in 2014.