Social media giant TikTok approved most political advertisements containing false and misleading information about U.S. elections despite assurances from the company that it had a robust mechanism for detecting such content, a new report has found.
TikTok approved 90 percent of advertisements that featured misleading and outright false information about the upcoming 2022 U.S. midterm elections, according to a joint report by nonprofit Global Witness and the Cybersecurity for Democracy (C4D) team at New York University.
The report’s results were based on an experiment conducted to determine how well social media platforms live up to their promises to stop disinformation capable of destabilizing democratic processes.
Researchers submitted 20 phony ads containing misleading or false claims to multiple platforms, in both English and Spanish, targeting audiences in battleground states including Arizona, Colorado, and Georgia.
TikTok bans political advertising outright, yet it approved ads containing patently false claims: that voting days would be extended, that votes cast in primaries would automatically be counted in the midterms, and that social media accounts could be used for voter verification.
TikTok also approved ads that dismissed the integrity of the election, suggested the results could be hacked or were otherwise already pre-decided, and discouraged voters from turning out.
“It is high time they got their houses in order and started properly resourcing the detection and prevention of disinformation before it’s too late,” the report’s authors wrote. “Our democracy rests on their willingness to act.”
The report follows two other analyses published in recent weeks, which found that China-based actors were using social media to spread disinformation ahead of the midterm elections.
In both cases, it appeared that the China-based sources of the disinformation sought to increase polarization and sow discord by posting intentionally inflammatory or false information online.
The Global Witness-NYU report also targeted Meta-owned Facebook and Google-owned YouTube in its experiment. While YouTube successfully weeded out all of the bad ads and suspended the dummy account that posted them, Facebook let some 20 percent of the English language ads and 50 percent of the Spanish ads pass.
The authors of the experiment said such results could have had real consequences for democratic processes had the ads been allowed to spread.
“So much of the public conversation about elections happens now on Facebook, YouTube, and TikTok,” said Damon McCoy, co-director of NYU’s Cybersecurity for Democracy team. “Disinformation has a major impact on our elections, core to our democratic system.”
For its part, TikTok reaffirmed that its policies prohibit political advertising and content containing election misinformation. The company also said that all advertising content passes through multiple levels of verification before receiving approval.
“TikTok is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform,” a TikTok spokesperson said in an email.
“We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”
The Epoch Times has requested comment from Meta.