Politicians Risk AI Dependency

President Joe Biden (L) and California Gov. Gavin Newsom take part in an event discussing the opportunities and risks of artificial intelligence at the Fairmont Hotel in San Francisco, Calif., on June 20, 2023. (Andrew Caballero-Reynolds/AFP via Getty Images)
Anders Corr
6/28/2023
Updated: 6/30/2023
Commentary

Generative artificial intelligence (AI) is becoming more common, including in politics. Images of Donald Trump getting violently arrested in New York or of the Pope wearing a puffy white coat are deepfakes that many believed, or wanted to believe, when they went viral.

On June 26, former Google CEO and Alphabet executive chairman Eric Schmidt warned of AI’s capability to spread misinformation, telling a CNBC audience that “the 2024 elections are going to be a mess because social media is not protecting us from false generative AI.”

Schmidt noted that social media companies like Alphabet, Twitter, and Meta cut thousands of jobs devoted to content moderation. Those roles might have helped ferret out AI deepfakes in the 2024 elections.

There is already plenty of such fake electioneering.

In Toronto, a mayoral candidate posted an AI-generated photo of a woman that looked realistic other than her third arm. The same candidate published fake dystopian landscapes of the city.

With AI, technologists on political payrolls can produce hyperreal, color-enhanced images that play to public fears. Voters who weren’t scared before will be now, and more likely to vote for the politician who best caters to those fears through bespoke, microtargeted messaging.

The risk is that electioneering, including by authoritarians who use deepfakes in sham elections to legitimize their power, becomes yet more untethered from reality. Authoritarian politics is already divorced from reality: populations are spoon-fed censorship and disinformation, while underlings in countries like Russia and China, in turn, feed dictators what they want to hear. With AI, authoritarians will enjoy yet more security of tenure despite a rosier-than-justified view of their own power.

A man watches an artificial intelligence (AI) news anchor from a state-controlled news broadcaster on his computer in Beijing on Nov. 9, 2018. (Nicolas Asfouri/AFP via Getty Images)

Democracies are at least a little better. Voters provide leaders with some semblance of reality through open contestation and the wisdom of crowds. Democracies become demagoguery, however, when those crowds are manipulated with too much false information, either of the too-rosy or too-dire kind.

AI hands the technology necessary for demagoguery to the demagogue, by making deepfakes of whatever type—imagery, text, video, and voice—look and sound so real as to be believable.

“In Chicago, the runner-up in the mayoral vote in April complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality,” write Tiffany Hsu and Steven Lee Myers in The New York Times.

Shane Goldmacher, also in The New York Times, writes that “Republican and Democratic engineers alike are racing to develop tools to harness A.I. to make advertising more efficient, to engage in predictive analysis of public behavior, to write more and more personalized copy and to discover new patterns in mountains of voter data.”

In political experiments run by the Democratic National Committee, AI-generated texts perform at least as well as those generated by humans. Higher Ground Labs, which invests in technologies to support progressive politics, has an AI system called Quiller that simultaneously writes, sends, and tests fundraising emails.

“The Late Show with Stephen Colbert” used AI to fake the voice of Tucker Carlson, and the result sounds convincingly real.

Pod Save America produced a deepfake of President Joe Biden’s voice, saying things Biden would never say (and some things he might).

Goldmacher interviewed political operatives with concerns that “bad actors” could use AI to waste opposing-campaign staff time by taking on the persona of potential voters, producing deepfakes of their candidate providing personalized videos to supporters, or faking voice messages by the opposing candidate for delivery to voters the day before the election.

Unless the political use of AI is regulated, the technology could become so ubiquitous and powerful as to compromise the informed electorate upon which real democracy depends. At that point, elected officials will depend on AI for their own success and will be unlikely to change the overly permissive election laws that got them elected in the first place. Why bite the AI hand that feeds them? We could be stuck with AI politics forever, just as we are with overly permissive campaign finance laws.

Some political experts and election consultants are now calling for the imposition of regulations to stop the AI generation of synthetic images for political ads.

OpenAI, the creator of ChatGPT, bans the generation of high-volume campaign materials.

However, this self-regulation is less helpful than it sounds. Contrary to its name, OpenAI does not release its models as open source. Numerous open-source alternatives exist that any political campaign (or terrorist organization, for that matter) can download and adapt to its own purposes.

Regulation of AI is urgently needed when it comes to politics—we have enough deceptive electioneering as it is. We don’t need more and better deepfakes upon which our politicians will depend. We need the opposite.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Anders Corr has a bachelor’s/master’s in political science from Yale University (2001) and a doctorate in government from Harvard University (2008). He is a principal at Corr Analytics Inc., publisher of the Journal of Political Risk, and has conducted extensive research in North America, Europe, and Asia. His latest books are “The Concentration of Power: Institutionalization, Hierarchy, and Hegemony” (2021) and “Great Powers, Grand Strategies: The New Game in the South China Sea” (2018).