Over the past couple of years, disinformation has gone from an obscure propaganda concept to a common household term. It refers to a form of deception meant to make false claims appear real and to shape conclusions about events and information.
As social media platforms have looked for ways to fight disinformation, new concerns have emerged about how to distinguish truths from falsehoods and determine which interpretations of news are legitimate.
This has led to many online users being censored or labeled as “bots” for expressing ideas outside common narratives. Conservatives have been greater targets in this new push by platforms such as Twitter.
According to Dan Brahmy, CEO of Cyabra, an Israel-based company that works to identify disinformation, the solution to identifying truths among falsehoods may come down to a fundamental shift in approach.
“We understand that disinformation, most of the time, is spread by fake identities, either to negatively temper public discourse or to target public discourse,” he said.
Many companies that are working to identify disinformation campaigns are looking at the content being spread, and this has led social media platforms to list news websites as being legitimate or illegitimate. This has likewise caused grievances over politically motivated censorship.
Brahmy has taken a different approach, however. Rather than focus on the content, Cyabra looks for signs of whether or not the accounts spreading disinformation are real.
“We are not working with content,” Brahmy said. Instead, his company’s approach is looking at behavioral patterns.
“In order for disinformation to spread out that fast, you have to have, in the vast majority of these events, online fraudulent identities—fake people running something,” he said.
When many disinformation campaigns spread online, they follow a common pattern. A “puppetmaster” account begins spreading a certain narrative or story, and legions of often automated accounts then spread that narrative or story as widely as possible. As the disinformation spreads, legitimate users begin to see it and likewise begin to spread it.
Some companies are looking at the stories themselves. Others are trying to categorize the whole system, and sometimes legitimate users are labeled as bots for being duped. For Cyabra, the focus is on the puppetmasters.
Rather than simply list every account that sets a disinformation campaign in motion, Cyabra takes several steps to verify that the accounts it flags are actually acting maliciously.
According to Brahmy, most of the actual puppetmaster accounts are “sock puppets” or “avatars.” They are fake identities being propped up by a person who may be running several similar accounts, and when these individuals create these fake identities, there are often key issues that make their illegitimate nature identifiable.
Brahmy said that “people believe what they see,” and holders of fake accounts will change data to create illusions.
A real person will have many different groups of friends: childhood friends, school friends, college friends, friends from work, and other social groups. Brahmy said it is very difficult for a puppetmaster to falsify friend networks such as this, and it becomes even more so when one attempts to establish multiple sock puppets using the same name across different platforms.
Another key trait is that sock puppet accounts are rarely more than a few months old. The puppetmaster will attempt to make them appear legitimate by adding what appear to be personalized posts and comments. The mistake the puppetmasters often make, however, is that they will also change dates and alter other information to make the accounts appear to be older than they are.
“It’s about finding people who are aggressively trying to falsify history and falsify surroundings,” he said. “Do you know how tough it is for a person who just started to exist four or five weeks ago to falsify surroundings?”
Although Cyabra’s main aim is to help identify fake accounts that spread disinformation, it is possible that some companies will use it, and similar emerging technologies, to censor users.
Brahmy is realistic about the situation but noted it is up to the companies to determine how they use the technology. His approach, however, shifts the focus away from content and news websites and toward directly identifying fake accounts.