Harvard Study: Chinese Censors Fear Group Action

By Cassie Ryan
May 10, 2013 Updated: December 15, 2013

A new computer analysis of the Chinese Communist Party’s censorship habits points to a strategy underpinned by fear of losing control over the masses, rather than merely being criticized.

In China, the purpose of social media censorship, regardless of content, is to immobilize “collective action by clipping social ties,” according to a new study by Gary King and colleagues at Harvard discussed in American Political Science Review.

At around 270 million users, based on official figures, Weibo microblogs are China’s most popular social media service, roughly equivalent to Twitter. However, providers must maintain a Party-directed internal censorship team that filters for sensitive keywords.

Sina and Tencent are the biggest Weibo platforms, with teams believed to number as many as 1,000 censors. Augmented with automation tools, these online block monitors can extinguish inharmonious voices within minutes, another recent study showed.

By tracking 85 keyword topics in millions of posts on nearly 1,400 social media services before some were deleted, the Harvard scholars discovered that spikes in censorship tended to presage action by authorities outside the world of weblogs.

“Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored,” write King and co-authors Jennifer Pan and Margaret E. Roberts.

“Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future—and, as such, seem to clearly expose government intent.”

The team found that around 13 percent of all social media posts are censored, a figure that is fairly stable over time; posting volume, however, fluctuates widely, as does the censorship that counters it. When the data were organized around these volume bursts and correlated with real-world events, a distinct pattern emerged.

A non-political censorship episode was one of the giveaways. After the nuclear meltdown in Fukushima, iodine was rumored to protect against radiation exposure, triggering a rush on salt in Zhejiang Province. A clampdown followed, even though the rumor was false and unrelated to the Communist Party. It did, however, spark collective action by people outside of local officialdom.

The evidence strongly suggests that extreme censorship revolves around group formation, and this can escalate as the perceived threat increases. This theory matches the “arms race” of events that took place over the weekend around the May 4 anniversary of the Sichuan environmental protest.

From April 29, after search terms like “May 4” and “Pengzhou petrochemicals” were blocked online, locals spread the word about demonstrating by word of mouth and text message. The stability maintenance gears ground higher and higher, passing through a door-to-door leafleting campaign and ending with local officials altering the calendar, forcing potential protesters to attend work and school on Saturday.

King et al. conclude that virtual criticism of local and central authorities may embarrass officials, but the Communist Party realizes that “looking bad does not threaten their hold on power, so long as they manage to eliminate discussions associated with events that have collective action potential—where a locus of power and control, other than the government, influences the behaviors of masses of Chinese people.”

“With respect to this type of speech, the Chinese people are individually free but collectively in chains.”

This strategy of information control allows a few sparks to smolder, but forcefully extinguishes the campfires before they can join together and threaten the Party’s control over the population.
