Rohingya refugees filed a $150 billion lawsuit against Facebook over its failure to curb misinformation and hate speech on its platform, which “amounted to a substantial cause, and eventual perpetuation of, the Rohingya genocide” in Burma, also known as Myanmar.
Lawyers in the United Kingdom and the United States launched legal campaigns against Facebook’s parent company, Meta, for the social media giant’s role in facilitating violence against the persecuted Muslim ethnic group in Burma.
The complaint, which was lodged in a California court on Dec. 6, alleged that Facebook has “allowed the dissemination of hateful and dangerous misinformation to continue for years.”
“Facebook has options for moderating its algorithms’ tendency to promote hate speech and misinformation, but it rejects those options because the production of more engaging content takes precedence,” the court document reads.
The same document noted that Facebook arrived in Burma around 2011 and arranged for millions of Burmese to access the Internet for the first time. But it claimed that “Facebook did nothing” to warn users about the dangers of misinformation and fake accounts on its systems, a tactic used by the Burmese military to generate hate speech against the Rohingya.
“Human rights and civil society groups have collected thousands of examples of Facebook posts likening the Rohingya to animals, calling for Rohingya to be killed, describing the Rohingya as foreign invaders, and falsely accusing Rohingya of heinous crimes,” it stated.
The Rohingya have been denied citizenship in the country since a Burmese citizenship law was enacted in 1982. The United Nations said more than 700,000 Rohingya people fled to Bangladesh due to a military crackdown in 2017.
According to a website created for the legal campaign, Rohingya Facebook Claim, more than 10,000 Rohingya individuals have been killed and more than 150,000 have been subjected to physical abuse in Burma.
The law firms organizing the lawsuits noted that the U.K. legal claim would be on behalf of the Rohingya community living anywhere outside of the United States, while a separate U.S. claim would be on behalf of those residing in the U.S.
The claimants have accused Facebook of using algorithms that amplified hate speech against the Rohingya people on its platform and not investing sufficiently in content moderators who spoke the local language or understood the political situation in Burma.
They claimed that the platform has failed to take down posts inciting violence against the Rohingya people and remove accounts used to propagate hate speech or incite violence.
In 2018, Facebook officials acknowledged that the company hadn’t done enough to limit the spread of posts fuelling violence against the Rohingya. Earlier this year, following a military coup and renewed bloodshed in the country, the company pledged to curb the spread of misinformation.
During a Senate hearing in October, Facebook whistleblower Frances Haugen cited ethnic violence in Burma and Ethiopia as examples of the “destructive impact” that the social media platform has had on society.
Testifying before lawmakers, the former Facebook employee suggested a link between Facebook activity and violence in those regions. The platform’s algorithms facilitate hate, Haugen said, putting profit before user safety.
Haugen suggested that to prevent the viral spread of content and misinformation, something she said could fuel repressive actions in such countries, Congress could make changes to Section 230 of the Communications Decency Act, which protects online platforms from being held responsible for content posted by third parties.
Isabel Van Brugen contributed to this report.