The Supreme Court unanimously sided with Twitter, Google, and Facebook, finding in a pair of decisions on May 18 that the Silicon Valley giants are shielded from liability for content posted by users.
The lawsuits arose after deadly Islamic terrorist attacks overseas. Victims’ families argued that the Big Tech companies were liable because they allowed terrorist videos to be posted online or failed to do enough to police the terrorist accounts posting the videos.
Big Tech and its supporters had been deeply concerned that the court could eviscerate Section 230 of the federal Communications Decency Act of 1996, which generally prevents internet platforms and internet service providers from being held liable for what users say on them. They say the legal provision, sometimes called “the 26 words that created the internet,” has fostered a climate online in which free speech has flourished.
Both President Joe Biden and former President Donald Trump have attacked Section 230, calling for it to be repealed, but in the twin rulings, the Supreme Court sidestepped the Section 230 issue, much to the relief of the tech companies.
During oral arguments in February, Chief Justice John Roberts said that despite any algorithm YouTube may use to push users to view videos, the company is “still not responsible for the content of the videos ... or text that is transmitted.”
Justice Elena Kagan told a lawyer for one of the families, “I can imagine a world where you’re right that none of this stuff gets protection.”
“And, you know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? A little bit unclear,” she said.
“On the other hand, I mean, we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” Kagan said at the time.
Twitter asked the court to review a lower court ruling in favor of the family of Nawras Alassaf, a Jordanian national killed in an ISIS terrorist attack on an Istanbul nightclub. The company argued that it shouldn’t be held responsible for acts of international terrorism merely because the group used its platform. Alassaf’s family claimed that social media platforms didn’t do enough to take down ISIS videos.
Justice Clarence Thomas wrote that the plaintiffs sought to hold Twitter, Facebook, and Google “liable for the terrorist attack that allegedly injured them,” but the court concluded that “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”
The connection between the online platforms and the nightclub attack was “far removed,” he wrote.
“The allegations plaintiffs make here are not the type of pervasive, systemic, and culpable assistance to a series of terrorist activities that could be described as aiding and abetting each terrorist act by ISIS,” Thomas wrote.
In the Twitter case, the Supreme Court reversed the decision of the U.S. Court of Appeals for the 9th Circuit.
The second case goes back to 2015, when Nohemi Gonzalez, a 23-year-old student and U.S. citizen, was killed in an ISIS attack in Paris. The killing was part of a larger series of attacks the terrorist group carried out in that city that led to 129 deaths. Her family sued, claiming that Google, owner of YouTube, was liable under the federal Anti-Terrorism Act for aiding ISIS recruitment efforts by allegedly using algorithms to steer users to ISIS videos.
The “plaintiffs asserted that Google had knowingly permitted ISIS to post on YouTube hundreds of radicalizing videos inciting violence and recruiting potential supporters to join the ISIS forces then terrorizing a large area of the Middle East, and to conduct terrorist attacks in their home countries,” according to the family’s petition.
Because of the algorithm-based recommendations, the petition said, users “were able to locate other videos and accounts related to ISIS even if they did not know the correct identifier or if the original YouTube account had been replaced.”
Google’s services “played a uniquely essential role in the development of ISIS’s image, its success in recruiting members from around the world, and its ability to carry out attacks,” the petition said. The original complaint filed in the case added that “Google officials were well aware that the company’s services were assisting ISIS.”
The 9th Circuit found that under Section 230, the viewing recommendations were protected by federal law even if the section “shelters more activity than Congress envisioned it would.” Google denied liability, saying that it’s impossible for it to review every video that gets posted to YouTube, which accepts more than 500 hours of new content every minute.
But the Supreme Court found it was unnecessary “to address the application of [Section] 230 to a complaint that appears to state little, if any, plausible claim for relief.” The justices set aside a ruling by the 9th Circuit and returned the case to that court for reconsideration “in light of our decision in Twitter.”
Google general counsel Halimah DeLaine Prado welcomed the Supreme Court’s decisions.
“Countless companies, scholars, content creators, and civil society organizations who joined with us in this case will be reassured by this result,” she said in a statement.
“We’ll continue our work to safeguard free expression online, combat harmful content, and support businesses and creators who benefit from the internet.”