Group Calls on Government to Reconsider Its ‘Fundamentally Flawed’ Approach to Combating Harmful Online Content
The logos of mobile apps Instagram, Snapchat, Twitter, Facebook, Google, and Messenger displayed on a tablet on Oct. 1, 2019. (AFP via Getty Images/Denis Charlet)
Isaac Teo
Oct. 2, 2021 | Updated: Oct. 6, 2021

An expansive federal proposal to combat harmful online content is not only “fundamentally flawed” but would also violate Canadians’ freedom of expression and privacy rights, warn internet law experts, who are calling on the Liberals to overhaul their approach.

The main problem with the so-called “online harms” proposal lies in its content-filtering and website-blocking requirements, which endanger the “survival of a free and open internet in Canada and beyond,” reads a submission to the Department of Canadian Heritage by the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa’s Faculty of Law.

“In an effort to combat hate speech and other ills, the proposed law threatens the free expression and privacy rights of the very equality-seeking communities that it seeks to protect,” Yuan Stevens and Vivek Krishnamurthy said in the CIPPIC’s Sept. 28 submission.
The CIPPIC is calling on the government to overhaul its approach to regulating online platforms “from the ground up” in order to address the problems caused by harmful online content.

“The online harms proposal combines some of the worst elements of other laws around the world,” the experts say in their submission.

“We are seriously concerned about numerous elements of the proposed law—such as the lack of adequate transparency requirements, the loosened requirements for the Canadian Security Intelligence Service (CSIS) to obtain basic subscriber information, the various jurisdictional issues raised by the law, and whether an administrative body like the Digital Recourse Council should be able to determine what speech is legal under Canadian law.”

In July, Canadian Heritage launched a public consultation to gather feedback on the proposed law, which it said “will be part of an overall strategy to combat hate speech and other harms.”

“The government aims to present a new legislative and regulatory framework this fall, with rules to make social media platforms and other online services more accountable and transparent in combatting harmful online content,” said a Canadian Heritage press release upon announcing the public consultation, which ended Sept. 25, shortly after the election.

Specifically, the proposed law would target online posts in five categories: terrorist content, content that incites violence, hate speech, non-consensual sharing of intimate images, and child sexual exploitation content.

It’s one of three pieces of controversial legislation related to internet regulation crafted by the Liberals.

The previously introduced Bill C-10 would require social media platforms and internet streaming companies to make financial contributions to support Canadian content, while Bill C-36 would allow individuals to file a complaint with the Canadian Human Rights Commission if they experience “hate” online. Alongside these two bills, the online harms proposal aims to “combat hate speech and other harms” and is expected to be introduced by the Liberals when Parliament resumes.

The CIPPIC said it finds the scope of the strategy concerning, particularly the stipulation that “platforms block unlawful content within 24 hours of being flagged, as well as alarming requirements for online service providers to proactively monitor and filter content as well as report information on users to law enforcement.”

The group argues that the 24-hour blocking requirement will push social media platforms to remove content overzealously, including vast amounts of lawful content, in order to avoid the risk of liability under the proposed legislation.

Minister of Canadian Heritage Steven Guilbeault speaks with the media in the foyer of the House of Commons on Feb. 3, 2020. (The Canadian Press/Adrian Wyld)

‘Massive New Bureaucratic Super-Structure’

Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, said in his submission that one of the fundamental problems in the government’s approach is to treat the five categories of harmful content as “equivalent and requiring the same legislative and regulatory response.”

“It makes no sense to treat online hate as the equivalent of child pornography,” he wrote. “By prescribing the same approach for all these forms of content, the efficacy of the policy is called into question.”

Geist said the proposed approach “envisions a massive new bureaucratic super-structure to oversee online harms and Internet-based services” that would be unwieldy and could jeopardize due process.

“For example, adjudicating over potentially tens of thousands of content cases is unworkable and would require massive resources with real questions about the appropriate oversight. Similarly, the powers associated with investigations are enormously problematic with serious implications for freedom of the press and freedom of expression.”

The online harms proposal would also require online service providers to report some kinds of content to the RCMP and the Canadian Security Intelligence Service. The CIPPIC said that such reporting requirements, combined with the proactive monitoring obligations, “pose an unacceptable risk to the privacy rights of Canadians.”

“Such measures should have no place in the laws of a free and democratic society,” it said.

In its Sept. 25 submission to Canadian Heritage, the Citizen Lab at the University of Toronto’s Munk School of Global Affairs and Public Policy said the scope of the proposal is “overbroad and incoherent,” as the five categories of harmful content have little in common other than that they are illegal.

“In our view, any legislative scheme that purports to unite all of these disparate kinds of content under a single framework is incoherent, counterproductive, and constitutionally untenable,” said Citizen Lab, whose research includes the areas of communication technologies, human rights, and global security.

“In truth, the categories are united by almost nothing—constitutionally, factually, practically, or ethically—other than the proposed remedy of content removal.”