Government Expert Panel Suggests Regulating Private Communications Through ‘Online Harms’ Legislation

Minister of Canadian Heritage Pablo Rodriguez rises during question period in the House of Commons on Parliament Hill in Ottawa on June 16, 2022. (Justin Tang/The Canadian Press)
Noé Chartier
7/8/2022 | Updated: 7/13/2022

Many experts on a panel handpicked by the federal government to lay the groundwork for a future “online harms reduction” bill say private communications should fall under the framework, a Heritage Canada document indicates.

“Some experts highlighted that a lot of times a high level of harmful content, such as terrorist content or child pornography, are shared in private communications instead of on public forums—and that excluding these types of communications would leave a lot of harmful content on the table,” said a summary of the first session held by the expert panel, on April 14.

“Many experts supported the notion that private communications should be included under the scope of the legislative framework.”

Heritage Minister Pablo Rodriguez announced in March that 12 experts would be holding discussions, also attended by bureaucrats from different agencies, to provide advice on the drafting of an internet content regulation bill.

Ten sessions were held from April to June, and government-provided summaries have been posted online.

While the experts pointed to terrorist content and child exploitation as material that needs to be countered in private communications, they said at their eighth session, on June 3, that “disinformation” is “one of the most pressing and harmful forms of malicious behaviour online.”

While calling for the government to tackle “disinformation,” the experts nevertheless said the issue would be hard to define in legislation. They also said the government should not be deciding what’s true or false.

As for regulating private communications, the experts suggested at their first session, on April 14, that platforms use tools that would “mitigate the risk before it emerges” or have reporting mechanisms to address “harmful content.”

“In this way, regulations wouldn’t need to impose a proactive monitoring obligation on platforms to monitor private communications to mitigate harms,” the panel said.

‘Legal yet Harmful’

Freedom of expression is protected in Canada, and hate speech is prohibited under existing laws, but the panel explored how to counter content online that could be lawful yet deemed “harmful.”

“Some experts asserted that a balance would need to be struck between preserving charter rights while also addressing legal yet harmful content,” reads the summary of the April 8 introductory workshop.

“It was also stated that lawful but harmful content cannot legally be banned but could be regulated by means other than take-down measures.”

Some experts argued that the law should be left ambiguous to incentivize platforms to “[do] more to comply” in regulating content, while others countered that such ambiguity would give platforms too much leeway.

Despite disagreements, Heritage Canada said there was a consensus among the experts that a regulatory regime is needed to tackle “harmful content” online.

The experts said the way the government communicates its efforts to regulate content is “important” because “such a framework has the potential to contribute to, erode, or reinforce the public’s faith in Government and other democratic institutions.”

Regarding platforms’ compliance with regulation, the experts said at their second session, on April 21, that “public shaming or profit incentives” would be “key to a successful framework.”

Politicization

A previous analysis of the 12 panel experts showed they mostly share the government’s stance on issues such as COVID-19 measures, with several having advocated for vaccine mandates, labelled alternative viewpoints as “conspiracies,” and criticized the recent freedom-themed protests.

Some experts warned during the April 21 discussions that any legislation introduced to regulate content “must not be susceptible to misuse by future governments.”

Included in the “range of harmful content” they seek to regulate are “propaganda, false advertising, and misleading political communications.”

The discussion summaries are often steeped in progressive jargon.

“Many experts stated that it would be important to find a way to define harmful content in a way that brings in lived experiences and intersectionality,” said the April 21 summary.

“They explained that a number of harms online are exemplified by issues like colonization and misogyny, and a regulatory framework would need to recognize these factors.”

At the fifth session, on May 13, some experts expressed concern that platforms could become over-policed, “labelling activist content like material from Black Lives Matter campaigns as extremist content.”

Digital Safety Commissioner

Another area of consensus among the experts is the need to create a digital safety commissioner.

At the fourth session, on May 6, they said the commissioner should have powers to audit, inspect, administer financial penalties, and launch investigations.

There was disagreement, though, on the scope of powers that should be afforded to this new position. Some experts said it should have “teeth” in order to force compliance.

The idea of creating a digital safety commissioner had already been proposed by the Liberals last year, and it featured in a technical paper published by Heritage Canada in April.

Complementing the new commissioner position, some experts said there should be a “cyber-judge” to determine the legality of content posted online, arguing that platforms lack the “legitimacy” to make those decisions.