Ofcom, boosted with extra powers, has set out its plans for its first 100 days as the UK’s online safety regulator, telling tech firms that they should start preparing now for new online safety rules.
In a statement on Thursday, Ofcom said that it will not “censor online content” as the Bill does not give it powers to moderate or respond to individuals’ complaints about individual pieces of content. It added that it will oblige the “largest and riskiest companies” to be transparent and consistent about how they “treat legal but harmful material” when accessed by adults.
But free speech defenders still have major concerns about censorship issues further down the line, due to Ofcom’s new role.
‘Ought to Be Doing More Censorship’
“So they are not actually censoring individual posts, they’re quite right, but nevertheless what they are in the business of is saying to these providers we think you ought to be doing more censorship,” Andrew Tettenborn, common-law and continental jurisdictions scholar and advisor to the Free Speech Union, told The Epoch Times.
“The internet in Britain will look rather tamer; this will affect only the people who haven’t got round to getting a VPN,” Tettenborn added.
Explaining why it will not moderate individual pieces of content, Ofcom said: “The government recognises, and we agree, that the sheer volume of online content would make that impractical. Rather than focusing on the symptoms of online harm, we will tackle the causes by ensuring companies design their services with safety in mind from the start.”
The upcoming Bill on regulating online spaces is intended to “protect children from harmful content such as pornography and limit people’s exposure to illegal content, while protecting freedom of speech.”
To help it do so, the government announced that it was granting Ofcom new responsibilities and powers, along with a wide range of compliance tools, fines, and sanctions, to become the regulator for online harm. Ofcom already has relevant experience through its role overseeing telecommunications and broadcasting (TV, radio programs, and video on demand).
‘Not Ofcom’s Job to Adjudicate’
Ofcom said it expects the Online Safety Bill to pass by early 2023 at the latest, with its powers coming into force two months later. It released a roadmap telling tech firms to start preparing now for the new online safety rules.
Mark Bunting, Ofcom’s online safety policy director, told The Telegraph on Thursday that it “will not censor online content” and that “I don’t think we should expect the companies to be able to completely eliminate hate speech, at least not without very significant unintended consequences for free speech.”
“It’s not Ofcom’s job to adjudicate on particular items of content,” he said. “Consumers can complain to us, we can bring it to the companies’ attention, but we can’t require them to take individual items of content down.
“The second aspect of not being a censor is in the so-called legal but harmful areas of the regime. It’s really important for services to understand that it’s not Ofcom’s role to dictate what they can and can’t host in terms of legal material,” he added.
“What services have to do is to recognise there can be risks associated with legal content, and to take appropriate steps to consider those risks and then to be clear in their terms of service what action they’ve taken about those risks,” said Bunting.
Victoria Hewson, head of regulatory affairs at the free-market think tank the Institute of Economic Affairs, explored the subject and the risks of unintended consequences of the Bill in a report called “An Unsafe Bill: How the Online Safety Bill threatens free speech, innovation, and privacy.”
“That mantra about this being about systems and processes, rather than individual pieces of content, has really never held up, I think, because how are you going to know if a system or process is working effectively, as the regulator would see it, unless by reference to how it handles individual pieces of content?” she told The Epoch Times.
In her report, Hewson noted that “a likely outcome is that those who are easily offended or who are acting in bad faith will procure removals by claiming material to be intentionally false or psychologically distressing to a ‘likely audience.’
“That places the burden on the platform to remove it or risk non-compliance with its duty, and potential fines and other sanctions from Ofcom,” she added.
Hewson questioned Ofcom’s claim that it will not censor content.
“You might not take regulatory action just because one particular piece of content slips through the net, but they will be judging compliance in the round by reference to individual pieces of content in aggregate, so they will clearly be looking at individual pieces of content to judge the systems and processes,” said Hewson.
“As to whether they would impose fines for an individual piece of content as opposed to a systematic breach, I don’t think that makes too much difference to the incentives to the platform, because the platforms know that if they are systematically allowing quote ‘illegal’ content to be encountered on their platforms, then they will face all the sanctions and liabilities. It does very much come down to making judgments on individual pieces of content,” she added.
The Epoch Times contacted Ofcom for comment.