Tech Giants to Respond to Australian Government’s Proposed Online Safety Laws

By Steve Milne
January 19, 2022

Tech companies will have the opportunity to comment on the Australian Government’s proposed laws that would hold them accountable for online trolling on their platforms.

On Thursday, a committee set up to investigate how to make social media platforms safer for Australians will listen to the views of representatives from Google, TikTok, and Meta, which owns Facebook and Instagram, AAP reported.

This comes as the Australian Government moves to introduce new legislation that would force social media platforms to disclose the identities of online trolls.

If passed, the new legislation would become one of the world’s most robust online trolling laws and would mean social media companies were considered publishers and held liable for defamatory comments posted on their platforms.

However, the companies can avoid liability if they reveal the identity of individuals accused of defamation. This would allow legal proceedings to commence against the individual responsible for the trolling instead of the company.

Google Australia’s representative Samantha Yorke said that the tech giant is not waiting for such laws to be passed but is already taking the initiative to deal with harmful and illegal content on its platforms.

“Our approach to information quality and content moderation aims to strike a balance between ensuring that people have access to the information they need while also doing our best to protect against harmful content online,” she said.

Meanwhile, TikTok said it took down 81 million videos from its platform between April and June 2021 due to guideline violations.

Of those videos, 93 percent were taken down within 24 hours of posting, 94.1 percent before someone reported them, and 87.5 percent before anyone had viewed them.

In Meta’s submission, the company said it has reduced the prevalence of hate speech on its platform by over 50 percent in the past year and has been successful in detecting over 99 percent of seriously harmful content.

However, criminologist Michael Salter told the committee on Tuesday that tech companies need to be more transparent in reporting harassment and child abuse on their platforms, as well as accepting they have a duty of care for their users, AAP reported.

“Far too often what we’re provided from social media company reports on these issues … is statistics that are most friendly to them,” Salter said.

“Having basic safety expectations built into platforms from the get-go is not too much to expect from an online service provider,” he said.

Reset Australia—the Australian branch of a think tank focusing on digital threats to democracy—has said that harmful and sensationalist claims made online are often perpetuated by social media giants’ attention-based business model.

“Social media companies promote, amplify and profit from hate—catching trolls won’t end online hate,” Executive Director of Reset Australia Chris Cooper said.

“Holding social media companies responsible for coughing up the identity of individuals does not hold the platforms accountable for their profit-making amplification that enables that content to go viral.”

Cooper also pointed out that the Government’s proposed legislation would remove the shield of anonymity relied on by individuals, such as whistleblowers, who speak out against the government or its officials.

“Online anonymity does protect trolls from accountability, but it also is an important tenet of a free and open internet that protects critics of the powerful, which can hold leaders accountable,” Cooper said.