Big Tech Should Be Responsible for Tackling Online Abuse, Advocate Says

A multibillion-pound legal claim has been launched against Facebook in the UK, accusing the technology giant of abusing its market dominance (Niall Carson/PA)
Updated: 1/18/2022

An advocate says Big Tech should take responsibility for curbing abusive content on social media, as the companies “make a heap of money” out of their sites.

The comments came after the federal government proposed anti-trolling legislation that would force social media platforms to take down offending posts and, in some circumstances, provide the identity of anonymous posters. If a social media company refuses to reveal the real identity of the accused user, it would be held liable for the defamatory comments.

Anti-trolling campaigner and journalist Erin Molan on Tuesday said people can “get absolutely annihilated and torn to shreds” by anonymous trolls, yet seeking help from the social media platforms themselves or law enforcement is “almost impossible.”

“The legislation just hasn’t existed until now… Social media is essentially a protected species in that space and have been for a very long time,” she told the Social Media and Online Safety Committee.

“They make a heap of money out of these platforms, an enormous amount of money… with that comes responsibility.”

Anti-trolling campaigner Erin Molan tells federal MPs how to make social media safer on Jan. 18, 2022. (screenshot)

She recounted some of the “horrific” abuse that made her fear for her life and her young daughter’s safety, adding that the social media giant’s response failed her.

“[On] Facebook, I reported some horrific messages from an account, and the account kept being recreated. I would block it, it would be recreated… That was about trying to kill my child within my stomach.”

“And they (Facebook) came back and said that it didn’t meet the threshold for inappropriate behaviour… If that does not meet your thresholds, what the hell is your threshold because that is appalling.”

“You feel like you’re banging your head against a brick wall as you look at their business model. Advertising is the biggest thing for them... They’d love one person to have 8000 accounts because it gives them more people to sell to advertisers.”

Criminologist Michael Salter told the committee Molan’s experience of reporting abuse to social media companies was common among victims.

“We’re asking for transparency because far too often what we’re provided from social media company reports on these issues ... is statistics that are most friendly to them,” he said.

“Having basic safety expectations built into platforms from the get-go is not too much to expect from an online service provider.”

Meanwhile, child safety advocate Sonya Ryan said many social media platforms were unwilling to cooperate with law enforcement as there is “more focus on privacy than there is on the protection and safety of young people.”

In its submission, Twitter said it recognised the need to “balance tackling harm with protecting a free and secure open internet.” But it also warned that any hasty policy decisions or rushed legal regimes would lead to consequences that “stretch far beyond today’s headlines, and are bigger than any single company.”

Meanwhile, Meta (Facebook) said it has cut the prevalence of “hate speech” content by more than half within the past year and is proactively detecting more than 99 percent of content considered “seriously harmful.”

TikTok noted that between April and June 2021, more than 81 million videos were taken off the platform for violating its guidelines.

Of those videos, TikTok said it identified and removed 93 percent within 24 hours of posting, 94.1 percent before a user reported them, and 87.5 percent with zero views.

AAP contributed to this report.