Google reported to the Australian federal government that it has removed around 800,000 YouTube videos concerning COVID-19 and blocked 275 million COVID-19-related ads across its platforms as part of its AU $1 billion (US $726 million) global campaign to “counter COVID-19 misinformation.”
It has also launched a US $3 million fund to wipe out alleged vaccine misinformation.
Google regards official government information from national health departments or the World Health Organization as reliable sources. Meanwhile, the WHO has noted that information changes over time as the world “learns more about the virus.”
Lucinda Longcroft, Google’s director of government affairs and public policy for Australia and New Zealand, told a Senate committee on foreign interference through social media in July that Google has extensive automated systems and a global network of staff to remove “false or misleading” COVID-19 content “as rapidly as possible.”
This is combined with algorithmic tools to help promote government-approved COVID-19 information and bury “borderline” content, which was defined in a January blog post by YouTube as content that “comes close to—but doesn’t quite cross the line of—violating our Community Guidelines.”
“This has been a broad campaign and activity across our organisation,” Longcroft said. “We have deployed significant resources and developed innovative tools, both human-based and machine-based, to curb harmful information and promote authoritative information.”
During the Senate meeting, Longcroft also confirmed that Google had “engaged very closely” with the Australian government, giving it AU $3.6 million worth of free advertising, resulting in 20.6 million impressions of government-approved COVID-19 information for Australian users.
Big Tech Censorship Concerns
Big tech’s efforts to remove public discourse around COVID-19 from online platforms have raised the ire of some researchers working to understand the CCP (Chinese Communist Party) virus, who have voiced concerns that big tech is stifling scientific debate around the pandemic, calling it censorship.
For example, in June, YouTube removed a video in which Stanford professor of medicine John Ioannidis discussed data related to COVID-19 and the negative impacts of the ongoing lockdown. Despite numerous challenges to the censorship, YouTube did not reveal which part of Ioannidis’s interview it construed as misinformation.
In May, Facebook deleted a post linking to a peer-reviewed Lancet article, which reported that SARS-CoV-2 spreads by airborne transmission. The article had criticised a claim made by a WHO-funded review that there were no firm conclusions to be drawn about airborne transmission.
Authors of the Lancet article included world-renowned experts on aerosols, including American scientist Kimberly Prather and the highly cited aerosol researcher Jose-Luis Jimenez from the University of Colorado.
“We absolutely recognise that measuring misinformation is a real challenge,” Facebook’s head of policy in Australia, Josh Machin, told the Senate last month.
“First, because people’s views on whether a post on Facebook is misinformation or not can vary, and also because, particularly since the pandemic began last year, we’ve had to really rapidly scale up our policies and continue to consult with experts, and they have been shifting.”
Machin disclosed that Facebook had removed 18 million posts containing “harmful misinformation” about COVID-19 and vaccines, attached “false” labels to 167 million posts on these topics, and collaborated with 80 fact-checkers around the world.
But while social media companies usually turn to the WHO, local health officials, and governments for authoritative information, “this does not imply that they are unerring,” wrote Swedish bioethics researcher Emilia Niemiec in a scientific report last year.
Niemiec argued that because knowledge about COVID-19 is “currently limited and unsettled,” the medical community is still debating “various topics,” such as the lockdown policies and vaccines.
She also noted that while censorship on social media might seem like an “effective and immediate” solution to the problem of misinformation, it can also limit the sharing of constructive critique of current evidence and opinions.
These types of information, the medical researcher pointed out, are “necessary” to identify and correct potential errors, as well as to further the understanding of complex issues surrounding the pandemic.
“A major question regarding the policies of the communication platforms is who exactly defines … which information is deemed to be false or harmful? And can we rely on these judgements?” Niemiec asked.
She added that if the “exclusive authority” to define what is scientifically proven or medically substantiated is left to social media providers or certain institutions, there is potential for errors and miscalculations, or even the potential abuse of this power to “foster political, commercial, or other interests.”
“The censorship is not based solely on science,” the researcher added. “An analysis of content banned on social networks suggests that the moderation is often politically biased.
“If we add to this the fact that Google is the most popular search engine, it becomes clear that a few tech companies have huge power over what information Internet users can see and how their views are shaped.”
Australian Senator Malcolm Roberts asked in the Senate on Aug. 11 whether there was a potential “conflict of interest” in giving Google the final say over how COVID-19 vaccine information is screened and approved.
Roberts noted that Google and YouTube’s parent company, Alphabet, owns 12 percent of Vaccitech Ltd. through its venture capital fund GV (formerly Google Ventures). Vaccitech is a UK-based biotechnology company that co-invented the AstraZeneca vaccine.
The Epoch Times contacted Alphabet, Vaccitech, and Google for comment but did not receive a response.