After multiple sources corroborated the longstanding accusation that Google stealthily infuses its political preferences into its products, the company has continued to claim neutrality, leading its executives to give incongruous answers under lawmakers’ questioning.
A June 24 exposé by Project Veritas showed several Google employees and a cache of internal documents describing methods Google has used to tweak its products to surreptitiously push its users toward a certain worldview.
One employee even appeared to say, when caught on hidden camera, that Google’s goal was preventing President Donald Trump, or anybody like him, from being elected again—an assertion confirmed by another employee who spoke under the condition of anonymity.
Google spokespeople have failed to produce an official response, but two of its executives were questioned about the revelations—one at a June 25 Senate hearing and one at a House hearing the following day.
During the June 26 House Homeland Security Committee hearing, Rep. Debbie Lesko (R-Ariz.) confronted Derek Slater, Google’s global director of information policy, with one of the leaked documents on “algorithmic unfairness” (pdf).
“Imagine that a Google image query for ‘CEOs’ shows predominantly men. Even if it were a factually accurate representation of the world, it would be algorithmic unfairness,” the document says, explaining that in some cases “it may be desirable to consider how we might help society reach a more fair and equitable state, via … product intervention.”
“What does that mean Mr. Slater?” Lesko asked.
“I’m not familiar with the specific slide,” he said. “But I think what we’re getting at there is when we’re designing our products, again, we’re designing for everyone. We have a robust set of guidelines to ensure we’re providing relevant, trustworthy information. We work with a set of Raters around the world, around the country, to make sure those Search Rater Guidelines are followed, those are transparent, available for you to read on the web.”
“All right. Well, I personally don’t think that answered the question at all,” she replied.
Similarly, Maggie Stanphill, Google’s head of Digital Wellbeing, was questioned by Senate Commerce Committee member Ted Cruz (R-Texas) the day before.
He asked whether Stanphill agreed with a quote from one of the leaked documents saying that Google should “intervene for fairness” in its machine-learning algorithms. Stanphill said she didn’t agree with it.
But Google has already put the “fairness” doctrine into practice, based on what the employees and the documents in the Project Veritas report say.
“Our goal is to create a company-wide definition of algorithmic unfairness that … establishes a shared understanding of algorithmic unfairness for use in the development of measurement tool, product policy, incident response, and other internal functions,” says a document last updated in February 2017.
“What they’re really saying about fairness is that they have to manipulate their search results so it gives them the political agenda that they want,” the unidentified insider said.
For instance, typing “men can” followed by a space in the Google search bar brings up suggestions such as “men can have babies,” “men can get pregnant,” and “men can have periods.”
Typing “women can” followed by a space brings up suggestions such as “women can vote,” “women can do anything,” and “women can be drafted.”
This isn’t because these phrases are so popular among users, but because the “fairness” algorithm pulled them from so-called “sources of truth”—they reflect the political narrative Google desires, the insider said.
Moreover, Google has adopted the doctrine while keeping its users in the dark, he said. One of the documents says “it is not a goal at this time to release this definition [of algorithmic unfairness] externally.”
Google and other tech platforms, including Facebook and Twitter, have publicly endorsed a model of content policing that reflects certain political leanings.
Moreover, the concept is so subjective it’s impossible to enforce fairly and impartially, said Nadine Strossen, a law professor and former president of the American Civil Liberties Union.
“Even if we have content moderation that is enforced with the noblest principles and people are striving to be fair and impartial, it is impossible,” she said, testifying at the June 26 House hearing. “These so-called standards are irreducibly subjective. What is one person’s hate speech … is somebody else’s cherished loving speech.”
“I did read every single word of Facebook’s [content policing] standards and the more you read them, the more complicated it is. And no two Facebook enforcers agree with each other and none of us would either. So that means that we are entrusting to some other authority the power to make decisions that should reside in each of us as individuals, as to what we choose to see and what we choose not to see and what we choose to use our own free speech rights to respond to.”
Though private companies, even the ones as large and influential as Google and Facebook, are not bound to protect free speech for the individual, “it is incredibly important that they be encouraged to do so,” she said.
Ranking member Mike Rogers (R-Ala.) added his own skepticism regarding Google’s impartiality. YouTube, which is owned by Google, took down the Project Veritas exposé the same day it was published, citing privacy complaints that appear to have been filed by one of the Google employees caught on camera by a Project Veritas reporter.
“I have serious questions about Google’s ability to be fair and balanced when it appears that it colluded with YouTube to silence negative press coverage,” Rogers said in his opening statement. “Regulating speech quickly becomes a subjective exercise for government or the private sector. Noble intentions often give way to bias and political agendas.”
Trump briefly commented on the issue during a June 26 Fox Business interview.
“They’re trying to rig the election,” he said, suggesting Google “should be sued.”
Strossen suggested that rather than being censored, offensive and false content should, as much as possible, be countered by “media literacy,” counterspeech, “user empowerment tools,” and “radically increased transparency.”