Dr. Robert Epstein: How Big Tech’s Algorithms Can Impact Opinions and Votes—and the 2020 Election

September 9, 2019 Updated: November 6, 2019

Just what are some of the methods that tech giants like Google and Facebook can use to shift their users’ attitudes, beliefs, and even votes?

How do search engine rankings impact undecided voters?

How powerful of an impact can search engine algorithms have on our perceptions and actions, without us even knowing?

And why aren’t more people researching these things?

This is American Thought Leaders 🇺🇸, and I’m Jan Jekielek.

Today we sit down with Dr. Robert Epstein, the former editor-in-chief of Psychology Today. He is currently a senior research psychologist at the American Institute for Behavioral Research and Technology and a leading expert on search engine bias.

We explore his meticulous research into tech giant bias, and the startling discoveries he has made. We also look at Dr. Epstein’s ambitious plans to monitor and track search engine bias in the months leading up to the 2020 election.

Jan Jekielek: Dr. Robert Epstein, wonderful to have you on American Thought Leaders.

Dr. Robert Epstein: It’s a pleasure.

Mr. Jekielek: So, Dr. Epstein, I see you as one of the foremost experts on potential bias in search engines and social media in the digital landscape, especially as it pertains to elections. Tell me, what is going on here?

Dr. Epstein: Well, a lot is going on, which people are generally unaware of. Basically, I’ve been studying now for more than six-and-a-half years a number of techniques that big tech companies have available to them—and it’s available to them exclusively—for shifting people’s opinions, thinking, attitudes, beliefs, purchases, and votes without people knowing and without leaving a paper trail. So it turns out there’s a whole class of techniques that they have available to them which have never existed before in human history. I stumbled onto one of those techniques about six-and-a-half years ago and began studying it very carefully in controlled experiments. Since then I’ve found about a dozen techniques like this. I’m currently studying seven of them.

Mr. Jekielek: I’m actually really looking forward to digging into a few of these. I think the first one is the search engine manipulation effect. You did congressional testimony fairly recently where you said that at least 2.6 million votes were shifted in one direction. And you said that in sworn congressional testimony. Can you explain a little bit about what exactly you found there?

Dr. Epstein: Sure, there are two pieces to this. And sometimes it’s hard to explain how important it is that we understand these pieces separately. But one piece involves experimental research, which I’ve been doing for a long time now. I do experiments sometimes with people in a lab, sometimes with people online, sometimes with people from multiple countries, in which people are randomly assigned to one group or another—so the experiments are randomized—and in which there’s at least one control group. We use counterbalancing, which means that we shift things around to make sure that we don’t get any so-called order effects. Our experiments are double-blind, which means that the people who are administering the study, our assistants, don’t know exactly what’s going on, and the participants in the study also don’t know exactly what’s going on.

So we’re using every possible precaution to make sure that when we do get some results, we really understand what they mean. So in late 2012, I happened to be looking at some new research–and not from my own field, which is psychology, but from the field of marketing–that was looking at the tremendous power that search results have to impact purchases and clicks. And this research was saying, first of all, that people tend to trust what’s at the top of search results more than what’s at the bottom. In fact, 50 percent of all clicks go to the top two items, and they’re very trusted. And the results of some of these new studies were so interesting to me because, for example, there’s an eye-tracking study showing that even if you put a really good result down low in the list, people’s eyes go to the top, they glance down, they might see the really good result, and then they go right back up to the top.

So I thought to myself, well, if people trust those high-ranking results—for whatever reason, that took me years to figure out—but for whatever reason, people trust what’s at the top… could someone use search results to shift people’s opinions on something? Maybe that could be done. Maybe even shift their voting preferences. So I conducted one of these randomized controlled experiments to try to see what kind of a shift I could produce, where I’m controlling the order of the search results. And I thought I could produce a shift in voting preferences and opinions of maybe 2 or 3 percent. Not much, not too much, but a lot of elections are very, very close. So if I can produce that kind of shift in some voters—and I’m focusing mainly on undecided voters, those are the people who are really vulnerable. I thought, well, that could decide the outcome of a very close election. First experiment I ran, the shift I got was 48 percent.

Mr. Jekielek: That’s unbelievable.

Dr. Epstein: Well, that’s exactly what I said. I said, I don’t believe this. I redid the experiment with another group of participants. Got a shift of 63 percent. Experiment after experiment, I was seeing that there was enormous power that search results have to shift opinions and even voting preferences. And I thought, well, this really needs to be explored. At some point, I did a national study in the U.S. with more than 2000 people from all 50 states. Again, got an enormous shift.

But because this study was so large, I was also able to dig in to see if there were any demographic effects. And, sure enough, different demographic groups are susceptible in different degrees to this kind of manipulation. And so it’s very easy it turns out—I know from multiple studies—to get shifts of 20 percent or more. But in that big national study, we found one group in which we got a shift of 80 percent.

So in other words, some demographic groups are very, very trusting of search results. In general, people are trusting of search results because they’re trusting of Google, which is the main source of search results for most people in the world. But also because they know it’s generated by a computer, by an algorithm, people don’t know exactly what that is, but they think this must be impartial. It must be objective because it’s coming from a computer.

So we were at this point really on to something because here we had a new effect. We call these types of influence effects. We had a new effect, and I had to name it, so I called it the search engine manipulation effect, or SEME, which we pronounce as “seem.” And there were some things about this that were very disturbing. Number one, generally speaking, people can’t see the bias in search results. So if you think about it, that’s kind of creepy, because that makes this effect one of the largest ever discovered in the behavioral sciences. It means that in some sense it is subliminal, and that you can’t look at search results—even I can’t look at search results—and easily see if there’s any bias or favoritism, let’s say for one dog food or one kind of music or one candidate.

Mr. Jekielek: So you would manipulate the order of these search results. You knew what you were trying to manipulate, but then when you got people to see if they could tell, they said, I think this looks impartial to me. Is that the idea?

Dr. Epstein: That’s correct. So in the basic experiment, there were three groups. People don’t know it, but when they come into the experiment, they’re randomly assigned to one of the three groups. In one, the search results are ordered in a way that favors one candidate. In the second group, they’re ordered in the opposite way so that they favored the other candidate. And [in the] third group, they’re mixed up. So that’s the control group.

And before we let them do a search, we tell them a little bit about each candidate, and then we ask them questions about each candidate. Who do you like? Who do you trust? Who would you vote for if you had to vote right now? And normally at this point, there’s like a 50/50 split because we’re using voters who are undecided, and there are various ways in which we can make sure they’re undecided.

And then we let them do a search, and they’re using a search engine that looks very much like Google. We called it Kadoodle. But it works just like Google in that they can look at the search results. They can go to different pages of search results. They can click on any search result and go to a webpage. We used real search results and real webpages in all of our experiments, all of which came from the Internet. The only thing that’s different about those three groups is the order of the search results. That’s the only difference. So the point is we now let them do a search, and we let them search up to 15 minutes and read as much as they want until they feel more strongly about who they should vote for. And then we ask them all those questions again. Who do you like? Who do you trust? Who would you vote for if you had to vote today?
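To make the mechanics of that procedure concrete, here is a minimal sketch in Python of the three-group design as described: random assignment, identical pools of real search results whose order alone differs by condition, and a before/after shift in preferences. The group names, the "lean" field on each result, and the shift calculation are illustrative assumptions, not the lab's actual code.

```python
# A minimal sketch of the three-group SEME experiment flow (illustrative only).
import random

GROUPS = ["favor_candidate_a", "favor_candidate_b", "control_mixed"]

def assign_group() -> str:
    """Each participant is randomly assigned to one of the three conditions."""
    return random.choice(GROUPS)

def order_results(results, group):
    """Same pool of real search results; only the ordering differs by condition.

    Each result is assumed to carry a 'lean' score (negative = candidate B,
    positive = candidate A) -- a made-up field for this sketch.
    """
    if group == "favor_candidate_a":
        return sorted(results, key=lambda r: r["lean"], reverse=True)
    if group == "favor_candidate_b":
        return sorted(results, key=lambda r: r["lean"])
    mixed = results[:]
    random.shuffle(mixed)  # control group: order carries no systematic bias
    return mixed

def preference_shift(pre_share_a: float, post_share_a: float) -> float:
    """Shift = change in the share of undecided voters preferring candidate A."""
    return (post_share_a - pre_share_a) * 100
```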

And, again, we found in experiment after experiment—and I’m still doing these experiments—we find enormous shifts. So I’ve replicated this phenomenon I think at least 15 times.

There’s actually a group at one of the Max Planck institutes in Germany that’s replicated this effect. I published my first paper about this in 2015 in the Proceedings of the National Academy of Sciences, and that paper has been accessed or downloaded from the National Academy’s website more than 200,000 times, which in my career I’ve never even heard of. So there’s definitely some interest.

And, you know, that was kind of the beginning. That was studying this first effect SEME. And, OK, so over here I have studies like that. Now, one day in 2015, I was called by an attorney general, one of the attorneys general of one of our states who was asking me about Google and their search results because I had just published this big study, and he was wondering whether Google could fiddle with search results in a way that would cost him votes in an upcoming election because he was up for reelection. And I explained, oh yes, definitely they can do that. And then he said, how would we know if they’re doing it? OK, so over here there’s lots and lots of experimental research showing the power that a company like Google has to shift opinions and votes. That’s over here. That’s rock solid work adhering to the very, very highest standards of scientific research.

But now we have this question posed by an attorney general, how do you know leading up to an election, whether there is bias in search results? That got me obsessing. How would one figure that out?

So in 2016, I set up the first-ever system to answer that question. I set up a monitoring system, which you can think of as a kind of Nielsen-type system. The Nielsen company, since 1950, has recruited families. They keep their identities quite secret, and with the family’s permission, they install a box in the house and keep track of what kinds of television shows family members are watching. That’s what has led to the famous Nielsen ratings, which determine how much advertising costs, which shows stay on the air, and which shows get canceled. They’re now in 47 countries, so they have very good methodology for figuring these things out.

I created a Nielsen-type system in which I had 95 field agents in 24 states. We worked very hard to keep their identities quite secret and developed custom software that was installed on their computers that would allow us, in effect, to look over their shoulders as they’re using their computers. This is with their permission, of course. And then we kept track of what they were seeing when they conducted election-related searches on Google, but also on Bing and Yahoo. So that’s important. We’re looking at three search engines. We’re looking over the shoulders of a very diverse group of American voters from 24 states. And we’re gathering the data. So the data are streaming in every single day. Our first data actually started to come in in May of 2016. And over a period of time we were building up this force of field agents. About 25 days before the election, we had a full panel, as we call it, of field agents. So we had a lot of data starting to stream in nearly a month before the election. And we were able to preserve 13,207 election-related searches as well as the 98,044 webpages to which the search results linked.

Now, since we know what these field agents were seeing, we know what position they were seeing those search results in. In other words, we now had the data that would allow us to determine whether there was any bias in the search results. Now, to measure bias, we use what’s called crowdsourcing. So we used one of the online pools of people that’s out there to measure bias in the web pages that we had captured. And now that we had bias ratings for the webpages, we could compute an average bias for every search position in the search results that we had preserved.

So people have asked, well, how would you get people to tell you the bias on a webpage? Well, that’s what people in my field do. We’ve been doing it for a hundred years. Normally we just present them with a scale. In this case, I presented a scale that went from -5, which was Hillary Clinton, to +5, which was Donald Trump. And they looked at the webpage, they examined the webpage, and then they decided where on that scale there was bias, if any. If there was no bias, they went straight in the middle to the zero point. And then by having multiple people rate each page, we now have an average bias rating. And then, of course, that allows us to compute possible bias in the search results.
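Here is a hedged sketch of the aggregation just described: several crowd ratings per webpage on the -5 (Clinton) to +5 (Trump) scale, averaged per page, then averaged per search-result position across the preserved searches. The data structures and values below are invented placeholders, not the project's data.

```python
# Illustrative aggregation of crowdsourced bias ratings (made-up data).
from collections import defaultdict
from statistics import mean

# ratings: page URL -> list of -5..+5 ratings from independent raters
ratings = {
    "example.com/story-1": [-2, -3, -1],
    "example.com/story-2": [0, 1, 0],
}
page_bias = {url: mean(vals) for url, vals in ratings.items()}

# searches: each preserved search is a list of (position, page URL) pairs
searches = [
    [(1, "example.com/story-1"), (2, "example.com/story-2")],
    [(1, "example.com/story-2"), (2, "example.com/story-1")],
]

by_position = defaultdict(list)
for search in searches:
    for position, url in search:
        if url in page_bias:
            by_position[position].append(page_bias[url])

# Average bias for each search position across all preserved searches
position_bias = {pos: round(mean(vals), 2) for pos, vals in sorted(by_position.items())}
print(position_bias)
```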

So over here we have the experimental work which is telling us about the power that a company like Google has to shift opinions and votes. Now over here we have a pretty large amount of data that’s come in from actual voters. Again, a diverse group of 95 voters in 24 states. And we’ve got a lot of searches, more than 13,000 and a lot of web pages. And we began to analyze the data.

So we deliberately focused on collecting data in the days leading up to the election, and very deliberately did not analyze it, because if we had found bias before the election, what would we do? Or in other words, what would I do? I mean, if I announced it, there would have been absolute chaos, especially, I think, if there was bias against Donald Trump. And if I didn’t announce it, then I would be complicit in the rigging of an election. So we focused on data collection. And then after the election we actually spent several months focusing on the analysis.

In the spring of 2017 I submitted our results to some scientific conferences. So, in other words, the analysis went through what we call peer review. And once we had an acceptance at a conference, I contacted a reporter, in this case, a man from The Washington Post. I told him what we did and I said, we found substantial bias favoring Hillary Clinton in all 10 search positions on the first page of search results on Google, but not Bing or Yahoo. So, again, that’s very important. We found that bias on Google but not the other search engines. And we also found, among other things, that not only were people in blue states seeing that bias, but people in red states were seeing a pro-Clinton bias as well.

We also used standard statistical techniques to see whether our findings were, as we say, statistically significant. Our findings were highly significant. For those people out there who know some stats, our findings were significant not at the 0.05 level, which is one cutoff that’s sometimes used, but at the 0.001 level, meaning the probability that we got this level of bias by chance alone is less than one in a thousand.
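The interview does not name the specific test used. One standard way to obtain a p-value against thresholds like 0.05 or 0.001 is a one-sample t-test of the bias scores against zero (the "no bias" point); the sketch below assumes that approach and uses made-up numbers.

```python
# Illustrative only: one-sample t-test of hypothetical per-search bias scores
# against zero (the "no bias" point on the -5..+5 scale described above).
from scipy import stats

per_search_bias = [-0.8, -1.2, -0.3, -0.9, -1.1, -0.6, -0.7, -1.0]  # made-up data
t_stat, p_value = stats.ttest_1samp(per_search_bias, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")  # p < 0.001 would match the claim in the text
```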

So, of course, at this point, going public with this, I ran into some new challenges because now I was saying that there was a substantial level of bias in Google search results. A lot of people would have seen those kinds of search results over a period of months leading up to the election. What kind of impact might that have had on votes? And I did some calculations, which are actually based on equations included in the 2015 paper in the Proceedings of the National Academy of Sciences. And based on those calculations I concluded that if this level of bias had been present nationwide in Google search results, then that would’ve shifted somewhere between 2.6 and 10.4 million votes to Hillary Clinton, whom I supported.

So, in other words, I was reporting scientific findings and these findings from the monitoring project, which in a way were harmful to my own political preferences, my own political leanings. But I felt very strongly that since our results were so clear that I had a responsibility to report the findings. And, of course, Secretary Clinton won the popular vote by over 2.8 million votes. She lost the election because of the way things kind of swung in the electoral college. But my research and the monitoring project suggested that had there been no bias in Google’s search results, in fact, the popular vote might have been very different. And it might have been a very close race.

Again, it was uncomfortable for me to have to acknowledge that, to have to announce that. But that’s what I concluded from the research.

And you might ask, well, could Google really have that big an impact? And the answer is yes. I mean, Americans see Google search results about 500 million times a day. Google controls roughly 90 percent of search. The next largest search engine, Bing, controls about 2 percent of search. They don’t have much impact. And remember, too, that we found this bias just on Google. And so there seemed to be something about Google that was favoring Hillary Clinton.

Now, at the time, Google responded by saying, well, that’s just the way things work with the way users interact with our algorithm. We’re not doing anything deliberately. Eventually, years later, they actually started to quibble with my methodology without being very specific about what was wrong with it. But initially they were saying, this is an organic effect. That’s a term Google often uses, blaming what happened on the algorithm.

In my mind, that’s complete nonsense. I’ve been a programmer since I was a teenager. I can tell you that when we write computer programs, we write them to do exactly what we want them to do. And the fact is Google has total control over what happens when, let’s say there are a lot of users who lean left or who lean right. The algorithm can respond any way it’s programmed to respond. So I simply don’t buy the idea that this was just the algorithm’s fault or just the user’s fault. It makes no sense to me.

Now, bear in mind also that the numbers that we found—although very clear—in a way don’t matter that much. We had done something quite extraordinary. We had preserved what are known internally at Google as “ephemeral experiences.” And this is the key to understanding the nature of the threat that we face from big tech companies. In one of the recent leaks from Google, a leak that went to the Wall Street Journal in 2018, an email from someone who works at Google said, how can we use ephemeral experiences to change people’s views about Trump’s immigration policy? Now that’s from an internal email.

In other words, people at Google understand the enormous power of ephemeral experiences. Now, what’s that? What’s an ephemeral experience? That means you type in something, let’s say a search term. And some results are generated on the fly just for you. They impact you, they disappear, they’re gone. And they’re not stored anywhere. And you can’t go back in time and find them. You can’t recreate that experience. That experience you had is unique. So this is a fantastic way to manipulate people because…

Mr. Jekielek: There’s no record.

Dr. Epstein: There’s no record of it. You can’t go back. And people can’t even see the bias. So we know this from study after study. There are very, very few people who can spot bias in search results. And here’s something creepy. The very, very small number of people who can spot the bias, they shift even farther in the direction of the bias.

Mr. Jekielek: That’s an astounding finding in itself.

Dr. Epstein: Exactly. So the more and more I’ve looked into these things, the more I’ve learned about them, really the more concerned I’ve become, because these are extremely powerful techniques that have never existed before. As I’ve stumbled upon them one by one, I’ve literally had to name them, because they’ve only been made possible by the Internet. They’re also unlike other sources of influence. In elections, we’re influenced by billboards, by radio shows and TV shows and advertisements and so on and so forth. All of that is competitive. And in that sense, it’s probably a good thing. It’s a good thing for democracy, that there is so much competition out there vying for your attention and trying to convince you of this or that.

But if there’s bias in search results, that’s controlled by the platform, in this case, Google. That’s not competitive; Google has a hundred percent control over that. There’s no way to counteract such a bias if, of course, you can even measure it. Even if you can measure that kind of bias, you can’t easily counteract it. You have to be able to see it—number one. You have to be able to measure it—number two. Capture it—number three. And then, now what, how do you counteract that? You can’t alter what Google’s algorithm is doing. So there’s a lot of danger here. But SEME was the beginning, and that led to other discoveries. But the basic idea is over here we have procedures which basically tell us very precisely what power companies like Google have. It’s not just Google, but what power these new tech companies have to shift opinions and votes.

Mr. Jekielek: The ones with these giant user bases basically.

Dr. Epstein: Correct. And then over here, we now have a monitoring system. So I did set up a second system in 2018, this time with more than 160 field agents, this time preserving more than 47,000 election-related searches and nearly 400,000 web pages. I should point out that in each one of these situations, the search terms we’re using—that’s very important—are unbiased. And so we literally have independent raters rate the search terms that we’re using. The reason why that’s important is because if we’re using biased search terms, like “the wall is a great idea,” then of course we expect to get biased results. But the fact that we’re using unbiased search terms, like “tell me about the wall”…

Mr. Jekielek: Or the name of the candidate or something.

Dr. Epstein: Exactly. The fact that we’re using unbiased search terms, well, that should give us unbiased search results. Now, think about it. Undecided voters, what kinds of search terms are they going to type in? They’re going to type in search terms that are fairly neutral, fairly unbiased. They’re not going to type in “the wall’s a great idea.” They’re going to type in “tell me about the wall” or “is the wall a good idea?” They’re going to type in neutral search terms.

Dr. Epstein: And so the fact that we have found bias in 2016 and 2018—again, a highly statistically significant bias in a liberal direction in both elections using neutral search terms—that’s quite disturbing.

Mr. Jekielek: So in these 2018 findings, what was the estimate of the number of votes that could have been shifted?

Dr. Epstein: Well, 2018 was a midterm election. So we had to go about this very differently. We focused on three Republican districts in Southern California, all in Orange County, which are staunchly Republican. In fact Ronald Reagan once said famously that Orange County is where old Republicans go to die. So these were Republican districts. And we focused our monitoring in those districts.

We again found a substantial liberal bias in all three districts easily, easily enough of a bias to account for what happened. Because what happened was all three of these districts flipped Democrat. And there was a very strong bias in favor of those Democratic candidates in those districts. And we calculated that if that level of bias had been present nationwide on Google, and by the way, that liberal bias in those three districts only occurred on Google. It did not occur on Bing and Yahoo. If that level of bias had been present in Google election-related search results nationwide in 2018, we calculated that that would have shifted upwards of 78.2 million votes. Now that number sounds impossible, but you have to remember—

Mr. Jekielek: It does. It sounds outrageous.

Dr. Epstein: Sure, sure. But you have to remember that that’s scattered across hundreds and hundreds of elections. Because, again, this was a midterm election. So we’re talking about state, regional, local elections. So 78.2 million would be the upper limit on what we think could have been shifted. But, again, those are spread across many elections. It’s still obviously quite disturbing.

Mr. Jekielek: Well, so, here’s the question that comes to mind. The monitoring project, you can see there’s clear bias that will impact, based on your earlier studies, how people will perceive candidates. But how can you be so sure how that manifests in the ballot box itself, in their actual behavior?

Dr. Epstein: That’s an excellent question, and I wish more people had asked me that question, because I’ve actually seen articles in which that question is raised, suggesting to some people that I haven’t really discovered anything. And the fact is we’re pretty sure that what we found manifests in the voting booth for a couple of reasons.

One is we know from survey research, which is quite extensive, that if you ask people who they’re going to vote for, it turns out that’s a very good predictor of who they actually vote for. There are a couple of exceptions to that rule, but generally speaking, we’re talking about 90, 95 percent accuracy in predictions. And then, of course, there’s the after-vote polling that occurs. That polling, too, is generally very, very accurate. There are occasional exceptions, I admit that.

But there are other reasons too to think that what we found is not only correct, not only accurate, but if anything that we’re underestimating the impact that biased search results have on people’s actual votes. I say this because first of all, in most of our experiments people only conduct one online search, and we still get these enormous shifts. What happens in real life? Well, in real life people are conducting many searches over a period of weeks or months that are election-related. If they’re undecided, that means they’re being hit over and over and over again with biased search results, taking them to webpages that favor one candidate. What happens if you hit people over and over again with biased search results?

Well, as a matter of fact, we did experiments to look at that issue specifically, and we found that if we hit people once, we get a shift. If we hit them again, now we’re not showing them the same search results, but we’re showing them results that have the same bias, then that shift increases. And if we hit them a third time, the shift increases even more. So you have to imagine what really is happening out there. What really is happening out there is that undecided voters are being exposed to biased search results not once, not twice, not three times, but possibly dozens, hundreds of times over a long period of time. That means that what we’re finding in our experiments almost certainly underestimates the real impact that a company like Google can have on real voters.

Mr. Jekielek: It’s kind of mind-blowing. As you were just describing all this, I’m just imagining the scenario … because you are planning to do another monitoring project, which I’d like you to tell us about in a moment. But, presumably, this time when you’re doing the monitoring you won’t be waiting until after the election to reveal what the monitoring is actually telling you. And I can imagine mayhem of the sort that you were describing a bit earlier. Tell me a little bit about this monitoring project and how you’re going to deal with this whole scenario.

Dr. Epstein: In 2020 we’re hoping to launch a much more ambitious monitoring system. The system in 2016 you could almost say was just exploratory. That kind of thing had never been done before. Frankly, no matter what numbers we got, the fact that we did what we did is, I think, noteworthy. I mean, we found a way to preserve, in a meaningful fashion, ephemeral experiences on a fairly large scale.

2020, though, I think that the tech companies are going to go all out. I think they were very cautious and overconfident in 2016. I think there’s a lot of crazy things they could have done to shift votes that they just didn’t do. 2020, we’re expecting lots and lots and lots of manipulations and lots of different kinds of ephemeral experiences to impact voters over a period of months.

So we’re planning on setting up a much larger panel of field agents, at least a thousand in all 50 states. And we’re planning this time to use artificial intelligence—we’ve been working on this in recent months—to analyze the massive amount of data that we’re receiving every day in real time. This means that if we find evidence of bias or some sort of manipulation, we’ll announce it. We’ll announce it as soon as we’re sure that we’ve found it. We’ll announce it either to the media or to the Federal Election Commission or to other authorities. And that is going to create a kind of chaos, but it’s the kind of chaos we need to have. Basically, I’m guessing that these companies, or the company that we’ve identified, are going to fight.

But there are a couple of good possibilities here. One possibility is that the company … let’s call it the perpetrator, the perpetrator will perhaps back down. If these companies back down, if they stop using these techniques, then we will have a free and fair election. If they don’t back down, and we continue to detect and capture evidence of large-scale vote manipulation, I think frankly these companies will pay a terrible price. I think there could be both civil actions taken against them and possibly criminal actions taken against them.

So either way, I think democracy wins, and that’s my concern here. I’m not concerned about any particular party or candidate, although I do lean left. I’m concerned about democracy and the free and fair election because you don’t know how these private companies, which are not accountable to the public, you can’t know how they’re going to use these powers that they have. That they have these powers is beyond any doubt. That these powers are being used, I think at this point, in my mind anyway, there is little doubt.

And that’s been reinforced by some of the recent leaks of material from these companies. I think that some of the insiders have come forward, whistleblowers, and have said this company is in fact deliberately altering opinions and thinking and voting preferences, and they’re doing it on a large scale. The point is, if we have a large monitoring system in place and, in real time, we’re identifying shenanigans of one sort or another, it’s very possible these companies will back down. The bottom line here is to try to assure the integrity of the free and fair election. And beyond that, of course, since these companies can alter opinions about anything, we need to keep an eye on what they’re doing in general. Again, that requires monitoring systems.

And I think we have to think beyond the United States because a company like Google is impacting more than 2 billion people around the world. Within three years, that number will swell to over 4 billion people. So Google has this power around the world. They can literally impact thinking, behavior, attitudes, beliefs, elections in almost every country in the world. In my mind, that means building larger, better monitoring systems to keep an eye on companies like Google. I think that’s necessary not only to protect democracy around the world, but to protect human autonomy.

Mr. Jekielek: I want to cover a few of these other methods. You’ve actually documented these in an article back in September that you published in The Epoch Times. You had 10 methods, and not just Google, but the tech giants in general. There’s another thing I wanted to cover just before we get to that. When we spoke earlier in relation to the Zachary Vorhies leak–we brought you in to comment on some of what he had found–you said something that I found incredibly disturbing, and I just wanted to explore this briefly in this interview. You said it’s more disturbing to you if the algorithm is making these decisions for bias on its own versus someone actually sitting there and using this twiddler-like—I believe it was called—tool to actually re-rank the results. Can you just speak to that briefly before we look at some of these other methods?

Dr. Epstein: Sure. One of the recent events in my life of note is that President Trump sent out a tweet referring to my research–he didn’t mention my name, but it was clearly my research–because I had recently testified before Congress. And he said this new report shows that Google manipulated the 2016 election and sent between, I don’t know, 2 [million] and 16 million votes to Hillary Clinton. So he got the numbers slightly wrong, but he also got this notion of manipulation wrong. I have never said Google manipulated the election.

I admit I used to be obsessed with wanting to know whether executives at a company like Google or just rogue employees were fiddling around with search results and search suggestions and other things, YouTube videos and all that. And for a while, I was obsessed with wanting to know about intentionality. Was there intentionality on the part of Google employees? Then eventually I realized, you know what, I don’t care. It doesn’t matter because let’s say there’s no intentionality. Let’s say they’re just not paying attention.

And, you know, think about it. They’re impacting thinking and behavior and votes around the world. My wife and I lived in the Fiji islands for a while and more than 90 percent of search in Fiji was done on Google. While we lived there, there was a military dictatorship. But after we left there was an election. Now, the question is, would Google pay attention to that election? For the sake of argument let’s say in Fiji they couldn’t care less. They weren’t paying attention. Let’s say in many countries they don’t care. For many elections, let’s say they don’t care. But the algorithm is still going to do its thing. It’s always going to put one dog food ahead of another and one brand of toothpaste ahead of another and one candidate ahead of another. Always. That’s what it’s programmed to do.

There’s no equal time rule built into Google’s search algorithm. When we run experiments, we always have a control group in which we mix up the results. So there’s no bias, but they don’t do that. Their algorithm is meant to tell you what’s best and what’s best goes at the top. And if you’re undecided, what’s at the top shifts people’s thinking and opinions.

So what I realized was it’s very possible that a lot of important events right now in human history are being determined not by plans and goals and strategies of human beings at a company like Google, but by computer programs that are just being left to do their own thing. To me, that’s far more frightening than thinking that a Google executive is out to rule the world. The fact is we have let loose upon humanity powerful computer algorithms, which are impacting humanity.

Mr. Jekielek: Well, you said it before, I got a chill up my spine. You said it again now, I got another chill up my spine. First of all, there’s been a bunch of research done showing that programmers’ ideological biases and so forth make it into algorithms. That’s a whole other interview, perhaps for another hour. But how is it that Google could actually counteract this effect, right? Let’s say they decide to respond in the way that you’re hoping; how would that work? So they would actually have to re-rank their search so it wouldn’t be biased. How does that work?

Dr. Epstein: A few months ago I would have hemmed and hawed in answering the question, but I no longer need to thanks to the whistleblower Zack Vorhies, and thanks to some of the documents that he removed from Google’s offices. The fact is that Google could easily take care of political bias in their search results. How do I know that? Because there is concern at Google and has been for years with what they call, internally, algorithmic unfairness, which can lead to so-called algorithmic distress. And so Google has come up with ways to control for algorithmic unfairness. And so they’ve developed, for example, what they call Machine Learning Fairness Algorithms, ML Fairness Algorithms.

Now, let me just give you very roughly an idea of how this would work. If you were looking, let’s say, at images on Google and you type in “CEOs” (which I tried recently), you’re going to get mainly pictures of males because the vast majority of CEOs in the United States are male. And that’s exactly the kind of thing within Google some people would call algorithmically unfair. So even though most CEOs are male at the moment, that might cause people distress to see that.

So they developed very simple methods of evening things out so that when you type in CEOs, you see half male, half female. Genius, except of course it’s not necessarily realistic and could be considered a form of social engineering because it’s creating impressions that aren’t quite true. And, of course, if you used that kind of logic to apply to many, many, many different aspects of our society, you could be engaging in or you could be accused of engaging in large-scale social engineering.
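To illustrate the “evening things out” idea in the CEO example, here is a toy re-ranker that interleaves two groups so the top of a result list is roughly 50/50 regardless of the base rates underneath. This is only a sketch of the concept, not Google’s ML Fairness code; the function and field names are invented for the example.

```python
# Toy illustration of "evening out" a ranking across two groups (not Google code).
from itertools import zip_longest

def even_out(results, is_group_a):
    """Alternate items from the two groups so the top of the list is balanced."""
    group_a = [r for r in results if is_group_a(r)]
    group_b = [r for r in results if not is_group_a(r)]
    balanced = []
    for a, b in zip_longest(group_a, group_b):
        if a is not None:
            balanced.append(a)
        if b is not None:
            balanced.append(b)
    return balanced

# Hypothetical image results: mostly male CEOs, a few female
images = [{"name": f"ceo_{i}", "female": i % 5 == 0} for i in range(20)]
reranked = even_out(images, is_group_a=lambda r: r["female"])
print([img["name"] for img in reranked[:6]])  # top of the list now alternates groups
```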

Setting that issue aside, my point is that Google has the ability to even out any possible political bias. They could do it quickly, they could probably do it in a day. And that’s why I’m thinking that if we expose various kinds of bias with a big monitoring system, as I said, the company might back off, they might use the techniques they’ve already developed, the so-called ML Fairness techniques, and they could even things out. And the next time we look for bias, we find none. It has disappeared.

Mr. Jekielek: Well, it’s fascinating. You’ve outlined the problem, and you’ve outlined a possible solution. That’s, I think, rare to find in our interviews anyhow. Tell me a little bit about some of these other methods that are used right now. We’ve been talking exclusively about Google and search engine manipulation. There are, of course, numerous other methods. But what about Facebook, the other giant platform? What types of methods exist there?

Dr. Epstein: Facebook has at least five methods for shifting opinions and votes. I think I first wrote about that probably back in 2016. But the simplest method they have is just called targeted messaging. And that’s something that I’ve studied and quantified to some extent. And the effect I call TME, the targeted messaging effect. Simplest example and, by the way, this is based on Facebook’s own published data, which they published in 2012. If on election day in 2016, Mark Zuckerberg had decided to send out a “go vote” reminder just to Democrats or people leaning left, that would’ve given Hillary Clinton that day at least 450,000 more votes than she got. Now, given what happened, [and since] Mark Zuckerberg leans left himself, I suspect that he probably kicked himself a few times because he did not send out that targeted message. But the point is, he has that power. Targeted messaging is extremely powerful.

Now, here’s a variation on it. In 2018, Google did post on its homepage a “go vote” reminder. They literally removed the colorful word Google from their homepage and put in the words “go vote.” As you well know, I published an article with all the calculations to show that that reminder, if it had been sent to everyone in America, would have sent a lot more people to vote who otherwise would have stayed home. But because of demographics, the demographics of people who use Google, it would have given Democrats 800,000 more votes than it would have given Republicans. Now, again, that sounds like a lot of votes, but it’s a midterm election. So those votes are scattered across hundreds of elections. Still, it gives an advantage to Democrats. I have no doubt that data analysts at Google did the same calculations I did before posting that.

Now, if Google displayed that prompt mainly to Democrats or mainly to people who lean left or exclusively, let’s say, to people who are Democrats or people who lean left, then the impact, the differential impact would have been greater. And in the extreme case that would have given upwards of 4.6 million more votes to Democrats than to Republicans. So that’s a different kind of effect. We call that DDE or differential demographics effect. As I said earlier, you know, I keep stumbling onto these phenomena, and I keep working hard to try to understand them and quantify them. But most of these types of influence have never existed before in human history. They’re made possible by the internet. They’re made possible by these huge tech monopolies, and they’re entirely in the hands of these tech monopolies. In other words, you can’t counteract them. Even if you found them, you can’t counteract them.
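For readers who want to see the shape of the arithmetic behind a differential demographics effect, here is a back-of-the-envelope sketch. Every number below is a hypothetical placeholder rather than a figure from the research; the only point is that a small per-user nudge, applied to a politically skewed user base, produces a large net partisan difference.

```python
# Back-of-the-envelope sketch of a differential demographics effect (DDE).
# All numbers below are hypothetical placeholders, not figures from the research.
users_shown_reminder = 150_000_000   # hypothetical number of users who see a "go vote" prompt
extra_turnout_rate = 0.01            # hypothetical share of them nudged into voting
share_lean_left = 0.55               # hypothetical political lean of that user base
share_lean_right = 0.45

extra_voters = users_shown_reminder * extra_turnout_rate
net_partisan_advantage = extra_voters * (share_lean_left - share_lean_right)
print(f"Net extra votes for one side: {net_partisan_advantage:,.0f}")  # ~150,000 here
```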

Mr. Jekielek: So you set up this giant monitoring system. You’re successful. We’re studying the search engine manipulation effect across at least a thousand people, maybe more. But then could we miss DDE or some other effect?

Dr. Epstein: Oh, no doubt. I mean, I’ve at this point identified about 12 effects like SEME and DDE and TME. But there are probably others. There have to be others. I mean, there’s no way I could possibly … I mean, I’m a lone researcher. The fact that so far I’m the only one who’s been bothering to try to find these effects and to quantify them and understand them, the fact that so far I’m the only one who set up monitoring systems to capture ephemeral experiences to me is bizarre. I don’t understand that. That’s got to change … 2020, it’s going to be tricky. Again, if it’s just me and my associates gathering all this data, will that have an impact? Will that change anything?

Mr. Jekielek: As opposed to multiple independent entities that are finding the same or perhaps different results. That’s what you’re getting at?

Dr. Epstein: Exactly. There really should be lots of people like me working at different institutions. There should be people in government doing some of this type of work. There should be people in nonprofit organizations and universities doing this kind of work. I’m guessing that if in 2020 we have some amazing discoveries, and we’re doing things carefully and in a credible manner, and we’re turning over our data to the right authorities and the right experts, I’m guessing that will lead to some real change. I’m hoping.

Mr. Jekielek: Dr. Robert Epstein, incredible body of work. Thank you very much.

Dr. Epstein: Thank you.

This interview has been edited for clarity and brevity. 
American Thought Leaders is an Epoch Times show available on Facebook and YouTube.
Follow Jan on Twitter: @JanJekielek