Exclusive With Data Scientists: Public Data Shows 432,000 Trump Votes Removed in Pennsylvania
AMERICAN THOUGHT LEADERS

The Data Integrity Group, a group of data scientists, has been dissecting publicly available data on the presidential election in multiple states. Most recently, in Pennsylvania, they found over 432,000 votes were removed from President Donald Trump in at least 15 counties.

Time-series election data shows Trump’s votes decreasing in various counties at many time points, instead of increasing. In an election, as you count votes, you typically only see vote increments, not decrements—unless some error occurred that needs to be assessed.
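The decrement check described above is simple to automate. As an illustration only, with hypothetical field names and invented numbers (not actual election data), a scan over cumulative snapshots might look like this:

```python
# Hypothetical cumulative vote snapshots: (timestamp, candidate, running total).
# In a normal count, the running total for each candidate never decreases.
snapshots = [
    ("2020-11-04T01:00", "candidate_a", 1_200_345),
    ("2020-11-04T01:05", "candidate_a", 1_204_921),
    ("2020-11-04T01:10", "candidate_a", 1_198_002),  # total drops: flag for review
]

def find_decrements(rows):
    """Return (timestamp, previous_total, new_total) wherever a running total drops."""
    flags = []
    prev = {}
    for ts, cand, total in rows:
        if cand in prev and total < prev[cand]:
            flags.append((ts, prev[cand], total))
        prev[cand] = total
    return flags

print(find_decrements(snapshots))
```

A flagged row is only a starting point; as the interview notes, each decrement still needs an explanation (correction, data-entry fix, or error) before any conclusion can be drawn.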

The group also testified before the Georgia Senate that more than 30,000 votes were removed from President Trump in Georgia.

This is American Thought Leaders, and I’m Jan Jekielek.

Jan Jekielek: Lynda McLaughlin, such a pleasure to have you on American Thought Leaders.

Lynda McLaughlin: Thank you so much for having me. I can’t thank you enough, Jan.

Mr. Jekielek: Actually, it’s not just Lynda McLaughlin. It’s actually the Data Integrity Group that the show is with today and you’re the communications person for the Data Integrity Group. We recently published your work on the Pennsylvania data. All your work is done with publicly available data, which is a really interesting approach. Tell me about what you’re doing.

Ms. McLaughlin: I think, like a lot of people, the night of the election I was confused by what was happening. I was seeing these strange vote totals. The data wasn’t behaving the way many of us expected at different parts of the evening, and we saw these strange drops and declines live on television.

I think for a lot of people, there’s a simple explanation, which is that voting is supposed to be very straightforward. This is an additive process, one candidate gets more votes than the other, and we move on. But that night, we saw votes actually dropping live on television, whole swings of votes, hundreds of thousands. I was sitting there with my family and friends saying, “What is happening? I don’t understand.”

I’m not a data scientist. I work in politics, I work in media. It’s been my business for 15 years. So when I started seeing this happen, I thought, “I’ve got to find this out.” So I did. I started looking around for data scientists, finding people that were much smarter than myself to understand these numbers and make sense out of them, and that’s really how this came about.

Mr. Jekielek: Why don’t you tell me, who are the core members here of the group and what do they contribute, and then we’ll get into more about what you actually did?

Ms. McLaughlin: We’ve got an amazing team of people who come from all walks of life. They have very different political ideologies and philosophies on elections, on the Constitution, on our republic, but the one thing we all agree on, and it’s a very interesting point, is that data is language. It’s a language that very few of us speak to the depth and intricacy that these individuals speak it, and what we have found is that they all came to this.

John Basham is a meteorologist but he has a background in data science because that’s used in that profession. He’s also a patriot to this country, served his country, and has a background in some of those operations, understanding some of that intelligence. Then we have Justin Mealey, an NSA analyst, also a data scientist, who took a look at the numbers and how they were speaking to one another, and where those strange corrections were happening. Then we have Dave Lobue, an artificial intelligence expert, another data scientist.

We have these people who use data in very, very different ways and then we all came together in this group to look at the numbers and say, “This doesn’t make sense.” It doesn’t matter what role, what walk of life you come from, what political field, it doesn’t make sense. It’s not fair to the citizens of the country, it is not fair to the voters who stayed up, waited in line for hours during a pandemic to get their vote counted, and to keep our republic what it is meant to be: free and fair.

Mr. Jekielek: You’ve done a number of these kinds of explanatory videos that detail the irregularities found in the data in Pennsylvania and Georgia. I think those are the two we’ll cover a little more in depth today, because they’re the most recent and also obviously topical on the Georgia side. But in essence, you are looking at unexplainable errors in data. Do I have this right?

Ms. McLaughlin: Basically, what we have seen is that when you start what we call “cleaning the data” and looking at it, these numbers just don’t make sense. There’s a pattern that numbers follow based on previous elections, on demographics, on geographical location, and suddenly the same votes happening in one location have these spikes, grand spikes, out of nowhere, and then return to what would be considered normal or consistent with what we would expect from the numbers.

We were also seeing these influxes of changes, and every time a change happened, whether votes were removed or swapped, whether it was an even swap or just a partial swap, it always ended up benefiting Biden. That rule never changed; that was our constant. Then we saw these data numbers, with everything moving around, and it makes no sense.

Nothing is following the pattern as it should be seen and it’s basically abuse of data, but that’s not a language that many in our society speak, so you need to have it broken down in a way that is consumable, and digestible, and understandable by your average American. We try to make analogies, and graphs, and animations so that you can understand some of the things that are happening in everyday conversation, and that’s been really important to us because we want people to understand what’s happening.

Data is not partisan. It’s not blue, it’s not red—it’s binary. It’s just numbers. The numbers tell the story. It’s the truth. The thing that bothers us is that this data is publicly available. Now the secretaries of state are not making it available. We have to go and find it. We have to work for it, but it’s there. When you look at it, it doesn’t make any sense and not a single person, whether secretaries of state or Edison [Research], they’re not debunking or debating our data. Not one.

Mr. Jekielek: Tell me about what all your data sources are and how they compare to one another.

Ms. McLaughlin: I think that’s been a really big question for a lot of people: if you’re doing this with the data, why hasn’t anybody else? That’s a good question; we don’t have the answer to it either, especially for our elected officials, many of whom we’ve asked to take a very close look at this, because you don’t need warrants, you don’t need subpoenas, you don’t need to do anything.

This data is readily available. It is public information, and you can access it: you can get it from the Edison website or from the New York Times website. It’s taken directly from the JSON feeds of those sites. We scraped the raw data and then compared it to the secretaries of state websites, and in most cases they sync up completely.

There are some changes whose cause is hard to discern, some numbers that are different. It’s on a state-by-state basis, and we can definitely talk about that in a more particular and precise way with our data analysts. There are changes, but not changes that would change the outcome. There’s always a balancing act in the data, and so it always totals out.

Even if we see these strange errors, we compare them to the secretaries of state websites, we compare them to the raw JSON feeds, and [compare them to] the Edison data for the states that use the Scytl server. We compare all these data sets and they’re all the same numbers—they’re comparing the same way, and they’re balancing out. So either states are certifying results that have errors in them, or Edison is giving incorrect information to the networks. Regardless, there are errors in the data, and that is not what we should be certifying any election on.
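The reconciliation step described here can be sketched in a few lines. The feed structure, county names, and numbers below are hypothetical, invented purely for illustration; the real Edison and New York Times feeds have their own schemas:

```python
import json

# Hypothetical scraped feed: final cumulative totals per county from a JSON source.
scraped = json.loads("""
{"County A": 512340, "County B": 87910}
""")

# Hypothetical totals transcribed from a second source for the same counties.
official = {"County A": 512340, "County B": 87915}

def reconcile(scraped_totals, official_totals):
    """Return {county: (scraped, official)} for every county whose totals differ."""
    return {
        county: (scraped_totals[county], official_totals[county])
        for county in scraped_totals
        if scraped_totals.get(county) != official_totals.get(county)
    }

print(reconcile(scraped, official))
```

Any non-empty result marks a discrepancy between sources that needs an explanation; an empty result means the two sources "sync up," in the group's phrasing.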

Mr. Jekielek: So basically, you’re saying that there are more errors in these contested states where election results have been certified, than would justify that certification.

Ms. McLaughlin: That’s correct. The one mantra that was going on during this election was that every vote counts, that every vote was important, that every vote mattered. But it doesn’t seem that’s very true—when we see over 400,000 errors in the state of Pennsylvania, when we see direct switches like we saw in Bibb County, Georgia, where there are over 12,000 votes that were swapped from Trump to Biden.

These are people’s votes. This matters, and yet states say, “Well, we’re going to certify it.” We saw that one big change in Pennsylvania where it went from 1,690,000 to 1,670,000, and they said, “That was a clerical error.” Let’s just say that I give you that clerical error, and maybe it’s true that it was human error. What about all the other errors? Don’t those matter? Don’t those deserve answers? And why aren’t we allowed to have them? Why aren’t the secretaries of state making all of this data completely available to us, the citizens of this country? Why can’t we see it?

Mr. Jekielek: How many of these errors, as you describe them, and there are different categories, have you found overall across all your analyses up to now?

Ms. McLaughlin: Speaking to Georgia specifically, right? Georgia is happening today. This is a runoff election. They’ve made zero changes to the way that they did the general election. There are a lot of questions about the way things were handled, and a lot of questions about how the data was analyzed and the chain of custody of it. We took a very close look at that. We spoke with people on the ground who understood the voting process, because remember, every state is very different in how it does its votes.

We just did Pennsylvania, and there are between five and seven different processes. So when we looked at Georgia, you look at Putnam, Dodge, Dougherty, all these different counties, there were over 30,000 votes that just disappeared. We’re not saying who did it. We’re not saying why they did it. We just know that they did it, because the data tells us that it happened, and no one’s saying that it didn’t happen. They’re just not explaining it and they’re not addressing it.

Mr. Jekielek: When you say, “They did it,” is this machines or people, or do you even know?

Ms. McLaughlin: That’s a very complicated question. In some cases, in the adjudication process, for example, human hands do touch these votes and can change and modify them. They can decide what they call “voter intent.” If you look at Richard Barron from the state of Georgia, in our Georgia video, there were 113,000 votes. He adjudicated 106,000 of them to determine voter intent.

What that basically means is that voter goes in, they put in their ballot, and that ballot says I want to vote for whomever—they [the adjudicator] can go in and say, “That actually doesn’t look like what [the voter] meant. We’re going to put this down as someone else.” You’ll notice that I don’t say one candidate over another because it’s not about that. It’s about actually having every vote count and actually allowing that vote to be the voice of the person who cast it, as opposed to the adjudicator.

Mr. Jekielek: Have you tried to get answers as to what actually happened here?

Ms. McLaughlin: The data is publicly available, as I’ve said, but it would be nice if the secretaries of state released the data on the public side of their websites. In order for us to get any of that information, it has to be FOIA’d [requested under the Freedom of Information Act]. We’ve been very lucky to work with a lot of congressmen and senators who are just as deeply interested as we are. We’d like there to be a lot more.

There have been individuals who are deeply concerned about what’s happening, and they are putting those FOIA requests in to get the information directly from the source, whether from the secretaries of state or their communications directors, explaining that we need to find out answers. In Maricopa County, there were five board supervisors who all said that they were on board for a forensic audit, but the secretary of state wouldn’t allow it.

Mr. Jekielek: What is the recourse in a situation like this? What is it that you hope to accomplish with these examples that you’ve [given]?

Ms. McLaughlin: It’s a great question. We’re getting into numbers where we’re talking about hundreds of thousands of errors and tens of thousands of switches. When you start to see these anomalies, corrections, whatever adjective you want to give them, happening over and over again in all these different states—we’re talking about six swing states, but it happened in several states—and then you compare them to the states where it was a normal election, if you can A/B those two and look at what a normal election would be, you start to see it even more clearly, and you understand something very strange happened here.

Why doesn’t anyone want to look into it? Why would they move forward in Georgia today with an election, having changed nothing, with so many people coming forward and giving all of this information? These are people who have been at the voting polls and working their precincts for 30 years, some of them, and they’re saying, “I’ve never seen anything like this. This doesn’t make any sense.” So we just want to get those answers. The data tells us the story. We just need to read it and find out why.

Mr. Jekielek: For Georgia, with a heightened level of scrutiny which this election is going to be getting or is getting as we speak, do you think that makes a difference?

Ms. McLaughlin: I do, specifically in Georgia with the adjudication process. One of the main arguments from Georgia has been, “We’ve done two hand recounts.” Counting counterfeit money twice doesn’t give you a different outcome; it gives you the same outcome. That’s what we kept trying to explain when we testified before the senate committee in Georgia: if you put a ballot in and it’s adjudicated, the original ballot with the voter’s original intent is gone. There’s now a new ballot that can be printed, and they keep that as their record.

Now, when you’re comparing them, they all add up and it looks like the voter wanted to vote that way, but that’s not how the voter voted. That’s how the adjudicator determined voter intent, and that changes the outcome, and that’s why it doesn’t make any sense. An adjudication is supposed to have two witnesses, one from each party, and sometimes an independent as well. Those processes didn’t happen. We spoke to the adjudicators; they didn’t have that at all.

Mr. Jekielek: So the bottom line is with many of these ballots, it may just simply be impossible at this point to know the original voter intent.

Ms. McLaughlin: Exactly right, especially because the ballots are separated from their envelopes, they’re put into a machine, there are no audit logs, there are no login credentials, the machine is already logged on, they don’t even have to log in, you don’t even know who made the change—there’s no tracking. This is our most important civic duty and we have absolutely no receipt or record of what we’ve done.

Now that we’re pointing out the errors and we’re showing that there is a trail, and that the data speaks very clearly to that, they’re saying, “No, nothing to see here. We don’t want to look at that.” Why? Why don’t you want to look at that? That would be important for any election, that’s our republic we’re talking about, and that should matter to everybody.

Mr. Jekielek: Now, let’s look at some of the data details here. We’ll bring the data scientists on.

Ms. McLaughlin: I can’t wait for you to talk to them. They’re fantastic human beings.

Mr. Jekielek: Justin Mealey, Dave Lobue, great to have you here. I want to start by actually finding out a little bit about you. Let’s not talk data right away. For starters, Justin, a little bit about your background and what motivates you to be doing this work?

Justin Mealey: I was in the military for nine and a half years. That’s about halfway to retirement, so people have to make a big decision at that point, and I thought, “I love what I’m doing, but at the same time, I could be making a lot more money than in the military,” so I left and became a contractor. I was on a CIA contract for the ODNI [Office of the Director of National Intelligence], and as a contractor, I worked at the National Counterterrorism Center.

When I was working at the National Counterterrorism Center, I did a couple of things that were really, really cool data-wise, where I was able to save the government probably about $10 million. So when I did that, I thought, “If I can save $10 million, can I make $10 million?” That’s when I left the government completely and started out on my own, and then just failed business after business. So that was that.

Eventually, I had to get a job, understand a couple of things, and get into a couple of industries. I got into the advertising industry and into building software for commercial products. It spawned a whole career; what I do now is build software for one of the Big Four accounting firms. I really love it.

When I got notice of what happened on November 3, I thought, “I have the ability to look at this stuff and in ways that maybe are different than some of the other people that are looking at it.” I thought, “What can I contribute to see what happened, because it doesn’t make sense to me?” Just looking from the outside intuitively, the data didn’t really make sense and I didn’t have the raw data.

So I wanted the raw data. I wanted to look at the numbers and say, “This is good and Trump legitimately lost,” or “This is not good and Trump legitimately won, and Biden legitimately lost.” I need to know, because I think that’s our type of nerd passion, it’s like we need to know. So that’s what drove me to come to the Data Integrity Group.

Mr. Jekielek: Dave Lobue, tell me a bit about yourself and what brought you to this?

Dave Lobue: I’ve been working in all types of data for over a decade now. Ever since I began my professional career, I’ve selectively engineered my own trajectory based on the data available within different types of industries. Over a decade ago, data wasn’t as freely available as it is now. I worked in primary quantitative research consulting: survey-based data, polling-type data. From there, as the industry caught up in data collection and accumulation, I moved into other industries that had better data collection capacity.

Again, I’ve always had a passion and an interest in pursuing this type of analysis, and understanding data and insights, so I moved into a few different areas. I’ve worked in telecommunications, financial services, and most recently, obviously, being involved in this project, I’ve been working in the artificial intelligence space. This is the next horizon for a lot of data. New applications are coming out weekly, monthly, for ways to utilize this type of data.

When we saw what happened on election night, obviously, those were some very interesting data patterns. From that, I thought, “I need to explore exactly what happened here,” because I hadn’t seen these types of behavioral patterns anywhere in the quantitative research I’ve done, in first-hand data collection across any industry. So that’s what brought me into the mix of understanding what generated this behavioral pattern. As we got closer and closer, and we started looking into the data and accumulating more and more resources, we came to the point now of understanding what exactly happened.

Mr. Jekielek: That’s pretty fascinating. You’ve cataloged all sorts of errors in these contested states. Now, just looking at the data, is this data anomalous compared to what normally happens?

Mr. Lobue: Absolutely. That was a key entry point we went through early on: we had to establish a benchmark for what is normal versus what is abnormal. We began to isolate the states that were exhibiting these patterns and the ones that were not. As we now know, these five key states had a much higher concentration than the others.

Some states exhibited no anomalous patterns at all, and we said, “This is great, because we have a running benchmark to compare against what we’re seeing, on the other hand, as suspicious, irregular, and in need of deeper attention.” The bottom line is that many states have these irregularities, with a deeper concentration within these five or six particularly.

Mr. Jekielek: Fascinating. How many of these baseline states did you look at, out of curiosity?

Mr. Lobue: We looked across all states. We looked at the state level and at the county level, and data is available at the precinct level. We really had an abundance of data across these data sources. Thirty states had some sort of strange activity, to varying degrees of unexplained behavior, we’ll call it.

The remainder incremented normally. There were none of the decrements or vote switches or removals we’ve seen happening in that other bucket. That was nice to have: it meant the system was operating properly, the data just flowing as it should in those areas. It’s only these select few where we need to continue to investigate what’s going on.

Mr. Jekielek: You’re telling me that these five specific states were just off the charts compared to the remaining 25 that had anomalies?

Mr. Lobue: Absolutely. Each one individually could have a team devoted to it. Time is a limited resource here, and we’ve been watching the clock closely so that we can reach a complete resolution in the time we have available. We’ve had to focus on these states, and we’ve done the deepest dives there, but even within that, there’s quite a bit more going on relative to all the other states, absolutely.

Mr. Mealey: When it comes to the analysis we’re providing on these states, it starts off with a wide and very shallow analysis. You can think about it like filters at different levels: we have these filters with very, very large pores, and as the pores get smaller and smaller, we filter down. What falls through is how we determine what we should focus on. The thing is, when it came down to the swing states, every single swing state had things that fell all the way down to the bottom, where as deep as you go, you still find the problems.

The thing is, just finding an error or an anomaly by itself isn’t enough to say fraud occurred. You have to look very deeply into the data, you have to understand it and come up with reasons why, and then test your theories against the data. It’s this constant process. To do that on one state can sometimes take us a couple of weeks. For some states, especially when we have to go into a lot of different data sets and scrape data from a lot of different places, it’s a very labor-intensive process, and that’s only for the presidential election.

There’s an entire down-ballot-ticket of all these things that we haven’t even been able to look at. We want to get to it, but obviously, the importance for the country as far as the election is concerned, we had to focus on only the presidential election and we had to narrow our focus down to just a couple states.

Mr. Jekielek: One thing that’s really, really interesting to me, again, is this adjudication process that we’ve been discussing. Again, you guys documented these large batches of ballots that would switch or be removed all at once, and I’m scratching my head trying to figure out how this could have happened and how this works. Perhaps you can dig into that for me.

Mr. Mealey: When we see some of these votes switch, it’s purely theory to go into how the actual votes are switched or decremented and things like that. We’re not really able to say, because we don’t have the full trail of all the data. But the adjudicated ballots we pointed out specifically, because there was a very high probability that there was some problem with that process, because it was so insecure.

When you go and adjudicate a ballot, it destroys the audit history. You can’t go back in time and look at what the original voter intended. Unfortunately, we can only look at the data in its raw form, and that’s what our focus is limited to, along with bringing out those processes. As part of figuring out where the votes switch, we have to map out that whole process.

So when we came to that adjudication point in the process, that’s when we really learned that there’s something really weird going on here because how could you have a process that is a complete break in the chain of custody? It wouldn’t stand up in court. If I put that ballot up in court and said, “This is the ballot,” the opposition would just say, “How can you prove it?” If this vote wouldn’t hold up in court, how’s it supposed to hold up against all the other votes in the land?

That’s a real problem that we have with it. That’s why we focus on the adjudication: there’s a problem here. It doesn’t mean that this is how fraud occurred; all we’re doing is proving that the errors occurred. But this is a huge area of exploit, in that somebody could easily exploit this situation to create ballots for another candidate, or for the candidate they want to win.

Mr. Jekielek: But you have this scenario where you believe that it’s physically impossible to adjudicate the numbers of ballots in a reasonable way that were adjudicated in a relatively short period of time. I want you to explain that to me.

Mr. Mealey: What’s really nice is that we do have it documented—one of the people in charge of the elections in Fulton County saying that 113,000 ballots were cast and 106,000 were adjudicated. Now, if you’ve seen our video, we put some of that information up. That is a process that is pretty much all machine to machine to machine.

When we’re looking at the adjudication piece of that, we’re thinking: what happens when you adjudicate, if 106,000 ballots were adjudicated? When we talked to people who were adjudicators, we asked, “What’s the fastest that you can adjudicate a ballot?” Imagine it’s me and another person sitting there, looking at a screen, and I say, “It looks like it might be this,” and the other person says, “Yes, it’s that.” It’s about 30 seconds; that’s about the fastest that you can adjudicate a ballot.

So if you could adjudicate a ballot every 30 seconds, that’s basically two [ballots] per minute, and with 106,000 [ballots], that’s 53,000 [minutes]—it comes out to about 883 man-hours. But the logs don’t show that there were enough people to actually adjudicate that by the time he gave that interview. There’s not enough physical time, and that’s if you did every single one, working 24 hours straight on all of this, with 30 teams or whatever it is; you would need all those people adjudicating in order to achieve that number.

Of course, if you look at a waterlogged ballot, or if you looked at a ballot that had an X here and a check there, it’s not going to take 30 seconds anymore because now we have to talk about it. But assuming the best case scenario, 30 seconds, it’s physically impossible with that amount of time to adjudicate that many ballots.
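The arithmetic quoted above works out. Using only the figures stated in the interview (106,000 ballots at a best-case 30 seconds each), a quick check:

```python
ballots = 106_000
seconds_per_ballot = 30  # best-case estimate quoted by the adjudicators

total_seconds = ballots * seconds_per_ballot
man_hours = total_seconds / 3600
print(round(man_hours, 1))  # 883.3, matching the ~883 man-hours cited

# One two-person team working around the clock would need this many days:
days_one_team = man_hours / 24
print(round(days_one_team, 1))  # about 36.8 days
```

Whether that duration was actually available depends on how many teams worked in parallel and for how long, which is the point contested in the interview.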

If that’s the case, when you adjudicate a large number of ballots, you can adjudicate a batch of 100 and just say 100 for Biden, or 100 for Trump, however it works, and that entire batch will destroy the record for every single one of those ballots and move the votes to exactly what was entered. These are supposed to be individual adjudications, but that doesn’t necessarily happen.

Mr. Jekielek: Let me just get this straight. So basically, the process could be one where these ballots are actually, in theory, adjudicated individually, but then when they’re put into the system, there are 100 or more at a time that are put in for a particular person. They’ve kept a little tally or something like that.

Mr. Mealey: I would say, rather, that it’s physically impossible to adjudicate that many ballots one by one, because they could have started their adjudication on November 3 and would maybe get through about 58,000 [ballots] by today. So how could you do 106,000 [ballots] with the number of teams they had working on it?

We were asking, “How long are you working on this? Are you working in shifts?” We went really deep into their process. From what we could tell, it looked like you had a team of two people who would be replaced by a shift of another two people, and then there was another group, so two separate groups adjudicating—that’s four people adjudicating. Even if you did that 24 hours a day, you still wouldn’t be able to finish in time for today, let alone one day after the election.

But obviously, that’s just doing it individually. If you did adjudicate them in large batches, that goes against the entire adjudication process, because you’re supposed to look at each ballot to determine voter intent. You’re supposed to physically inspect it, with another person agreeing. How could you physically inspect 100 ballots at once? The machine doesn’t allow you to do that. It only brings up one in front of you, and you both talk about it, so how can that even be possible? It doesn’t make any sense to us.

Mr. Jekielek: What has been the response to you finding this out and sharing this with the secretary of state?

Mr. Mealey: Absolute silence. Not one secretary of state has refuted a single statement. In fact, they spent time writing articles about other people who testified in Georgia, but they have not refuted our statement, because ours cannot be refuted based on the knowledge of the data. It is just actually how the data is. There is no refutation for it. I would love to hear it if you could [refute our statement].

When we released these videos, we put out the data set: this is what we used; see what you can come up with. If you can come up with an explanation for what we’re seeing, that’s what we’re here for—it’s the Data Integrity Group. This is not the “get Donald Trump elected” group. We want to know what happened and why these errors occurred, and that’s what’s important to us. It is that nerd curiosity that led us to even look at this thing, and we need to know why this happened, what the reasons are, line by line. So if you say, “There was a human error here,” that’s fine—that’s one. I need to know about all the other 37. Are they human error as well? Are they machine error? I don’t know.

Mr. Jekielek: Dave, tell me a little bit about these 400,000 errors that you found in Pennsylvania detailed in this video that we published exclusively on The Epoch Times. It’s been great to partner with you on that.

Mr. Lobue: Pennsylvania—I’ll just say off the bat, as we get deeper here into the background—obviously has its own unique thumbprint in terms of what happened within that data structure. It has its unique processes for how the data is transmitted on the ground, from when everyone puts their vote in the machine or the tabulator, through the computer system, and on to the eventual certification process.

In Pennsylvania, we noticed a few major, glaring irregularities, as you see in the video, of varying magnitudes, some of them deeply concerning. As Justin said, if there is an explanation, then that's fine. All we would ask for is a thorough explanation, not a brushing off with the explanation that it's simply human error.

In data integrity, we'd like to know what the process in the code in the background is, to confirm, given the gravity of the situation, that that is in fact what happened. As a programmer, you'd like to know exactly what part of the program malfunctioned, and from a system-log perspective, you want to know which of the logs had the error. I don't think anyone out there with a background or knowledge of these systems and programming would be satisfied with a simple "It's a human error, it was just a glitch," because what does a glitch mean? There's always a precise way within a computing system to identify exactly what happened and what went wrong.

To get back to Pennsylvania specifically: we saw a series of irregularities within the data, in either Election Day tabulations or absentee vote buckets, where votes behave oddly, and again, not universally. I talked about benchmarks; we want to make sure that certain counties look right, because if an error were universal across all counties, then it would likely be some sort of system error that was tripped, or some part of the code that is consistently erroring.

That's not the case. It's irregular and, more than that, infrequent: we see far more regular updates across these counties than anomalies, which is why they're the anomalies. Focusing on those is where we bring out that deeply concerning drop in the data.

Again, it's not just Donald Trump. We see the irregularities moving across third-party candidates and write-in candidates, and there has simply been no explanation, if you evaluate this from a strictly data standpoint, of why this would occur. Whether it's intentional manipulation or a system issue, there needs to be an answer. I think that anyone with a background in and understanding of these processes deserves to know precisely what that answer is, and not a simple brushing off of, "It's a human or a log [error]." We'd like to see the log.

Mr. Mealey: I do want to bring out one point, too—we're not saying that negative votes didn't occur for other candidates. Seeing any kind of negative drop in votes is still worrying, so that's still a problem. It occurs for other candidates—it occurs for Jo Jorgensen, it occurs for Joe Biden—but we're talking about a different magnitude of votes.

The thing is, a lot of times when you see those drops in votes for other candidates, you also see them go back up and recover. But with Donald Trump, he'll have a drop in votes and then just won't recover. That points to another characteristic that separates it. The line we had about that, to clarify, is that while we did see drops in votes for other candidates, it's not the same type of drop.

Mr. Lobue: I'd like to say something further on Pennsylvania too, which is our process there. There's a data engineering, database-structural component, which is almost the entry point. That's where we say, "Now we know where to focus," and then the analytics kick in, where we look at statistical anomalies. There's a multi-tiered process we go through to ensure that what we're seeing (something that, with all the knowledge we have on the team, horizontally and vertically, none of us can explain) is in fact what's happening there.

It’s when we reach the end of that rope and say, “There’s no reasonable explanation,” that we say, “So what is it?” Because the combination of database oddities, anomalies, irregularities, and statistical improbabilities of some of these things that we’re bringing out are just way off the charts in terms of what you would expect in a normal distribution or any type of expected behavior.

Mr. Jekielek: Once again, you’re making these data that you’ve scraped available to everybody to run whatever analyses they want, to try to offer alternate hypotheses as to what happened.

Mr. Lobue: Yes. The other thing, too—it is complicated, obviously. For those who have a knack for programming, we have all the code available. We've written it across a few languages and cross-verified that everything we're doing is accurate and precise. Everything is open and out there in the spirit of transparency, because we want to know the answer to the question of why this happened, and there's been no reasonable explanation.

Mr. Mealey: We've actually talked to a couple of people whom you could consider opposing voices on the things that we brought forward. One example would be in Arizona. We talked to a person whose Twitter handle is the data guru or something like that. His name is Garrett Archer, and he took a look at this stuff and provided some reasons why he thought it didn't work.

We had a conversation and talked to him about all of that. He started off saying, basically, that there was absolutely no tomfoolery going on in that election, and by the end of the conversation, he agreed, "Yes, maybe we should have a forensic audit, looking at the data." That's the power of looking at data, and I give him a great amount of credit for approaching it from an objective standpoint—and he's a reporter.

He looked at it from an objective standpoint and understood that we do need to look at this, maybe. He agreed that we should possibly have some sort of forensic audit of the election inside of Arizona. Again, the different types of things that we found are different per state, so we were looking at specifically his state and talking to him about his process, which he knew intimately.

Mr. Jekielek: So John Basham, great to speak with you as part of the Data Integrity Group.

John Basham: Good to be here, Jan.

Mr. Jekielek: You played a role in getting these people together. You’re one of the on-camera personalities in the group. How did this group actually come together?

Mr. Basham: Right after election night, and even on election night, I think there were a lot of us who were working data—my expertise is in numerical weather prediction. We deal in large numbers and vast datasets. Those of us who were watching the election noticed that the numbers didn't work in any scientific sense. They didn't work statistically, and they didn't work on the simplest level, which is that votes should never go backwards.

So there was a group of folks online within the first 24 hours who were data scientists, and we all started comparing notes. In the first week, I think, it really started. I put a tweet out that listed a few of the states, six states, if I recall correctly, and said, "These are just initially some of the first things that we've seen in the raw data that we are desperately trying to get. There's something wrong here, something doesn't look right."

That tweet garnered so much attention that Lynda McLaughlin reached out to me and said, "Hey, where are you going with this? What's going on? What can we do with this?" From that moment on, we started putting our heads together and thought, "There is no group out there that's looking at the data. Everybody's looking at it from a political standpoint: it's got to be that you're either for Trump or against him, for Biden or against him." That wasn't the case—it was that you're either for each person's real vote being counted as cast, or you're against that vote being counted.

It was really that simple. It is the American Republic. America is exceptional because of what we've got, and if we ignore that, it'll be gone. So I reached out to a large group of data folks, and a lot of them were very worried about even coming into the fold. There are some incredible folks who have contributed but wouldn't put their names on anything, because they were afraid they would be attacked—[someone] would think, "Oh, you're taking a political stance."

Well, the numbers aren't political. We just looked at the numbers. Which, by the way, were hard to get; after going back and forth, it did take some time. It's not like there was a company in place, or a group in place, or even systems in place to figure out how to attack all of these states and all of this data, which is handled differently in every state and, in many cases, differently in every county. So we had to cobble together a group of experts, and it took some time to put that group together.

Once we did, we had the top level, which is the Data Integrity Group, the group that you have seen and talked to. Then there's a tremendous number of support people behind that who are experts in very specific things, whether it be programming in different languages, large number theory, or machine learning. What we started doing was coordinating this group of very intelligent, very driven people who all saw the same thing. It wasn't that they saw that Joe Biden had won. It wasn't that they saw that Donald Trump had lost.

As a matter of fact, our group has liberals, it has conservatives, and we have someone in our group, I won't call him out, who didn't even vote in the election. So it really came down to this: we were passionate about the data, the numbers we were seeing, and the fact that every one of those numbers represented a person's intentional vote toward something. So it had to be right.

That's how the group got cobbled together. And then thousands, and I really mean thousands, of man-hours since Election Day of non-stop video conference calls with the group, in 12- and 14-hour runs, putting together data, gathering data, checking the data to make sure we were right. The biggest thing we've seen in this entire scenario is that people will come and attack you because you've said something against the narrative. Look, there's no narrative in numbers. Either the number is correct or it's not. So for us, it was very important that we got the numbers right, which was a very hard thing to do, starting with gathering those numbers in the first place.

Mr. Jekielek: You actually mentioned that it wasn't so easy to get this data, even though it's publicly available.

Mr. Basham: There is no central gathering point in government for this data. You would think there would be one; there's not. You have to go to the individual states and counties to compare it with the Edison data. There's no central place that shows you what's called the time series of the data, as the data came in throughout the night. Most of what we see are the end results: at the end of the day, or the next day, we had 500,000 votes in this county for this person and 400,000 for that person, and that's it.

Well, what we found was that it was very important to look at the time series. That's where we found these errors, where votes would either go backwards or disappear completely, and then maybe reappear later in another count. But the fact is, the government did not cooperate with us. We reached out in many cases trying to get data, trying and trying. We had no real help, ever. There were some places where it was much easier to get what we were looking for.

But we actually had to invent systems, write computer scripts, write programs to read data, either off raw PDF forms in one case or from what's called JSON data in another. We would have to go in and rip the data out in pieces, one piece here and one piece there, and cobble it together. Then it became very important for us to make sure what we had cobbled together was correct, because the last thing we wanted was to say, "This is what we found," and then later say, "Our data was wrong." So we had to compare.

In this case, we used three sources. We made sure that whatever our final numbers were, they matched the secretary of state and the Edison data, or were within just a couple of votes. And I really mean a couple, two or three. Then in many cases, if we were looking at a specific area (for instance, places in Pennsylvania where we would look at a single county), we would make sure we were looking at the county data. Now, none of those sources, whether it be Edison, the secretary of state, or a single county, uses the same data format. They don't report it the same way, they don't put it in the same kind of tables, and they don't use the same programming language.

So this very intelligent group of data scientists and programmers and machine learning experts had to build a system to gather from each individual source at every county. It is a different solution for each county in order to gather that data. You can see how intensive it was for us to get this data. The sad part is it should have been something that the government was saying, “Here, take a look. It’s your vote. It’s our election. This is our democracy. Take a look. This is what we’ve got. What do you see?” Because that’s how you trust an election. You show everyone what happened. But that’s not what’s happening now.
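The core check the group keeps returning to—that a cumulative vote total should only ever increase over the course of a count—is straightforward to express in code. The sketch below is purely illustrative; the timestamps and totals are hypothetical, not actual election data, and this is not the group's own code.

```python
def find_decrements(series):
    """Given a time series of (timestamp, cumulative_total) pairs,
    return (timestamp, previous_total, current_total) for every point
    where the cumulative count went down instead of up."""
    drops = []
    for (t_prev, prev), (t_cur, cur) in zip(series, series[1:]):
        if cur < prev:
            drops.append((t_cur, prev, cur))
    return drops

# Hypothetical county time series: the total drops at 22:00.
timeline = [("20:00", 1000), ("21:00", 1500), ("22:00", 1200), ("23:00", 1800)]
print(find_decrements(timeline))  # [('22:00', 1500, 1200)]
```

Any non-empty result would flag an update that needs an explanation, whether an adjudication correction, a reporting error, or something else.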

Mr. Jekielek: You have a few recommendations that your group has come up with. One of them is, it seems to be and correct me if I’m wrong, that you’re saying a forensic audit on all of these contested states is essential to deal with these anomalies, to deal with these irregularities. That’s one thing. Another one, you’re saying the government should actually organize this data in a way that people can access easily.

Mr. Basham: Obviously, there are two sides to this. For the future, for what we do going forward in our democracy, yes, there needs to be a single format for the way data is given out, and it should be open and transparent at all levels. You shouldn't have to pay $20,000, $30,000, $50,000 to get a list of the people who have voted, or the data you need to verify that an election was in fact on the up and up.

As far as forensic auditing goes, we've kicked this around, and one thing that hit us was that the vote in America is probably one of the most basic human rights we have. It is a human right that—and you have a background in human rights—is so important that everything hinges on that vote. Your entire sense of freedom, what you can and can't do: that vote matters. But take a step back. People look at that in the big, general sense and don't think about it. It's an out-there feeling: "That sounds good, and I get that."

Let's make it your bank account. Would you accept it if you went into your bank and deposited your check (you know how much your check was), and you came back the next day and the total was different? I don't care if it's a $5 difference or a $500 difference, whether it went up or down. Would you accept that from your bank, that your totals would just magically change?

When you went to ask them about it, they recounted and got another total, different from the first two. When you said, "I need to see the data. Let me see the books. How did this come in and go out?" they said, "No. You've got to trust us. Our tellers tell us that this is right." That's your bank account, and I guarantee you would fire your banker. In a heartbeat, you would fire your banker.

But the vote is more important than your bank account, because with a vote, they can take all your money. That's where we are. I think that people aren't reaching out to their senators, reaching out to their congressmen, getting on the phone and saying, "We have to get a forensic audit, an actual audit that looks at the entire trail."

And I would love for someone to completely explain it and say, "No, this is right, and let me tell you why you're missing something." We are open to that. I would love to see that happen.

You've seen some of the polling, where upwards of 40 percent of the American public don't believe that this election was fair and valid. If that's the case, we've got a problem in our republic. We need to get back to a situation where—I hearken back to the Bush-Gore election, where we had the infamous hanging chad.

Whether you agreed or disagreed with the way it ended up at the end, the bottom line was, America got to look over the shoulder of all of those people who went and had to recount those votes. They had a Republican, a Democrat, and the county worker. They were holding it up, there were cameras there, everything was visible. The courts were engaged. They did it immediately, the Florida Supreme Court, and then within hours, the U.S. Supreme Court, and then they’d start the process again.

This election, it seems so very different than that. It seems as if the courts don’t want to engage, even though there are very clear indications that something’s wrong. Fraud, not fraud, there’s something wrong. The data isn’t correct. One plus one doesn’t equal seven. If the numbers aren’t matching, we need to just have answers as to why those numbers are wrong. Whether you think it was fraud, whether it was mistakes, whether you think there’s a valid reason for how it happened, we need transparency in our election and our electoral process to have trust in it.

Mr. Jekielek: Any final thoughts before we finish up?

Mr. Basham: Democracy is a very important thing. If you stop and think about the United States, it's an amazing, exceptional place to live and to grow up. Hundreds of thousands of people immigrate here because we are that great American Dream, the shining city on a hill that Ronald Reagan spoke of. It's true.

If we throw away the simple thing that makes us so great, that the American people's voice matters; if, when something goes wrong and the people say, "We don't trust this process," we ignore that, then we're throwing away American exceptionalism, and it's not worth throwing away. We need to save this. We need to look at this race. No matter what the answer is, no matter who sits in the White House, that's secondary. The important thing is to find out the real answer to why this election looked so wrong.

Mr. Jekielek: John Basham with the Data Integrity Group, such a pleasure to have you on.

Mr. Basham: Thank you, Jan.

This interview has been edited for clarity and brevity.

American Thought Leaders is an Epoch Times show available on YouTube, Facebook, and The Epoch Times website. It airs on Verizon Fios TV and Frontier Fios on NTD America (Channel 158).
Follow Jan on Twitter: @JanJekielek