Kara Frederick: Big Tech Totalitarianism and America’s Emerging Social Credit System
“It’s not just social media companies or your right to be on Twitter, your right to be on Facebook. It’s everything. Email delivery services, online fundraising platforms … It’s every aspect of your digital life,” says Heritage Foundation research fellow Kara Frederick.
From Twitter to GoFundMe, digital platforms have shut down accounts, fundraisers, or online transactions and removed people from aspects of digital life because of their political views. “These practices are frankly mirroring that of what China does in the social credit system,” she says.
Frederick previously served as a senior intelligence analyst for the U.S. Naval Special Warfare Command, and spent six years as a counterterrorism analyst at the Department of Defense. Later, she helped create and lead Facebook’s global counterterrorism program.
In this episode, she breaks down what she sees as an evolving Big Tech totalitarianism, and offers a strategy to rein it in.
Jan Jekielek: Kara Frederick, such a pleasure to have you on American Thought Leaders.
Ms. Kara Frederick: Thank you for having me.
Mr. Jekielek: Kara, you just published a report for the Heritage Foundation; the name is bold and I’m going to need you to justify it for me here: “Combating Big Tech’s Totalitarianism: A Road Map.” Totalitarianism is the word that struck me. A lot of people know about surveillance, sometimes very deep surveillance, some semblance of a social credit system. Perhaps totalitarianism is a whole other step. What are you thinking here?
Ms. Frederick: Yes. So, that word in particular, it’s a nod to this trend that we identified and we really probed in this report. And that trend is this increasing symbiosis between the government and tech companies. I looked at your last interview with Rod Dreher who talks about this soft totalitarianism that’s plaguing the west now.
He says it’s the politicization of everything. As you know, Airbnb refused services to a prominent, very conservative voice and her family, Michelle Malkin. She can’t rent and use that service based off of her political ideas and her viewpoint.
Joe Rogan and Spotify. You have the government, Jen Psaki, the press secretary for the Biden Administration, standing up at the White House podium and singling out a specific company, not even an American company, mind you, and saying, “Spotify can do more to combat this crisis that they’re accusing Joe Rogan of perpetuating,” which is the misinformation and disinformation crisis surrounding COVID-19 vaccines, et cetera.
And she’s done this before. So, the government has used tech companies as its agents to chill speech before. In July, over the summer, she stood up at the podium alongside the surgeon general, right in tandem, and basically said, yes, there are a handful of users and accounts, a dozen in fact, that we are singling out for purging from the platform, and this platform is Facebook.
We are working with Facebook to do so. Within a month as CNN reported, all of those users and accounts were off the platform, and they were gloating in this instance. So, it’s that integration of the government and big tech companies to police speech that I think is troubling and very evocative of the coming totalitarianism.
There’s a litany of other examples. In addition, you have Joe Biden in January making a direct appeal to tech companies to do this, and the Homeland Security secretary, Mayorkas, also saying we’re doing this, not even in the context of COVID misinformation, but in the context of election integrity and election security.
So, this is becoming pervasive. And big tech companies are the willing agents for the government to have really a heavy hand on the American people. Then, we can talk about gradations of this totalitarianism when it comes to a potential coming social credit system. The contours of that are developing right now at the hands of these big tech companies.
There’s, again, a litany of abuses that we can talk about. The seeds are developing now. In some instances it’s overt, such as when the January 6th Committee called on 35 tech companies to give up their data on people who were not even in the Capitol building on January 6th, but just milling around, and they are readily responding to these requests.
In my mind, you have a confluence of data points that absolutely supports the politicization of everything, especially in the tech world. The normalization of specific tools that were originally meant for national security purposes now being repurposed to look at misinformation, disinformation, malinformation, and dissenting viewpoints as propagated by Americans on social media.
So, absolutely, this is a coming totalitarianism. These practices are, frankly, mirroring what China does with its social credit system. You have to remember that started with private companies as well, in specific provinces, in the financial sector. So, I think it’s extremely important for Americans to get their guard up and recognize what’s happening as it’s happening today.
Mr. Jekielek: Wow. So, you’ve just given me your whole overview of this incredible report that you’ve put together. You know what I’m going to do, before we jump to my next question where I want to explore this focus on combating misinformation, malinformation and so forth. This seems to be the play of the day, so to speak, right?
But tell me a little bit about yourself, how you got here? I was looking at your resume. You have pretty fascinating steps to get to your current role now where you’re looking at all this. So, tell me about that.
Ms. Frederick: Yes. What’s really interesting and not by my own hand or my own design, but all of the experiences that I’ve been fortunate enough to really fall into, have built upon each other to get me to be able to publicly comment on public policy and try to provide recommendations for how to fix some of these problems that we’re seeing emerge now.
I started out [wanting] to be a soccer player. You don’t need to know this, but I played over in England for Fulham, and then at the end of the season, was traded to Chelsea. My dad told me, “You need to find a real job.” At the time, there were two wars raging.
He was a Marine, so I knew I wanted to be part of it at the time. I ended up getting recruited by a three-letter agency, and that was the Defense Intelligence Agency. I entered on duty in January, and I was on a plane to Afghanistan to support special operations forces by the summer.
While I was working in the intelligence community, I was embedded with the National Security Agency. Our whole job there was to look at digital network intelligence and analyze how terrorist actors moved in digital space, so we could find needles in haystacks effectively, Al-Qaeda terrorists.
I then moved to a tip-of-the-spear command, Naval Special Warfare Development Group. Then, Facebook caught wind of me. They decided that they needed a counterterrorism analysis program for global security. This is when ISIS was increasing its propaganda efforts online.
When everyone was saying they had these slick videos that were really recruiting terrorists on platforms like Facebook [and] Twitter, Facebook mobilized against it. They needed people who had experience in the national security bureaucracy developing these systems, figuring out these tools, and assessing and monitoring how terrorists moved and acted online, so they could transfer some of that knowledge to their burgeoning business.
I think the particular sea change for them was when ISIS featured Mark Zuckerberg and Jack Dorsey in a video and said, “We’re targeting you guys in particular.” I think there were bullet holes over Jack Dorsey’s head in the video and whatnot. They were like, okay, this is a big problem. We need to work towards fixing it.
My job at Facebook when I went to headquarters in Menlo Park was to really help surface high quality publicly available information to help improve those platform-based reactions. In layman’s terms, effectively looking at amplifying information around when terrorists attack and help identify if that terrorist had been on the platform in some capacity.
Basically, flag that for the teams that actually took the content down, as well as work with the engineers to help build tools to identify those bad actors further so that we can make the platform, as we used to say, hostile to terrorist actors.
Mr. Jekielek: So, to take down these videos that you were just describing, to be able to identify them quickly and get rid of them.
Ms. Frederick: That wasn’t my job; that was the community operations team. My job was to flag what these bad guys were doing, how they act and posture, so we could contest them in the future. A good example is the Berlin Christmas market. Remember, there was a truck attack?
When that happened, I had to be ready at any hour of the day, unfurl my laptop out of my backpack that I trekked around with at all times and figure out what this guy’s name was. If he had a kunya, if he had a nom de guerre or anything like that.
If he was indeed on the platform right now, and communicate to executive leadership: Is this an actual terrorist attack? Is this just a chaos agent? Is there a political motivation behind it? Who is this guy? Is he on the platform right now? And had he, God forbid, used Facebook to conduct the operational planning?
That was our worst nightmare, so we had to guard against it at all costs and basically communicate that up the chain. Then, afterwards, you do an after-action review that asks how we improve our processes on the platform to make sure, number one, this guy can’t get on the platform to begin with.
And number two, if he conducts an attack like this, he is not on the platform spewing his hate and inspiring copycat attacks into the future. So, it was 24/7. We covered the whole globe when it came to these terrorist attacks, but we were also really looking at how they behave online, their digital practices, and using that to inform us, going forward, about various permutations of bad guys so that we could head them off in advance.
Mr. Jekielek: Absolutely fascinating. I can imagine that a lot of our viewers, including myself, are now thinking to ourselves, wow. Okay. So, that feels like a very powerful set of tools to be using, to be developing. And those could be used against other people as well, right? And this is some of the substance of your report right now.
Ms. Frederick: Precisely. And what worries me is that those tools can not just be applied directly, but can inspire the creation of tools that are then turned inward on the American population for things like dissenting viewpoints. I believe that there are genuine problems on these platforms, right? Human trafficking, advertisements for drug cartels.
CSAM, which is child sexual abuse material, child exploitation and pornography, and real foreign Islamic terrorist content, those are real issues. Not to mention state-linked influence operations, where you have bots that are farmed out to patriotic citizens by the CCP, the Chinese Communist Party, to spew bile all over the internet or cheerlead for the CCP. Big, real problems, right? It’s very important that we do have people within these companies working on that.
Mr. Jekielek: Potentially homegrown terrorism and things like that. Those things do happen, right? Yes.
Ms. Frederick: Yes, those problems do exist. But in terms of resource allocation, I’m detecting what I would call a very troubling trend to focus on right-leaning content, dissenting content. So, this is the big problem. We have failed to agree on a definition of misinformation and disinformation, and on what actual organic expression is versus state-linked influence operations from nefarious actors.
Right now, disinformation seems to be a catchall for views that the left doesn’t like, that the Biden regime doesn’t like. No more demonstrative examples exist than the Hunter Biden laptop story and the lab leak from the Wuhan Institute of Virology. These two things were considered misinformation at the time. You would be censored, suspended, or banned from Facebook, Twitter, and other social media.
Mr. Jekielek: And publicly vilified in polite society, right?
Ms. Frederick: Precisely, yes. Tom Cotton was lambasted as a conspiracy theorist before. Then, all of a sudden, you have more mainstream outlets actually reporting on the potential lab leak as a theory, and it’s okay now, because New York magazine said it.
Then there are reputations like Dr. Robert Malone’s. Any dissent from the prevailing orthodoxy, these heterodox voices, these people who frankly are classical liberals, who just question and really do believe that science is a process of evaluation, deep intellectual debate, and rigor: you have these people being absolutely vilified, being ousted from these platforms, and jettisoned for being [inaudible 00:13:41].
We can’t agree on what misinformation actually means, because right now, again, there’s a disproportionate focus, I think, on misinformation as anything that dissents from the prevailing narrative. That is extremely problematic if you start to take these tools, originally created for noble purposes, and appropriate them to target these dissenters. It doesn’t look like America in that instance.
Mr. Jekielek: Well, okay. So, we definitely need to talk about this; again, it goes back to misinformation. Is there some point at which it’s acceptable to censor misinformation? We’re going to talk about that in a sec. Before we go there: this is, I think, around 2017, when you’re at Facebook.
How closely was the government working with Facebook then, compared to how closely the government appears to be working with Facebook now, since you’re no longer there? And let’s just say big tech in general, not to single out Facebook necessarily, but you have that experience.
Ms. Frederick: Yes. In my experience, there is a very clear pattern. You have these CSAM teams, so teams devoted to countering child exploitation on these platforms. They were the first, in my experience, to be built out, like they had their information security engineers.
You had your analysts. You had your cross-functional teams as we called them. So, community operations, policy teams, everybody works together to contest this one big problem of child exploitation. They had a great robust apparatus created there.
Originally, they did have to talk to the government, because there are various institutions like the FBI that really work on this stuff. They have the institutional knowledge that Facebook at the time didn’t really have. Hence the reason why they poached from a lot of intel agencies and brought people over to give them that semblance of augustness.
People have been working on these problems for a long time in bureaucracies that had hammered out these problems and worked on these problems for decades. That apparatus to counter child sexual abuse material was really in place at the time.
Then, we piggybacked off that. We even poached some of their InfoSec bodies to use on the foreign Islamic counterterrorism problem. There’s a little bit of a pattern of pulling from the people who’ve done it before, the tried and true, to bring their knowledge and know-how to that specific team.
Now, you’re seeing that happening too, but the counterterrorism apparatus [is] being expanded to look at more right-leaning, right-wing expression. Don’t get me wrong, there are real problems that exist in the right-wing atmosphere. There are real terrorist attacks that have actually occurred based on right-wing terrorism, especially overseas in New Zealand and Norway.
But there seems to be some definition inflation that we’re seeing in other realms of the US too, like with the National School Boards Association telling Merrick Garland that parents who are against CRT are actually domestic terrorists or extremists.
Then, Merrick Garland moving within five days, according to Representative Jim Jordan, to institute that terrorist tagging system within our counterterrorism entities here in the US. So, I think that is very problematic, but the pattern is also very interesting to see, how they build it out in tech companies themselves.
In building it out, they do work with the government. They do work on law enforcement response requests and whatnot. They have whole teams dedicated, at least as of 2017, to working with the government when it came to child sexual abuse material.
Of course, when it comes to foreign Islamic terrorism, there has to be some connection as well with the entities that do it best in the government. But from what I am seeing, it’s starting to become a little worrisome, because that closeness is getting more and more integrated and looking pretty odious for a regular dissenting American who [is] questioning COVID-19 orthodoxy.
Mr. Jekielek: Well, exactly. So, going after people who are involved in child exploitation, these images and stuff like that, that doesn’t strike me as a countering-misinformation operation. Tell me what you think. When I look at the trajectory of when this became a thing, this is post-2016.
This is post Donald Trump becoming president. We have this video of Google, like a town hall or something, people are saying, “Oh, my goodness, how could we have allowed this horrific thing.” We have politicians telling social media, “We’re going to be hard on you unless you deal with this problem” right?
There’s this politicization happening. It seems like, at least to my eye, and this is where you can actually help me understand, that’s when this whole focus on misinformation grew, and I would say metastasized. [That] reveals my perspective on it. But first question: is this when this all grew? How do you see that? And [the] second question: is there a place where combating misinformation is actually appropriate at all?
Ms. Frederick: Yes. So, I completely agree with you. I think you hit the nail on the head. It was that 2016. Remember when Cambridge Analytica, that scandal was reported and people were like, “Oh no, the reason why Donald Trump got elected was because of Facebook.” That became the prevailing idea.
Facebook really tried to defend itself. One of the things they did that was revealed by the Facebook whistleblower and the Facebook files that the Wall Street Journal ended up publishing last fall was that they created two internal tools that actively suppressed very conservative outlets.
This is in the wake of Donald Trump’s victory. They realized Facebook lives and dies by its internal tools. They realized they needed to do something because they were getting hammered by the media, hammered by Democratic politicians; just absolutely hammered by the public as the architects of this horrible Trump regime.
They care about their brand and reputation very much, so they moved to counter that. One response was creating those tools that ended up suppressing outlets that are very conservative in nature. One of those tools was still in use as of October 2021.
They figured, “we need to show our work here and show that we are, in fact, not on the side of Donald Trump.” But I think you’re exactly right. That’s when you had people, new employees flooding into these companies that were on fire for that mission.
They were like, “We are going to ensure that this never happens again.” Frances Haugen even admitted that the reason she went to Facebook to work on the election team was that she didn’t want anyone to fall prey to, and I’m extrapolating here, what I think is right-wing ideology. Because her friend was from the Midwest and, according to her, he fell into the misinformation trap, et cetera.
So, you have a new wave of employees who are very gung-ho about this. They see it as a mission. You have all that exogenous pressure on Facebook as well, and the building of new internal tools to really contest that narrative from the outside. That did, in fact, I think, change the trajectory of the company away from one that was a democratizer of information.
Linking the world, connecting communities, building communities online: they dropped that motto and really changed. I, myself, sat through the leaked footage of the Google town hall on that Friday that you talked about, when Sergey Brin got up there and said, in effect, “The reason why we think people voted for Trump, we can’t even relate, it’s crazy.”
They must be in thrall to some form of extremism. I think that is a data point to cling to. Because, as the journalist Mollie Hemingway reported in her book, “Rigged,” they had plans to take Jigsaw, one of Google’s technology incubators, which was originally created to combat foreign Islamic terrorism.
They had plans, or at least brainstormed the idea, of using some of those technologies to combat the extremism of the Trump voters. So, you start to see that definition inflation creep into the lexicons of the top executives in these tech companies. I think a lot of their employees, the product managers at that mid-layer of bureaucracy in these companies, followed suit.
Mr. Jekielek: I have to smile. You’ve mentioned this definition inflation. It’s the first time I’ve heard this particular jargon. It’s a way you can describe how some of this, for lack of a better term, woke ideology functions, right? Anything that isn’t in concert with the way the woke, so to speak, view the world becomes something very extreme, something that must be fought against. Definition inflation is a funny way to describe this; that’s what I was thinking.
Ms. Frederick: Yes. Credit where credit’s due. I actually poached that from Bari Weiss when she was talking about similar instances a couple of months ago. I think it’s very apropos. I think when you expand a definition that traditionally means one thing to encompass anything, basically, to the right of it, you’re expanding that definition in a major way.
We saw this in so many instances too. I’d bring up Kyle Rittenhouse when he defended himself and was acquitted for killing two people in self-defense in Kenosha, Wisconsin during the BLM riots, when they were burning down businesses.
He was defending a place where his father had lived, where he worked, et cetera. Joe Biden, in a campaign ad, said that he was a white supremacist. You saw that language; it starts with the whole white supremacy charge. Then, it starts to expand to the claim that he’s an extremist.
You saw that from actual Democratic politicians on their Twitter feeds and whatnot; they’re saying it over and over again. They’re taking a very tightly defined definition of something, something actually very bad and noxious, and they’re expanding it to include anybody who could potentially disagree with BLM and its motivations, et cetera.
We’re seeing this in so many instances, especially after January 6th, when former deputy station chiefs at the CIA are writing articles about how we should treat Republicans as Taliban facilitators and whatnot. You’re starting to see that occur. That is something I think we need to be on the lookout for and arrest right now, or else it’s only going to get worse.
Mr. Jekielek: Okay. We’re going to talk a lot about solutions, because you do outline some of that in your report. What about disinformation, misinformation, and malinformation? I mean, I don’t even know exactly what malinformation is. But do you think it’s ever acceptable for a platform to do this? Or is it just free speech, and everything that isn’t illegal should be acceptable?
But even that. These are questions I’m asking myself. I’m horrified by some of the child exploitation, the porn, some of the stuff that’s rampant on some of these platforms, the biggest platforms. So, no, I don’t want that stuff there. On the other hand,
you can just pass laws to make whatever you feel like illegal. All of a sudden, hey, it’s illegal to say what a year ago were very basic things, right? Is it ever acceptable?
Ms. Frederick: I definitely think it’s acceptable. But the problem is tech companies have taken it too far. This is where you get into Section 230 of the 1996 Communications Decency Act and the reform of that specific statute. There’s an “otherwise objectionable” clause in it that basically protects tech companies from civil liability.
It gives them immunity from civil liability if they remove speech and content based on a set of tightly defined categories, plus this catchall of “otherwise objectionable” content. Tech companies have really taken that and run with it, and used it to censor all kinds of political viewpoints and whatnot.
So, they’ve been given an inch and they’ve taken a mile. However, it’s different when it comes to, and this is how I parse it out, state-linked disinformation and influence operations. Granted, attribution is very difficult, especially when these state-linked actors deliberately outsource the purveying of this disinformation to regular citizens, to fringe news outlets that then pass it to trusted outlets.
It’s, I would say, almost an intractable problem. But companies, I think, need to focus more on those sorts of things. Purging genuine state-linked bad actors, foreign actors, foreign disinformation is a good thing. I think that they need to start there.
I do think it originally did start there, but, as you said, it metastasized into something very different. That was with the Hunter Biden laptop story and, again, the Wuhan Institute of Virology lab leak theory and whatnot. I think they’ve basically taken that charge, which was originally a noble charge, and gone too far.
Because they don’t pay a cultural price for this. They don’t pay a cultural price for purging Marjorie Taylor Greene off of Twitter. They don’t pay a cultural price for purging or suspending Representative Jim Banks or Rand Paul, or for taking down a Clarence Thomas documentary from Amazon.
Right now, yes, the public is catching wind and saying, okay, this might be a problem. But you have accusations that all of these things are merely anecdotal. I say, believe your lying eyes. All of these mistakes are going in one direction. We’re starting to have studies coming out of organizations saying that, yes, conservatives, people who support Republican politicians, and Republican congressmen themselves
are in fact treated differently on these platforms. But to date, while it hasn’t necessarily flown under the radar, it’s not really affecting their bottom line. Their stock prices keep going up and up. Until they actually pay a price for this wanton censorship, companies are going to keep doing what they’re doing.
I think it’s very important to note that it’s not just confined to social media companies either. We talked about this a little bit in the beginning, with Airbnb denying services to Michelle Malkin because of viewpoints it deemed incendiary. But it’s coming to Spotify now, which is a media services company not even headquartered in the US.
It’s coming to GoFundMe, or they’re instituting it themselves; these are online fundraising platforms. Kickstarter has done it before too, when it comes to anti-abortion and pro-life content being advertised on its platform. So, it’s hitting every node of your digital life, which is extremely problematic.
Because we’re going to be hemmed in more and more. Anyone, not just conservatives, but people with heterodox views, or people whose views become heterodox tomorrow, because we don’t know what’s going to be considered against whatever the prevailing narrative is tomorrow.
It changes so suddenly and so fast. But I think that GoFundMe point is particularly interesting. Because, with the Kyle Rittenhouse saga again, you have a company that actively denied contributions to Kyle Rittenhouse’s legal defense fund.
Because Kyle had pretty much set himself in opposition to BLM and all of its protests and motivations. So, they said nobody can donate to him. Yet they themselves donated hundreds of dollars to the rioters, to the BLM protesters and people who were actively burning down buildings in Wisconsin.
Now, with the Canadian trucker convoy. The Canadian truckers are in Ottawa right now, shutting down certain roads and whatnot to pressure the government to drop its vaccine mandate for truckers. GoFundMe amassed 10 million Canadian dollars to distribute and help this convoy.
Now, they’re saying, “We already distributed one million.” They initially said they were going to take 9 million of those Canadian dollars and give it to verified charities of their own choice. People said, “No, there’s a legitimate case for fraud here.” So, now, they’re saying that they’re distributing it to everybody. But bottom line.
Mr. Jekielek: Distributing it back to the users.
Ms. Frederick: Oh, yes, sorry.
Mr. Jekielek: Because that’s their usual model, right?
Ms. Frederick: Precisely. Instead of giving it to charities of their choice, when they’ve already donated to BLM rioters and whatnot, which is problematic in and of itself. And GoFundMe is an American company, yet the Ottawa government, Trudeau himself, said that they were working with GoFundMe to actually stop the distribution of the funds keeping the trucker convoy going.
At this point, we can’t even voice our opinion by donating money on these digital services. It happens with email delivery services as well. It happens with internet service providers. They’re taking down websites, refusing to host sites whose political views differ from those that conform to woke ideology, like, again, the pro-life stance.
I think it’s important for people to understand that it’s not just social media companies or your right to be on Twitter, your right to be on Facebook, it’s everything. Email delivery services, online fundraising platforms, your ability to get a creative project going. The regular person’s ability to have a business on Instagram. Your ability to sell merchandise that you create on Shopify. Your ability to bank online with Stripe.
We know that 17 digital platforms mobilized within two weeks in early January to suspend or ban President Trump from their platforms. It can happen to the everyday user as well. I think it’s critical that we realize it’s not just social media companies; it’s every aspect of your digital life, which is life into perpetuity.
Mr. Jekielek: What do you make of this new DHS bulletin that shifts the agency’s posture toward further countering misinformation, disinformation, and malinformation?
Ms. Frederick: I think it’s that linking of disinformation with terrorism. These institutions have definitions for a reason. They call things terrorism for a reason, because once you label something terrorism, you can then mobilize the robustness of the entire US national security apparatus developed in the wake of the September 11th attacks.
You can mobilize it against anyone you’re accusing of terrorism. When you link disinformation, malinformation, and misinformation with terrorism, that gives them license to do a variety of things under a variety of specialized authorities, and to visit them upon the purveyor of that disinformation or misinformation.
What’s extremely troubling is that definition is not tightly defined. You’re not even talking necessarily about those state-linked actors I was talking about earlier. You’re talking about people that are questioning what we’re allowed to say about COVID today.
We know that questioning the efficacy of cloth masks, or, again, questioning the origins of COVID: all of these things were at one time considered misinformation. So, anybody who does that, as the science continues to change, are they now terrorists too?
To me, they're operating in a nebulous, opaque space on purpose, potentially to normalize the use of those authorities against people with a specific thought pattern. Even more troubling, I don't know if you remember, but the creation of a new domestic terrorism unit under the DOJ was announced recently too.
The rationale for that was to target people who had anti-authority or anti-government ideologies. So, now, you’re not allowed to protest against the government. Now, you’re not allowed to dissent against the government. You link that with disinformation.
You link that with social media and with terrorism, and you have a cocktail that is very scary going forward for the average American, who believes in a genuine marketplace of ideas, who wants to refine their own thinking against that of others and arrive at proper conclusions.
That is what America is all about. That is what the freedom of expression and speech is all about. Right now, they’re cutting that off at the knees and they’re outsourcing a lot of it to tech companies and working together with them to do so. I think this is basically supercharging the trend that’s already underway and giving it the full force of the United States government and national security authorities to boot.
Mr. Jekielek: What strikes me about the last few years is that the vast majority of the prominent official narratives turn out to be false. It's almost like, if you were going to make a commercial to explain why it would be a bad idea to have the kinds of rules you were just describing, you couldn't make a better one, right?
Ms. Frederick: Oh, exactly.
Mr. Jekielek: I mean, yes, virus origins is a great example. But what about, for example, natural immunity? If anything, you would expect very robust studies showing that it doesn't work, because that's usually the case. But somehow, those studies just haven't existed. Now, after a couple of years, we're seeing rumblings again that, oh, natural immunity actually is robust; oh, this is really interesting, it actually works. But we were very wrong for several years, with actually devastating social consequences, right?
Ms. Frederick: Exactly. That's why these companies originally shied away from being labeled arbiters of truth. They would say, "We don't want to be arbiters of truth," because they know how hard it is to get at the truth. You're exactly right: the natural immunity issue, the cloth mask efficacy issue, and the lockdown issue, where the Johns Hopkins study recently found that lockdowns have a negligible effect, if any.
Mr. Jekielek: My comment was just going to be that we've known for more than a year, from various types of analyses, that the social cost of lockdowns is greater than the cost of the virus itself; lots of these studies have been done. But to your point, the social cost of this so-called disinformation suppression, we've seen it's massive, right?
Ms. Frederick: Yes. That is something that is, I think, often absent from the conversation surrounding tech companies, what to do about them, and how they're working with the government to entrench narratives that pretty much always change once the truth actually does come out.
I think it’s very problematic when the government and tech companies set themselves up as the gatekeepers of all information. I think the Hunter Biden laptop story is a very good example of this too, where you have all of those intelligence community officials coming out and saying, “This has all the hallmarks of Russian disinformation.”
These things have consequences. So, we talked a little bit about the cultural consequences, but consider the political implications. There's a study done by the Media Research Center last year that found that one in six Biden voters in swing states would've changed their vote had they been aware of information that was actively suppressed by Big Tech companies. That includes the Hunter Biden laptop story.
So, you're nudging people in specific directions. They're percolating over things in their minds before they pull that lever in the ballot box. There are real political implications to all of this, to tech companies, hand in glove with the government, actually getting it wrong.
There are some political issues there. And then there are the cultural impacts too; I think these are huge. The march toward a social credit system notwithstanding, what about the impacts on the next generation of American citizens? This is what gets my goat about TikTok in particular: it is marketed to a younger and younger demographic.
We talk about this as the race to the bottom; that's a quote from one of the Democratic senators who's pursuing this. There's a race to the bottom among these tech companies to grab that youth demographic, the preteen demographic.
Hence Instagram is creating an Instagram for kids 13 and under. They're trying to hook these kids with their own discrete platforms designed specifically for them, because of the growth-at-all-costs mindset. But what about the cultural impacts of introducing these devices and putting them in the hands of young teenagers, or even preteens?
We haven't even begun to plumb the depths of some of those effects and impacts. We know that Facebook did a bunch of internal research, again, leaked in the Facebook Files in the Wall Street Journal. In a litany of studies, they found things like this: for one in three women who felt bad about their bodies to begin with, Instagram made them feel worse, and on and on.
There's some interesting data, which the Wall Street Journal also reported on separately from the Facebook Files, about TikTok and a massive uptick in young girls going into hospitals with cases of potential Tourette's and tics. Yes, the pandemic, the atomization it induced, and everyone having devices in their hands absolutely played a role. But the one thing all of those cases had in common was that the girls followed influencers on TikTok. Abigail Shrier writes often about the social contagion and the social media cheerleaders who fan the flames of gender dissatisfaction in young girls, and about how pervasive this is on YouTube.
There's a reason why Big Tech executives don't give their kids devices. Sundar Pichai, the head of Google, said so himself in a 2018 New York Times interview: his then 11-year-old son didn't even have a device. Steve Jobs famously did not let his kids use an iPad.
So, I tell people that that tells you all you need to know about the cultural impacts on the next generation of citizens, who are absolutely inundated with all kinds of influences that we don't necessarily want, as people who are trying to raise good, solid souls formed appropriately toward the good.
Mr. Jekielek: Well, take TikTok as an example, which is arguably the most aggressively invasive of these technologies, and which is arguably under the control of a foreign power that does not mean America well. I don't think we even have the scope to go into that here.
But when you were talking about TikTok, it gave me that extra bit of a chill. As I look at this whole situation you've just described, I see an accelerating leviathan of technology, ideology, and censorship, all pushing in the same direction.
Some people that I’ve spoken with are concerned that it’s unstoppable at this point. You have some ideas about solutions or how to stop it or change things. What would you say here at this point? Because you’ve described a really challenging, potentially intractable problem here.
Ms. Frederick: Yes. In order to stem the tide, and hopefully reverse it, to redress that imbalance between the American people and the tech companies that have consolidated and abused power, hand in glove with the government, I think you need to diversify your tactics.
I think you basically need an array of policy options, not just for Congress and the relevant federal agencies, but also options that trickle down to civil society, state legislatures, state attorneys general, tech founders, and new entrants and other tech companies.
I think you need to hit them with everything to arrest the progress toward this totalitarianism that we talk about in the paper. So, what we say is: get at their priorities. Number one, the bottom line at all costs; number two, user growth at all costs; and number three, brand and reputation.
To attack those, you have to employ multifarious solutions. You start with enforcing antitrust laws. These companies have consolidated all of this power, and they do have anti-competitive practices. You have to clearly define those practices and apply very strict limiting principles.
But at the same time, laws exist to be enforced. So let the relevant federal agencies, under congressional oversight, actually enforce those laws. If tweaks need to be made to cut into these abuses, then I think that should be on the table as well.
Mr. Jekielek: I’m just going to cut in here, but agencies at the moment seem to be going in the other direction, right? That’s what we’ve been talking about today.
Ms. Frederick: Yes, that's problematic. But I still believe in America, in this Republic, right? It's messy. But I do believe that conservatives, or anyone who actually believes in freedom of expression, will eventually be able to exercise that power.
I still think there are good people within these agencies who want to do the right thing. This is also why we're not saying that the solution is government. We're not saying that the solution exists in these federal agencies whatsoever. Hence the spreading out of our solutions: the ability to get at this has to be distributed among all sorts of options. So, that's just one tiny little option.
Another option, I think, is cutting into and scrutinizing the ad tech model. That is the beating heart of these tech companies. That is what Zuckerberg is going to pay attention to: somebody talking about the way they make their money. So, scrutinize that, and make it much more difficult for these companies to gather, microtarget, and exploit the data of users. And on that note, there has to be some data privacy framework.
I think data privacy legislation should absolutely be next on the agenda for Congress. American users should know how their data is stored and collected, whom it's shared with, and how it's used. There should be time limits, limits on indefinite storage. There should be privacy protections built into the design of these products themselves; that's critical.
Then, transparency; that is one thing. Let tech companies tell us, the American people, how they got to where they are in terms of content moderation behaviors and practices. Give us impact assessments. When were you wrong? How many times were you wrong? When were these accounts restored? Why were they taken down? Were they just suspended? What's the granularity here? Let the American people know, in addition to providing algorithmic transparency.
We shouldn't have to wait until a Frances Haugen leaks a trove of documents to know that in 2018, Facebook tweaked its algorithm to maximize user engagement and make incendiary content surface more easily in your news feed. Then, lastly, there's that data transparency element again.
Users, in plain, clear English, should be made aware of how their data is used, stored, collected, and shared; that's absolutely critical. That's how the government can play a role: giving some teeth to this legislation, so that tech companies actually decide, "Hey, our self-policing isn't working anymore, because they are absolutely breathing down our necks."
Give it some teeth. Have that public availability component; that's absolutely critical. Then, state legislatures, again, are laboratories of democracy, right? They can figure out what works; attorneys general have. I believe they have a higher degree of efficacy than the federal government, which is sclerotic, lumbering, and much slower; states can move a lot faster.
Let a thousand ideas bloom among the states. I think we need to allow them the imagination to attack this problem; that's state legislatures and attorneys general as well. Then, tech founders have a huge duty here, a huge responsibility to build in a way that is cognizant of what Big Tech can do to new entrants and new platforms, like the Twitter competitor Parler, right?
It was kicked off of Google Play. It was kicked off of Apple's App Store while it was sitting at the very top of the store at number one. Then, when Amazon Web Services pulled the plug and refused to offer it cloud hosting services, it went lights out.
It's back now, but not as originally conceived. So, tech founders have a duty to remember that and to build with business models that encompass the full technology stack, maybe offering cloud hosting services too, so that they are insulated from being completely shut down by the bigger tech companies they're beholden to.
That's a huge distribution of responsibility. I think civil society has a role too when it comes to these platforms' effects on teen girls, effects the companies know they're propagating and want to keep going, even supercharge. Parents should be up in arms about this.
Parents should be foaming at the mouth to stop all of this. There's a civil society and grassroots outcry that needs to occur as well. Those are just some examples; there are many more in the paper. But I think there needs to be a full-throated, comprehensive response to these abuses, especially the way they're working with the government. That should be prohibited.
Using tech companies as agents of the government to chill speech: absolutely prohibited. Joint ventures with Chinese companies, such as Tim Cook's deal with China to the tune of $275 billion, should be prohibited as well. There's no reciprocity there.
They take, and take, and take from us. They put their spokespeople on our platforms to spew propaganda. We have no real visibility into, or purchase on, their digital platforms at all, so why are they allowed on ours? So, just a couple of ideas for you.
I think pursuing all of them at once, as an array of options, is really going to get us to the solutions we're looking for and to shaping the behavior of these companies working with the government.
Mr. Jekielek: It strikes me that if any country, any system, has the tools, established by the Founding Fathers way back when, to counter this kind of fusion or collaboration between government and giant industry, it should be the US, right? So, what are those tools that should be deployed now, perhaps found in the Constitution, perhaps elsewhere in the legal system?
You're describing, as I said earlier, a very difficult and immediately serious problem, and aside from a really well-done report, there doesn't appear to be a ton of response. Actually, to be fair, there is a lot of building of independent ecosystems and tech stacks and so forth; that is definitely happening.
Ms. Frederick: I think it starts with diagnosing the problem accurately. We have to understand, and impress upon people, that we have specific rights given to us by God and enshrined in the Declaration of Independence, the Constitution, and the Bill of Rights, like the freedom of speech, and that those rights can be infringed upon by these massive private companies, especially when they're working with the government.
I think people need to recognize, first and foremost, how that cuts into human flourishing. Instead of saying, "They're private companies; they can do whatever they want," recognize that it's a problem.
Mr. Jekielek: Here, you’re saying this is the thing that’s stifling the conservative response, that’s what I’m hearing from you, right?
Ms. Frederick: I would say it's stifling the laggards in the conservative response, because most conservatives get it now. You diagnose the problem accurately, not just its economic components, but its cultural and political components as well.
Those imperatives should be embedded and understood when you're formulating any solutions whatsoever. So, it starts with properly diagnosing the problem, which a lot of people, not just on the right but on the left too, have absolutely failed to do.
And when it comes to the left, where are all my individual liberty lovers? My civil rights and civil liberties stalwarts, where are they right now? Are they so tribal that, even as these principles they claim to hold so dear are being infringed upon, they're not going to speak against it?
When it comes to digital surveillance, outsourced to private companies as I was saying before, and the exploitation of user privacy and data, why isn't the left clamoring against this? Just because it's politically convenient? Because their side is the one visiting all of these harms on the people? So, that's one thing.
Accurately diagnose it. Realize it has political and cultural implications as well as economic ones. Then, I think we need to think it through clearly, threading the needle, dealing with these companies in a way that tries to influence their behavior.
But you're also acknowledging that they do help Americans project power in the world, et cetera, though not when they're working with the CCP. I think that's critical. The other element is what you alluded to before: a willingness and a courage within people with oversight capabilities.
Members of Congress, for example, need to actually call this out, see it with clear eyes for what it is, and be brave. Take a stance and say, "No, we are going to put pressure on these companies to change, to be more in line with American values."
Because I'll tell you, what struck me the most about going to these companies, or the particular company I worked for, was a lack of both gratitude and cognizance of the fact that they thrived and flourished under a distinctly American system.
Because of America, they were able to amass all of this largesse and to innovate and build all these really interesting things for the people of the world. I recognize that they're global companies. But the reason they've been so successful is America and our unique system.
I think these companies need to recover a sense of being American again. Because you hear an argument these days that Big Tech companies are the bulwark against Chinese aggression, that they're going to help us win the race against China. Not if Jeff Bezos is working with a CCP propaganda arm.
Not if Tim Cook is paying China $275 billion to contribute to its development. Not if Zoom is acquiescing to CCP directives to remove a human rights activist from one of its calls. The list goes on and on. I think these companies need to be distinctly American again.
Recognizing that, yes, they have a global constituency, but that sense of gratitude, that sense of all of us as a nation pulling toward one common goal, I think that's been lost, if it was ever there in the first place. We, as companies and as people who work in these companies, need to recover that.
I think members of Congress, with their oversight capabilities, need to remind them that that's the case. They need to impose costs when Tim Cook strikes a deal with China, or when Google actively works with PLA-linked AI research labs in Beijing. This needs to be on the table, for us to be able to say no, for Congress to be brave and say, "Absolutely not. Although my stock portfolio is going up because of you guys, this is against American interests."
Recovering that sense of duty to America and a gratitude for what it’s been able to do, and create for these executives and the people who work under them, my former self included, I think that’s absolutely critical.
And it starts here: members of Congress being brave, calling them out, recognizing that this is a problem, and taking measures to rectify it.
Mr. Jekielek: As we finish, one last tiny thought. For a lot of people out there, they might wonder to themselves, there’s not much I can do here other than something you mentioned, which is keep my kids off of those platforms, right? Is there anything that people can do?
Ms. Frederick: Yes. Make sure you diversify the services you're using, because you never know when you're going to go lights out if you have an anti-regime, anti-authoritarian, anti-government opinion. So, make sure you're using the platforms being created by these new entrants.
I won't name them specifically, but I think we're starting to see these competitors come up as they recognize the challenge and try to take on the real monopolistic practices of the big tech companies. Make sure your privacy is first and foremost as well.
Use companies that are actually devoted to privacy-preserving technologies, to developing and deploying them; that's critical too. Also, remain human. There's a huge push, when it comes to these companies, to look into the future and project what's going to make them more profitable, like the metaverse.
They want to immerse you in these virtual worlds. Stay human. This is something that James Poulos at the Claremont Institute does really well, elucidating this idea: remember that you are a body-soul composite. We were created in a specific way, with dignity, for specific purposes, and that's not to lose ourselves completely in a virtual world. Stay human.
Mr. Jekielek: Well, Kara Frederick, such a pleasure to have you on.
Ms. Frederick: Thank you.
This interview has been edited for clarity and brevity.