“How can you have a digital society where you don’t even know what the rules are?”
In this episode, we sit down with Dr. Kalev Leetaru, a media fellow at the RealClearFoundation and senior fellow at the George Washington University Center for Cyber and Homeland Security.
Instead of repealing Section 230, he argues for an amendment to Section 230 that would compel social media companies to make available extensive dataset collections on what they’re censoring, why each individual post or account was censored, and how the social media companies’ algorithms decide what content is boosted or demoted.
While some argue for repealing Section 230 immunity for social media companies altogether, Leetaru says it would have the opposite of the intended effect. Instead of decreasing censorship, it would only cause social media companies to become more avid censors as they try to avoid costly lawsuits.
Jan Jekielek: Kalev Leetaru, it’s such a pleasure to have you on American Thought Leaders.
Dr. Kalev Leetaru: Thanks for having me. It’s great to be here.
Mr. Jekielek: Kalev, you put together a really fascinating report that I’ve just been reading, trying to deal with this issue of big tech moderation, big tech censorship. And frankly, there aren’t a lot of people trying to look at this—how to deal with these questions in a deeper, more thoughtful way—so it was really a pleasure to have this come across my desk. Transparency is a central piece of your argument: you want to legislate transparency onto the social media companies. Tell us what you want to do.
Mr. Leetaru: Yeah, if we think about it, there’s widespread agreement. Look at the aftermath of last week’s Facebook whistleblower. One of the major themes that comes out of that is how little we know about these companies. What we know about how social media censorship works tends to come from these big leaks of information.
And if you step back, there’s widespread agreement that it’s not tractable the way things are now. Something needs to change. The question is, what? I think with social media, one of the challenges is, every day we hear about things getting taken down.
But take a simple example. Earlier this year, Twitter started deleting any tweet that mentioned the word ‘Memphis.’ All of a sudden, all these tweets are coming down, being told they violated Twitter’s terms of service. And then, of course, the company says, “Oops, sorry, it was an AI algorithm run amok, we’re sorry.”
Well, how exactly did that AI algorithm come to see Memphis as a bad term? We have no idea. How did these things get added? Was it a human being who typed in a keyword? Was it some machine-learning algorithm that looks for things going viral and makes its own decisions? These are the challenges: is it human moderation that’s the problem? Is it machine moderation that’s the problem?
We really don’t know who’s being affected by these policies, what’s being affected, what’s being removed, and specifically how—especially as more and more of our democracy plays out through social platforms. Think about social media today: it’s where our government officials talk to us.
Yes, they have their own websites. Yes, they have other ways of getting things out there, but social media is how they talk to their constituents.
To what degree is social media ensuring that certain content, for example, doesn’t go viral? Maybe one congressperson puts out a new proposal and maybe social media platforms don’t allow that to go viral. Another person puts out something and they push that out there to make that really go viral. To what degree are these silent hands shaping our democratic debates? These are all these things that we really need answers to.
Mr. Jekielek: Some really great questions that you’re raising here. There’s a kind of illiberal ideology that has entered our common space. Especially in Silicon Valley amidst social media companies, this critical social justice ideology which demands that, basically, only its perspective is the valid one. Anything that isn’t its perspective is hurtful and harmful.
I imagine a number of viewers are thinking to themselves, well, I know what the problem is. This is the problem—what I just described, for example, right? But you’re saying there are actually some deeper questions to be asked here.
Mr. Leetaru: Yeah. It’s fascinating. I love history, and anytime that I’m trying to understand a question I like to look back at, how do we get where we are today? If you think about social media content moderation today, you think of the internet. You think of these big social platforms.
But if you step back, these are the same questions of freedom of speech that our nation has wrestled with since its founding. What is acceptable speech? Should you be allowed to criticize your government? If you are, does that change during wartime? For over two centuries, we’ve tried to wrestle with this question of what is allowable to talk about and what is harmful.
And if you look back through our history, our country has tried it every possible way. We’ve tried it where the federal government tries to come up with rules about what’s allowed and not allowed. We’ve tried it with the states, with the cities, with private companies, with this, with that. We’ve tried every possible scenario. And none of those has worked to the point that we said, wow, this is great, we have a solution that works.
And I think it was Justice Harlan who famously called the question of how to define acceptable and unacceptable speech “an intractable problem.” As a society, we’ve thrown up our hands. We’ve said, you know what? It’s impossible to figure out what’s allowable and what’s not allowable.
In a country that’s really diverse with wildly different lived experiences and beliefs, and so on, it’s impossible for us all to come together and say, yes, these are the following things that are allowed. These are the following things that are not allowed. Anytime you have that situation where there’s this rich diversity of viewpoints, you’re never going to come to that consensus.
So what we’ve said is, look, we can’t figure this out. Let these private companies in Silicon Valley sort it out for us. And that’s really, at the end of the day, what the problem is. If you flip the coin and look at it from the companies’ standpoint, how do they decide what’s allowable or not allowable?
Other than a few exceptions, the courts in the U.S. haven’t really deemed much disallowable. So there’s not really much guidance.
They’re sort of making it up as they go, and that’s the real problem. We didn’t stop and say, hey, social media companies, you can remove what you deem to be harmful, but … That’s the part that’s missing. We could have said, but you have to put these out to a vote of the public. We didn’t say that. We could have said, well, Congress has to review these at regular intervals, and we have to publish these rules in Congress.
We didn’t do that. We just said, look, these private companies can do what they want. And most interestingly, if you look back at the history of censorship in the United States, there was a lot of local control of censorship. Section 230 specifically says the states have no ability to alter these rules to local needs.
So, I think that the root of all of this is the fact that we gave up as a society and told these private companies to figure it out themselves. And we’re not happy with the result, but that’s because at the end of the day, you’re never going to have a set of private companies that are going to be able to come up with rules for all of us. We need this transparency to really be able to understand what are these rules?
And I come back to—if we look back a couple of years ago, the Guardian had a leak of internal content moderation guidelines. They called it the original Facebook Files. What was fascinating about that is, for a long time people said, look, the social media companies are censoring, but they have to—they have to censor to help society get rid of all that horrible stuff.
But then when that came out, we saw that—oh, well, look. Antisemitism, that’s allowed. Violence against women, that’s allowed. All these different categories were explicitly with a label saying allowed under certain circumstances. And that provoked this huge discussion about—well, wait a second, why should these things be allowed?
So society, all of a sudden, was able to realize that, you know what? All is not well here. Maybe we do need to have a little bit more visibility. So I think, once you understand what’s happening, then we can have these societal discussions about do we agree with these or not?
Mr. Jekielek: Well, we definitely need to talk about Section 230. It’s obviously pretty central to these questions. I think it didn’t occur to most people that this was even an issue until around 2015, 2016, when there was also—social media companies started getting a lot of pressure to censor, including from Congress.
And then people started noticing: oh, it seems like my feed is getting throttled. I don’t know for sure, but it looks like it. Oh, it looks like this person got taken down—that’s the canary in the coal mine; we can expect more people to be next. And it seemed to accelerate dramatically from that time, basically from the Trump candidacy and the beginnings of the Trump presidency.
Mr. Leetaru: Yeah, it’s fascinating. If you walk back and look at these questions since the dawn of the internet … I mean, “the internet,” as a term, covers a lot of history. But if we look at the early period, like the Usenet era—for those old enough to remember Usenet—these same questions materialized there. There was this old term, the flame war: attacking people online, doxing, publishing people’s information.
And what’s interesting is you look at these early days, there was a lot of public discussion about what should be allowable. If you’re angry at someone, should you be allowed to publish their personal information on the web, publish their phone number, and tell people, “Call them, go to their house.” What today we would call doxing. That was around way back when.
We talk about moderation. Even in the earliest days, on Usenet you had this issue. Take any topic, like alt.politics.middleeast. That was a particularly contentious group. You had, essentially, what today we might think of as a mailing list. But, again, it got to the point where there was a moderated version of it.
And for any given topic, you’d have this Usenet group, and then you’d have other versions of it—moderated groups. In some cases you even had “nice versions” where the rules were no profanity, no attacks, nothing.
If you look at that early era, that’s how they dealt with a lot of this issue. We all have different ideas about what’s acceptable and what’s not acceptable. So in the Usenet era, if you didn’t like the rules in one group, you’d create another group. You could create as many of these groups as you wanted. The problem today with Twitter, and Facebook, and these social platforms is, they’re single universes.
Take Twitter: if you want to have a private debate in a corner about something, the problem is that everyone in the world sees that debate. Everyone in the world is weighing in on it. So something that happens in one corner of the world, the entire world now has to weigh in on.
In Usenet, you could create these different communities that each had their own rules, and that’s partially where… Today it’s not just the U.S. There’s a conversation here, and someone on the other side of the world can participate in that conversation. And they may have a very, very different perspective on what they see as acceptable and not acceptable.
And so, social media shoves us all into this single box. Essentially it shoves everyone into a giant soccer stadium and gives them all a microphone. It’d be like taking the Republican National Convention and the Democratic National Convention and putting them in the same ballroom together. In real life, we’d say, hey, that’s a really bad idea. Nobody would think of doing that. On Twitter, that’s exactly what is happening—all of this raging political debate.
And, again, don’t forget those algorithms that sit behind the scenes that quietly prioritize what we see. And that’s a really important thing. Those algorithms look at what fires us up. You notice on social media, you don’t normally see just endless screens of puppies and unicorns and happiness. If you scroll through and you see, hey, a little puppy there, you might quickly watch the video, but that’s it.
You’re not going to engage with it. But if you see something that’s like, oh, that fires me up, you’re going to start commenting. You’re going to forward it to people. It’s really going to fire you up. And that in turn is what engages people.
So, inadvertently, if you just take a simple algorithm and give people what they seem to react to, you’re going to immediately shovel them towards things that fire them up. And again, these are things where we don’t have visibility. How are those algorithms working?
I prefer a feed where you get everything in chronological order, because that way, at least, you’re seeing it as it’s being produced. It’s not some algorithm trying to figure out what makes you tick and feeding you the stuff that’s going to fire you up. And this becomes important, too, as more and more of our offline life gets organized on social media.
Think about today. If you’re going to organize a protest on the streets, you don’t send flyers out, or send emails out, or even pick up the phone. Usually, you post it on social media. Which of those are going viral, which of those are not going viral? Is that because people aren’t interested in your topic, or is that because some algorithm is intervening?
Last year, with the reopening protests in the early days of COVID, Facebook put out an official policy that said, any physical protest that’s in violation of local ordinances around COVID, we will not allow that protest to be advertised on our platform, and we’ll pull it down.
Same thing if it didn’t explicitly require masks, and so on. And then you had the George Floyd protests, and they quietly removed that restriction. So this becomes very interesting: again, that wasn’t a public thing. That wasn’t something where they made an announcement and said, we believe these protests are important, so we are withdrawing this restriction.
So that’s the real challenge there. These rules changes happen every day, and we don’t know what the rules are. I mean, how can you have a digital society where you don’t even know what the rules are? As a journalist, I routinely will ask the companies, “Well, based on this rule that you’ve published, would this particular statement violate your rules?”
The answer I always get 100 percent of the time is, “We can’t comment on hypotheticals. Post it, and if we ban you, it wasn’t allowed.” You can’t have a digital world like that.
Mr. Jekielek: That’s a hugely important point. There are so many directions to take this. I mean, you proposed 10 different databases that you think should be available for public scrutiny. I guess you’re also suggesting that the rules governing the AIs and their decision-making should be transparent.
But the question is, by what mechanism can we guarantee that these giant corporations—larger than anything we’ve ever seen in the history of the world with massive power—will actually tell us the reality of those things?
Mr. Leetaru: The way I would see this is, Section 230 … We talk about this as being this immutable thing, but it has been amended—most notably by the sex trafficking legislation, FOSTA-SESTA. So it is very conceivable that you could amend 230 to say, as a condition of receiving all this power, all this immunity, in return you have to provide certain data sets.
If that was codified into 230, then it has the force of law behind it, and so, they would have to publish this stuff. One of the things I think that’s really important is, it’s really about also putting the clear rules out there.
If you read through the rules that Twitter, or Facebook, or any of the companies have, they tend to say things like, if you put something harmful, you’ll be banned. What is harmful? How do you define that? And the answer is always, again, post it and you’ll find out. I think this is the real problem.
I should add that this goes beyond social media. Think about Uber, Airbnb, Lyft. Every platform you use today typically now has terms of service like this. I think it was Laura Loomer—you had the activist who was removed from Uber and Lyft, not because of something done within their service, but because of, I think, a tweet, separately.
So, think about this—it’s almost like the Chinese, what do you call it? The social credit system. This idea that a comment you make over here in one corner can impact your life over there.
Again, without these clear guidelines … . Airbnb, with the inauguration, I think they blocked reservations here in DC. Again, they published all these rules saying, look, we’re going to remove hateful organizations. Well, how do you define that? You could say, well, the Ku Klux Klan. I think there’ll be unanimous agreement that [they] probably should not be an allowed group on there. But where do you draw that line?
Microsoft—anyone who uses Microsoft Office through their Office 365 product—one of the terms of service in there says that if you use their product to produce hate speech, you can be permanently banned from Microsoft’s products. So, I wrote to them and I said, well, how do you define hate speech? What is your definition of it? Have you ever banned anyone for this? Have you ever actually enforced this policy? And again, the answer was silence.
I think it’s a really important notion that we have these rules, but for the companies, the rules are not like a court of law. A court of law doesn’t just say, if you do something bad, you’ll be arrested. It actually lays it out bullet by bullet by bullet. This is what we lack in the digital world.
You think about the Hunter Biden situation, where the New York Post article about his laptop [inaudible 17:43]. I think that’s a classic example where the New York Post, an actual, real news outlet, put something out there. And you have both Facebook and Twitter weighing in, saying it’s harmful information, we’re not going to post it.
Twitter started off by saying it was harmful misinformation, and then it was hacked materials, and then there was personal information. The story as to why they were banning it changed hour by hour until finally they said, you know what? We never should have banned this. It was bad, we made a mistake. By that point, it was over; the story had faded away.
Just think of that for a moment. The fact that a mainstream news outlet had an article banned from being shared on social media, and the reason for that changed, I think, four times in total. That tells you all you need to know about the state of things right now.
That would be like the police arresting you and saying, “Well, we’re charging you with this. Oh, sorry, sorry, actually, no, we’re charging you with this. Nope. Sorry, this, this, and this.” So that’s the challenge. There needs to be an answer.
With AI in particular, one of the interesting things as companies move more towards AI is that we think of these systems as black boxes. But there’s actually a field called explainable AI, a whole area of research where you build AI algorithms that you can actually interrogate. When the system produces an answer, it tells you why it made that decision. With human content moderators, companies say, well, we can’t provide you an explanation…
There are two answers they usually give. One is, we can’t provide an explanation because that would give bad actors an idea of how to get around our rules. But again, that’s what lawyers do every day. That’s what lawyers are for. And we accept that as the cost of a transparent legal system.
But the other answer to that is, they say, well, it’s too expensive. If a moderator had to sit there and spend 10 minutes writing up an explanation, it wouldn’t be tractable for us to do this. With AI, building an AI model that can give you an explanation of why it did it, that doesn’t cost anything more than an AI that doesn’t do that.
So that eliminates that argument of, well, it costs too much. And the tools, again, that’s a new area of research, but that’s an area where there’s all this work being done there. So, as that field matures, there’s no argument then that they can’t provide that explanation to you.
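As a purely illustrative sketch (in Python, and not any platform’s actual system), here is what a moderation decision that carries its own explanation might look like. The blocklist entry and example post are invented, echoing the Memphis incident discussed earlier:

```python
# Purely illustrative: a toy moderation check that returns its decision
# together with a human-readable reason. The blocklist is a hypothetical
# example; real explainable-AI systems attach reasons to model outputs.

BLOCKED_TERMS = {"memphis"}  # invented entry, echoing the incident above

def moderate(post: str) -> dict:
    """Return an allow/remove decision plus the explanation behind it."""
    for term in BLOCKED_TERMS:
        if term in post.lower():
            return {"action": "remove",
                    "reason": f"post contains blocked term '{term}'"}
    return {"action": "allow", "reason": "no rule matched"}

print(moderate("Greetings from Memphis!"))
```

Generating the reason string costs essentially nothing beyond computing the decision itself, which is the point being made here about explanations being cheap for machines.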
And it helps them. I always say that, look, for the tech companies, this helps them too, because if you’re Twitter, and everyone starts getting an explanation, today they just see, hey, my tweet about Memphis got deleted. I don’t know, was it because I mentioned Memphis? Was it because I mentioned something else?
But if the actual explanation said, you’re being suspended because your tweet mentioned the word Memphis, and Memphis has been determined to be a harmful word—well, now we have an explanation. But most importantly, for them, they can see, hey, there’s a problem here; this shouldn’t be the case.
And I think it also almost traps the companies, because in the case of the New York Post, they can’t keep changing their explanation. They’re on record saying, this is why we banned it.
Mr. Jekielek: Very interesting. So, you’re saying that you want to put into law, for example, perhaps as an amendment to Section 230 or somewhere else, that transparency around these 10 databases as a starting point, right? Tell me a little bit about—pick a few that you think are the most important, and tell me about them and why that would make a significant difference.
Mr. Leetaru: One of them is a data set about what their algorithms are making go viral. So rewind the clock. The most famous example of all of this was the Ferguson protests. On Twitter, all you saw were the Ferguson protests. On Facebook, all you saw were happy, smiley people dumping buckets of ice water over their heads as part of the ALS Ice Bucket Challenge.
Two polar different universes. That wasn’t because the people on Facebook didn’t care or vice versa. It was because of how their systems were designed. Facebook’s algorithms decided, either on their own or through human intervention, to prioritize happy, smiley, friendly things at that point. That’s kind of the case, that’s the example that’s always given of the power of these algorithms.
Think about today. Let’s say some lawmaker puts forward a policy proposal, and it’s a dud. Nobody cares about it, nobody talks about it on social media, nobody discusses it. They don’t hear from any of their constituents. That might cause them to think, well, maybe this is not a useful policy proposal—but maybe it’s because the algorithms decided it was a bad proposal and made sure no one actually saw it.
So, having this visibility—what are these algorithms feeding us? Last week, the Facebook whistleblower said that Facebook’s algorithms, either directly or inadvertently, are designed to push us towards angry, divisive, rage-inducing content. Facebook, of course, responded and said, absolutely not, that’s completely false.
The problem is that, at this point, we simply don’t have the data. We can’t answer who is correct there. One of these data sets would be a list of what type of content they are prioritizing. What are their algorithms putting forward?
On a public platform like Twitter, that’s very easy. You could say that on Facebook there are all kinds of privacy issues, but on Twitter everything’s public anyway, and it would be very easy to put that forward and just show it.
I think another interesting data set here is—think about when a celebrity gets their tweet deleted. We read about it in the news because they have that outlet; they can go public about it. An ordinary person doesn’t really have an easy way to get attention. So we don’t really know how much content is pulled down every day.
The only way we see this is in quarterly reports where they might say, we took down a billion pieces of content. But that’s just a statistic. Who was behind that content? Even things like demographics—we picture the stereotype, this caricature of the person on social media putting out something bad and getting it removed. But is that really true? Who are the people that are affected?
For example, hate speech rules. We picture the KKK member, but is it actually affecting other people? We saw this actually play out in a very interesting case. Facebook has their Oversight Board, their “Supreme Court of Facebook.”
There’s a case they ruled on—it was actually in Myanmar—where someone made some comments about Muslims. I forget the specific quote; I think it was, “There’s something psychologically wrong with Muslims.”
So, Facebook took that down—there was some other material in there as well—and said, “This is an attack on a specific ethnicity.” The Oversight Board ordered them to put it back, saying, “Yes, it is an attack on them. Yes, this is a country where there’s actually genocide against this particular group, but …” I think their argument was that it was a sort of academic, scholarly discussion.
This becomes very interesting. Somebody saying something like that would seem, under Facebook’s rules, to be a pretty clear violation. And yet their own Oversight Board says it’s actually not a violation.
When you look through their cases, it reminds you that, even things that seem really clear-cut, suddenly, if you look at it from that perspective, it’s a really complicated world. I think something that’s missing here is a public database of what’s coming down.
Look at Twitter. On Twitter, there is a third party called Politwoops, which archives tweets from politicians that those politicians later delete. That’s a well-known site. It’s really useful for seeing what people said and then had second thoughts about. The problem is, Politwoops only archives the fact that a tweet was published and then removed.
Maybe it was a typo. Maybe they misspelled someone’s name and reposted it immediately with the correct one. It doesn’t archive why that tweet was removed, and this is important, because when Twitter says that something is a violation of their rules, they don’t typically remove the content themselves. They suspend your account and say, you’re suspended until you remove the content.
So it’s hard to tell from the outside. You just see a tweet disappeared; you have no idea why. Because everything is public on Twitter, it would be very easy for Twitter, for every tweet that gets removed, to provide that explanation: why was this removed? And the counterargument to that, of course, would be, well, if you produce this archive full of all this material, isn’t that going to be basically an archive of horrible stuff?
Well, there’s another website called Lumen, which archives copyright takedown requests. If someone posts an illegal copy of the newest “Top Gun” movie on some website, and that website deletes it citing copyright infringement, that takedown goes on Lumen.
Now, Lumen doesn’t give the original URL; it doesn’t give the details. It just says, hey, there was a copy of this, and it was removed for the following reasons. Researchers and journalists can get additional access that gives them a little bit more information. So you’re able to archive this information without creating an archive of links that people can follow.
If you think about Twitter, what about the same model there? What if any tweet that Twitter deletes had to go into this permanent archive? Let’s say you’re someone on Twitter and your tweet gets deleted. Now, imagine if Twitter were required to ask you: here are the demographics that either you’ve told us about—maybe they’re in your bio—or that we’ve inferred about you through our algorithms.
You might say this is sensitive information. Maybe it says you’re LGBTQ, or you’re this, or you’re that, and you say, I don’t really want that public—so you might choose not to include it. Or you might say, hey, you know what? I want that in there. Now you can start saying, guess what? Under the hate speech rules, 90 percent of the tweets being deleted are actually from LGBTQ people. Hey, that’s a bad situation; that means we need to tweak these algorithms. Having that type of demographic data attached allows us to really understand what’s happening. Or take fact checks.
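As a purely hypothetical sketch of what one record in such a public takedown archive might contain—every field name here is invented, following the Lumen-style model of opaque IDs rather than followable links:

```python
# Hypothetical sketch only: all field names and values are invented for
# illustration; no platform publishes records in this form today.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TakedownRecord:
    post_id: str                # opaque ID, not a followable link (Lumen-style)
    removed_at: str             # ISO 8601 timestamp of the removal
    rule_violated: str          # the specific published rule that was invoked
    explanation: str            # why this particular post matched that rule
    decided_by: str             # "human" or "algorithm"
    demographics: Optional[dict] = None  # strictly opt-in, as discussed above

record = TakedownRecord(
    post_id="t-000123",
    removed_at="2021-03-01T12:00:00Z",
    rule_violated="hate-speech-policy",   # invented rule name
    explanation="post contained blocked term 'memphis'",
    decided_by="algorithm",
)
print(record.decided_by)
```

With records like this aggregated in public, the kinds of questions raised above—who is actually affected, and by which rules—become answerable by simple queries.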
Think about Facebook today. They remove content based on fact checks. What are the top fact checks that result in the most takedowns? If I take all the climate change coverage in a given day that’s deleted, is there one fact check that resulted in all that? Are there multiple fact checks? This tells us who are the sources of truth that are defining what’s out there today.
That’s useful to fact-checkers as well, because if for every climate change piece that gets taken down there’s one fact check resulting in all of it, that might be something to look back at. Same thing in the early days of COVID. Posts by individuals saying, “Hey, I had an adverse reaction to the vaccine,” were largely banned on Facebook’s platform. Then, of course, you had the blood clots, and once that was documented, they had to go back and alter those rules to specifically exempt blood clots.
Now, again, that was a case where I asked, well, how many people were reporting blood clots before that point? Had that not been banned, could that have been an early warning sign that could have helped doctors early on? Same thing with masks. In the early days of the pandemic, remember, the official guidance was not to wear a mask.
I’ve asked Facebook, Twitter, et cetera, “Your rules say any post that disagrees with public health authorities is removed. What would you have done in the early days of the pandemic, when much of the world was saying wear a mask but the public health authorities here were saying don’t wear a mask? What would you do there?”
The answer, I think it was Facebook that actually said, “Well, (A), it’s a hypothetical, but (B),”—what I thought was very interesting—they said, “this is one of the reasons that we say that governments should really step in here. We shouldn’t be the ones making the decisions about what should be posted or not posted. The government …”
This was very interesting. This is Facebook saying that governments really need to step in and tell us what to remove, because if people are upset about this, then in the absence of anything, we’re the ones having to make these decisions. And I thought, that’s a really truthful statement—essentially that … So much of this, again, comes back to that fact that, they’re left to their own devices there.
Mr. Jekielek: What you’re describing right now, it makes me smile a little bit because initially we were talking about how we, as a society, left it to these companies to do it, right? So, in a sense it’s a complicated question. We’ll just absolve ourselves of the responsibility, put the responsibility on these companies. Now these companies are saying, no, actually, we don’t want the responsibility here. Government, you have the responsibility. No one wants the responsibility.
Mr. Leetaru: And that’s the thing. I mean, look at fact-checkers. Facebook, Twitter, et cetera. Early on, they’re trying to figure out, how do we deal with libel and all these other things, and how do we deal with really bad things? Let’s say someone says, drink a gallon of bleach to cure COVID. Well, yeah, it will kill you; you’ll be dead.
How do you get rid of stuff like that? And again, there’s this fuzzy line that you draw there, but the way they did it is they said, “Look, if we start doing this …” I think this is something that’s very interesting to think about. Internally the companies realize that, if we start getting into deciding truth and falsehoods, almost immediately, we’re going to go up against politicians, and we’re going to be the ones—there’s going to be a Facebook employee saying the President of the United States has said something false. They don’t want to be in that scenario where they’re the ones labeling politicians true or false.
So, you partner with these third-party fact-checkers so that you can wash your hands of it and say, we’re not the ones doing this. We are leaving it to these independent journalists, these third parties. But what’s interesting is, the Wall Street Journal has reported, and I think Fast Company also did a piece on this, that the fact-checkers have reversed their judgments in some cases at the request of Facebook.
People always ask me, what would you do to solve this problem? Would you get rid of 230? Would you do antitrust? Would you do all these things? And my answer is, well, you can’t really decide what the solution is going to be until you know what the actual problems are.
Getting rid of 230 is a common one that’s put out there. But without those protections, they’re just going to delete everything. Look at what’s happening in Australia right now. You heard that ruling that mainstream media outlets can be held responsible for the comments on their social media pages.
You’re seeing the chilling impact that has, and some media outlets are just pulling out of the country rather than deal with that. Try to think about that. If, all of a sudden, the companies were told, look, you’re legally liable, they’re going to remove all sorts of free speech there.
Conversely, a lot of people say, well, do antitrust, get rid of the monopolies. That’s, I think, a really interesting case. Look back at Donald Trump. When he was removed from the platforms, it was Twitter, Facebook, et cetera, but don’t forget that competitors like Snapchat also removed him. Don’t forget TikTok removed him, a Chinese company.
The U.S. and China are not necessarily the best of friends at the moment, yet TikTok also removed him. So this shows us that even foreign companies are doing it now, and there was no legal order; it wasn’t like TikTok was directed by some government agency to do it. They just did it of their own volition. That’s important to remember.
Let’s say you just break it up, split the tech companies into tiny little pieces. That’s absolutely no guarantee that you’re not going to have this. And if you look back in our country, in the U.S., at the history of motion pictures, of radio broadcasting, and then of television broadcasting, in every single one of those you had all these competing companies that all agreed to the same codes of acceptable speech, not because any government official ordered it. It was completely on their own; the industries came up with it themselves. So fierce competitors that ordinarily were at each other’s throats all agreed to say, here are the general rules that we’re going to adopt.
This is a space where everybody has their one quick trick: just do this one thing, it’s really simple, and we solve this issue. The problem is, there are no easy solutions here, and every one of these comes with consequences. But understanding and prioritizing the biggest issues that confront us, that’s really where this transparency can help guide the most important first steps to take.
Mr. Jekielek: Well, TikTok is a very interesting case. I can’t help thinking about the fact that the Chinese Communist Party has supremacy over TikTok and its decision-making, as is hopefully obvious to everyone watching. In this case, there’s a lot of precedent for the Chinese regime to declare anything that might shine a light on its activities a state secret.
And that would be the end of it, so you wouldn’t get insight into these 10 databases or more. With a company like TikTok, would they be allowed to operate here in the U.S.?
Mr. Leetaru: That’s a fascinating question. One of the challenges as well is that most of what we think of as the internet and social media has been mostly U.S.-based. TikTok is the vanguard of sites used by ordinary people here in the U.S. Again, there have always been obscure sites, but this is a major platform here in the U.S., and it’s owned by a foreign government.
Which also means that you have all these key questions of, who decides? I mean, think of some of these TikTok challenges that are happening, like trash your school bathroom, or hit a teacher, or any of these other kind of viral moments that are happening right now. Since they’re not a U.S. company, who’s responsible for that?
More broadly, when these memes come out, should someone be legally liable? Section 230 deals more with censorship liability, but think about some meme that goes viral on one of these platforms, and the platform doesn’t squash it. Let’s say it’s hit a teacher, and somebody does that. Should the platform be liable for it? What happens if they’re not a U.S. platform? Especially as other countries start having more …
Imagine the next Facebook or Twitter might come from another country that has its own values, values not necessarily aligned with America’s. Today, if you want to really control a country, start up a social media platform that is used by everyone in that country, and just tweak those algorithms to guide them towards whatever you think is going to be most divisive or least divisive.
And then conversely, you’re getting all this insight: what everybody feels, what they’re thinking about, what’s important to them, which lawmakers are resonating or not resonating. I mean, think about all this information that’s out there today, even on existing platforms.
Last week with the Facebook whistleblower, there was the idea of the national security concerns around external companies that have this visibility. Even with things like Twitter and Facebook, there’s the ability of a foreign country to go in, to run ad campaigns, to do all these things. This is a really important concern.
And don’t forget the Facebook Oversight Board, which has this oversight of Facebook. There’s at least one former elected head of state, former now, who’s on that board and who, in theory, could have had the deciding vote when they ruled that Donald Trump’s account should not yet be restored.
I think it’s a 20-person board now, and those are not all Americans. Some of them are former officials of other countries who, in theory, could actually decide, does the former president of the United States get to go back on social media? These are really fascinating questions in this globalized world. Who’s making these decisions?
Look at the history of Silicon Valley and China. Most of Silicon Valley’s interactions with China have been to bend their products to China’s needs. Even Apple, if I remember correctly, is localizing the user data of its customers there to local data centers, even as here they promote encryption.
And there’s no way for the government to do anything. With tech companies, again, it all comes down to the money. If a country has a huge population, they’re willing to do whatever they need to do to bend those rules, to take a softer stance on freedom of speech.
There’s also the physical security side of things. Facebook, for example, in their advertising, and I don’t know if they still do this, but historically one of their advertising parameters was an estimate of whether someone was gay. Now, if you’re gay, you may be going to great lengths to never make that public.
You never admit it. But based on their algorithms, they may see it. And as an advertiser, you can target that and say, I want someone who your algorithms believe is gay. Now, in the U.S., that’s one thing, but there are countries in the world where that carries the death penalty.
So, I asked them, “Why do you still make this selector available in those countries where it carries the death penalty?” The answer was, “Well, that way rights groups can target those people. Yes, governments can, but rights groups can as well.” But I said, “Well, doesn’t the fact that people could actually die from this, doesn’t that kind of trump that?”
And the answer was, “No.” But most importantly, because again, these are data sets Facebook holds, I said, “Have you ever had a country come to you with a selector like gay and say, give me a list of every user in my country, physically within my borders, that your algorithms have decided falls into one of these categories, and I want their contact information, and specifically a country where that carries the death penalty? Would you deny that that’s ever happened?” And they did not deny it.
But the question is, again, to what degree do governments use that information? These are all the other pieces of this puzzle where, again, this transparency would mean seeing the inner workings. The fact is that I, as a journalist, am owed no answer from the social media companies about this, and even politicians, elected officials of the U.S. government, don’t get answers.
Look at the response when they were asked at those earlier hearings, is Instagram harmful to teenagers? The company cited all this research about how wonderful Instagram is. Meanwhile, internally, they had all this other research, but again, there’s no legal obligation for them to share that, and I think that’s something that’s really important there.
[Narration]: Our team reached out to Facebook, but did not immediately receive a response.
Mr. Jekielek: This is, I guess, the big question, right? It bears reiterating: we have giant companies with levels of power that were almost unimaginable, companies that control the public square, or a significant part of the public square. They’re engaged in censorship.
The solution that you’re proposing, as a starting point, is to codify into law some transparency that they’ve never had to provide before. And the question is, what kind of penalties would be required to make these companies actually feel there was a real threat to them? I mean, if a billion dollars is really not such a big deal to a company to keep operations going smoothly, is it criminal liability? What are you talking about here?
Mr. Leetaru: It is fascinating. Look at Europe. They have GDPR, and in the past I’ve written a lot about this. GDPR has all kinds of protections and penalties, fees, fines, et cetera, but the bottom line is, it has so many loopholes that it really doesn’t have a lot of teeth.
There’s a famous case where it even said you have to report a breach within X number of days. I think it was Facebook that stretched this out to more than a year, and their answer was, “Well, that clock starts ticking when we decide it starts ticking.” So, this is one of those challenges: even GDPR has so many loopholes in it. It has exemptions for research, exemptions for this and for that; it’s whatever your regulator decides.
I think that is one of the challenges: how do you make sure it’s got teeth? Even fines of millions of dollars per day are pocket change. That’s where, if you put it in 230, you could say, for example, that platforms that don’t abide by this lose some of their Section 230 protections.
That way it has teeth, because suddenly something is at stake. Now again, the courts would decide all of that. But I think it’s the first step, because right now when you go to the platforms, their answer is, no, you’re not going to get an answer from us because we have no obligation to give you one. If you put it in the law, you will.
These companies have incredible teams of lawyers that will find all sorts of ways around it, but at least having that starts providing something. Right now we have zero, so adding anything would allow us to at least start having some insight into what these companies do.
Notice that there’s always this paradox here. I don’t have this as one of my prescriptions, but let’s say you require that any internal research a company does has to be published to Congress, which in turn would have forced Facebook to hand over its Instagram report. The problem is, then the companies just won’t do that type of research anymore. They’ll basically ask, is there any possible chance this could have negative findings? If so, we won’t do it.
So that is something we have to be cautious about. But take basic things like takedowns. A lot of times companies take down content and say, this is a violation of U.S. law, this is actually illegal content. How often are those things reported to the police? If you take something down and say, this is a clear, unambiguous violation of the law, why don’t you refer that to the police?
If you do, and it goes to the court system, how do the courts rule? To what degree does your opinion of U.S. law match reality? These are fascinating things. So, I think, you put it in 230. As for the specific penalties, the dollar amounts, the criminal liability, that’s what Congress specializes in.
Right now it’s not even something that companies have to think about. They don’t have to make any concessions; they can just say, no, period. Once you start pushing for this in legislation, yes, the lobbyists and the lawyers will ensure it gets watered down, but there will be that conversation, and they’ll have to make some concessions.
We’ll at least come away with something from this. We see these as modern issues, as something unique to the social media era. But I think the root of all the problems we’re having, every one of these challenges, comes back to the fact that for more than two centuries, we’ve tried to come together as a society and say, this is what’s allowable and this is what’s not allowable.
We’ve never come up with that solution, so we’ve handed it to these private companies to figure out on their own. And the problem is that when we did the handoff, when we wrote Section 230, we handed it off as a black box. We said, you do it, and we don’t want to know anything about it. That was a conscious decision not to have that transparency piece, to say it’s better to let the private companies do it and for society not to know about it.
I think we just add that in. There’s precedent for modifying 230; it can be modified, and it has been. Add in that extra piece to say, in return for all those privileges, there needs to be some transparency so that, as a society, we can at least debate these issues. Again, we may decide that we don’t have a solution and have to keep doing what we’re doing, but at least it allows us to have this debate and maybe create some guardrails for how it all happens.
Mr. Jekielek: Well, Kalev Leetaru, it was such a pleasure to have you on.
Mr. Leetaru: Thanks so much for having me. It’s been great to be here.
This interview has been edited for clarity and brevity.
Subscribe to the American Thought Leaders newsletter so you never miss an episode.