Epoch Times senior editor Jan Jekielek sat down recently with Parler co-founder and CEO John Matze, who seeks to re-create social media into what is, in his eyes, a true public square.
They looked at the world of social media through Matze’s eyes, discussed his hopes for his budding new platform, and talked about what it is about Parler that led to an influx of hundreds of thousands of Saudi Arabian users in a matter of days.
They also discussed how social media giants are effectively taking on publisher roles, instead of living up to their promise of being the public square.
Jan Jekielek: So, John, you are the founder and CEO of Parler, or now I think you’re calling it “parlor”—more phonetically?
John Matze: Exactly. No one knew how to pronounce Parler. It did lead to some nice air time, though, on some television networks who couldn’t figure out how to pronounce it. So, I’ll take what we get.
Mr. Jekielek: So you’re effectively a social media. You started out as a commenting platform?
Mr. Matze: Yeah. So that’s basically how we started. As we said, there are all these commenting platforms and publications, and some of them are taking their own liberty to dictate the course of the conversation. I said: Well, we shouldn’t do that. We should give them the tools—a lot of them have the tools to allow the publication to navigate that conversation. And then as we built that, we said: When you have comments, you’ve got to comment on something. Well, instead of an article, what if the article is a post that just has an article in it? Well, if it’s a post, what if people want to share the post with each other? And … are we just recreating a social media entirely? And then we went … [that’s] basically what we’re doing. So that’s what came out of it—let’s create a social media that has the enterprise tools that publications have, where people can dictate the conversation and moderate it, but for their own profile. And so the idea is that you have your social media presence, but if you don’t like what someone has to say, you can boot it off the comment section. You can mute them, you can get them out of there, whatever you need to do.
The whole purpose was to give people that power rather than having a central point of authority being the only figure that has that power.
Mr. Jekielek: In fact, you know, that’s how we became acquainted with you, with Parler. I keep wanting to say “par-lay.” And yeah, we’re using it.
Mr. Matze: You guys were one of the first publications that bought into the idea and actually joined, and your content’s in the discover section over there. If you use the app, you can see it, and it’s coming in. I get a lot more people trying to fight for those spots now, but—
Mr. Jekielek: Oh, yeah.
Mr. Matze: We’re proud that The Epoch Times is there.
Mr. Jekielek: I’m glad. I just made my account about a week ago, so we’ll see how that goes.
Mr. Matze: OK. Yeah. You have to make some posts, and as soon as people start echoing them—that’s our term for re-tweeting, echo—as soon as people start echoing them, your follower count is going to jump up real quick.
Mr. Jekielek: So speaking of followers or users, you were telling me that in the last few days that you’ve been experiencing some kind of massive growth from an unexpected place. Can you tell me a little more?
Mr. Matze: Sure. So our user base before this boom was mostly Trump conservatives. And so, with the exception of some minority groups—like we had a couple LGBT groups that were forming and other people who wanted to actually have a discussion—for the most part, it has been Trump’s core base. And randomly, I guess, an interview that I had went viral in Saudi Arabia, and people loved the idea of this free speech platform. And they jumped on, and we took 2 percent of Twitter’s market share in Saudi Arabia in one night. We saw some definite problems with the infrastructure right off the bat, considering the servers are so far from Saudi Arabia and everything—but we were really happy to have that.
So now, we have a very interesting mix of about a 50/50 split of Trump, MAGA conservatives, and religious Muslim, Saudi nationalists all on the same platform. And so when you scroll through the user feed and it shows you the most active users, you see there’s a mix of Arabic and English. And it’s very interesting to see how these two groups are coming together really unexpectedly.
Mr. Jekielek: So I noticed also that you’ve been saying in the past that that’s actually kind of the purpose of Parler in the first place: to get disparate groups talking to each other without censorship.
Mr. Matze: Yeah. Well, and it seems surprising that this is the first time these two groups seem to have contacted each other. Like—
Mr. Jekielek: At scale you mean?
Mr. Matze: Yes, at scale. Because I feel like there was some kind of barrier there. I don’t know what it is with current social platforms. But we weren’t expecting any international growth. And so we didn’t have any boundaries or restrictions or anything, really. And so as these groups came in, they were kind of merging together, and they’re all following each other. And, of course, some people are very angry at that. Some people were not expecting it, and other people were welcoming them and even translating their posts into Arabic. There was a Mormon group in Utah that started translating half their posts into Arabic, just so they could communicate. Not even debating religion, not even debating anything, just saying: Hey, welcome, we’re glad to have people who are proud of their country and want to speak freely.
Mr. Jekielek: So, a little while ago, Facebook banned a number of conservative personalities off of their platform, and you actually wrote a post about this. One of the things you said was, “We are not in the business of politics, creating news, and manipulating narratives. We are in the business to partner with our users and bridge the political divide through discussion.” This post got a lot of attention. Tell me a little more about what you were saying.
Mr. Matze: The idea is that we are trying to promote discussion so people can solve problems and talk to one another. What social media is doing now is not what I would really call social media. They’re social publishers, in my opinion. And a social publisher controls the content that’s on their platform. Publishers publish their opinion, they publish what’s true, they publish what’s false. Publishers like The Epoch Times have a following that trusts them and trusts their opinion. And so they are the ones who dictate what’s true, what’s false, what’s the narrative of the day, etc. When a social publisher, a social media, gets involved and says, this is fake news, this is true, let me fact-check that for you, what they’re essentially doing is killing off traditional media, and they’re trying to become a publisher. But instead of actually writing the content, they let users write the content, and they dictate whose content is OK and whose isn’t.
So from that perspective, what got me inspired for that quote was I was seeing they’re banning people. Like, they were banning PJW [Paul Joseph Watson] at that time, they were banning Alex Jones. Now, I don’t follow them. I’ve heard some clips that I thought were funny, and I’ve also heard someone, like, oh, that’s a bit too much, but it doesn’t matter. It’s their right to speak in a public forum, if that’s what social media is, a public forum, and it’s wrong for a publisher to come in and say “I don’t like that creator’s content.” So, let’s say you’re a journalist. I don’t like that journalist’s content. Let’s boot them off of The Epoch Times. That’s essentially what they’re doing on their platform.
Mr. Jekielek: So they’re taking the role of basically—
Mr. Matze: A publication. But in essence, if you look at the people and companies that rely on social media for their income and referral traffic, they’re losing it right now. Because every time they take a step further to be more and more of a publication, more of the real publications out there are losing referral traffic from them. They’re not getting clicks. They are changing their algorithms to keep people on their site. They are trying to replace traditional publications as we know them.
Mr. Jekielek: Social media platforms—especially these large ones, even though they’re private companies for sure—seem to have effectively taken on the role of a public square. Right?
Mr. Matze: Well, they’re supposed to be a public square. And I mean, in principle, they’re a private company. They can do what they want, but people believe they’re a public square and reflect real conversation. But let’s say you’re in the middle of New York, right, and you’re having a conversation with somebody, and there’s a police officer over in the corner going: Wait, no, fake news. Get this guy out of here. We need to get some real conversation going in here. I mean, that’s effectively a public square. If people started doing that, it would be George Orwell’s “1984” all over the place, and people would be freaking out. So why is it OK that our online public square has that kind of authority?
Mr. Jekielek: Free-market principles applied to social media.
Mr. Matze: Yes, exactly. That’s the solution. Let the people figure it out.
Mr. Jekielek: So your initial growth, from what I understand, and why you attracted, as you say, the conservative base was from a tweet by Candace Owens. This is what I was reading. Is that correct?
Mr. Matze: That was the original thing that happened back in December. We got quite a big boost from that overnight. And it wasn’t just her; it was a series of influencers. And she kind of put the cherry on top, if you will, in terms of bringing that user group in. And after that, things settled down for about three months. We had time to repair everything because we weren’t ready—that tweet went out unexpectedly. Since then, we had another viral episode when we went on Fox to talk about this, and we had a Politico article that came out. And since then, there have been some rumors and some fake news that came up that President Trump is joining. The first time I heard about it was when I was reading these articles: President Trump is considering Parler. I’m like, really? I haven’t heard about this yet.
Mr. Jekielek: Right.
Mr. Matze: So, it was very interesting to see how that developed.
Mr. Jekielek: Politico published an article. That’s one of the things I was reading as I was preparing for the interview. Where do you think that article went wrong? I want to give you an opportunity to talk about it a little bit.
Mr. Matze: So I worked with the writer quite a bit, answering questions; whatever had been asked of me, I made sure to answer. And he was really good with his research. He went in-depth and really asked a lot of questions. But I think what happened was this: An anonymous source close to the Trump administration claimed that he’s considering alternative social platforms like Parler. Something along those lines was stated. And once that happened, people picked up on it like wildfire, and it just spread this rumor out there. … It’s very interesting how just that kind of statement can take off. But it was good for our user growth. We had no idea that was coming either.
Mr. Jekielek: So this is one of the things I read in the Politico article; I want to ask your thoughts on it. It was written that “much of the content posted on Parler falls into categories deemed offensive and discouraged by the large platforms.” It’s certainly not the content that we post on Parler, nor any of the stuff that I’ve seen so far.
Mr. Matze: That statement is pretty opinionated, because the users for the most part were extremely friendly. There’s nothing objectionable or really that bad. I think it was kind of a narrative leap, based off the fact that there are figures on there who were banned on social media, but they weren’t necessarily doing anything bad. So I think he was referring to maybe Laura Loomer. She’s the No. 2 most-followed user on the platform right now. She was notorious for chaining herself to Twitter headquarters when she was banned. She doesn’t have really bad content. She’s just very concerned about her community, and she’s trying to make a statement about it. And really, of the people that are on there, no one’s saying anything too hateful or angry.
So I don’t know where that narrative came from. And the fact that we now have a large Muslim population mixing in with this completely debunks that statement about us being Islamophobic. Both groups are very welcoming, and they’re talking to each other. There are obvious exceptions; I’m sure you could find them, but for the most part, people are getting along.
Mr. Jekielek: Yeah. When I was interviewing Larry Elder a little while ago, he used an example: I think he said 8 percent of the population believes that Elvis is alive. Right? His point being that there are always going to be people that aren’t representative and that some people will point to as being the problem or the main …
Mr. Matze: Well, yeah. … It depends. A lot of people, I guess, were scared of the concept of a free speech platform. I think it’s good, especially if you give people the tools. It kind of creates this environment where you can’t have these hateful figures hiding in the dark corners of the internet, because you have everybody in one place. And when something terrible is said or somebody’s off-track, people jump right on it. So there was somebody who was posting about the new Saudis that were coming in; he said something along the lines of, we don’t want that “ramen noodle speak” on our platform. And people were coming to the aid of the Saudis: Americans, people whose profiles were blatantly Trump-supporting, everything. They’re getting on there: You can’t say that kind of stuff on here. I mean, you can, but don’t. It’s rude and it’s uncalled for, and it doesn’t represent the people at all.
Mr. Jekielek: Fascinating. So I mean, I have two questions that come to mind. The first one is, obviously, you have to have some rules, right? So what are the limits of free speech on Parler?
Mr. Matze: Our rules are basically in line with Supreme Court precedents, the FCC, and the First Amendment. So things that aren’t explicitly protected by the First Amendment—with, for example, some FCC cases—we don’t necessarily protect, and by that, I mean pornography and nudity. Those are things we don’t want on our platform because it gets into a moral gray area of, well, if you allow some nudity, how much is OK?
Mr. Jekielek: Got it.
Mr. Matze: Then it becomes arbitrary. And so we just said the FCC has the right to say that there’s no nudity on television during “x” amount of hours. So let’s just take their precedent, their standards, and apply it here. Because that way we can at least say, this is where we got this rule. It’s beyond us, right? It’s a bigger rule than us, and we can’t budge.
Mr. Jekielek: Second question that came to mind: What’s to prevent folks that want to incite trouble from coming on, maybe even at scale? I mean, there have been documented instances of folks coming in at scale trying to basically create problems, posing as different groups, and so forth. How do you deal with this kind of thing? Because, clearly, you want to be an open platform to everybody. You don’t want to be pigeonholed as a platform only for one group, right?
Mr. Matze: Yeah. When people come in, if they’re nefarious, there are certain things we can do and certain things we can’t. And there are certain things that the users can do as well. So depending on where it falls, it’s someone’s responsibility. So if they’re impersonating a figure, any company or person, and they’re blatantly impersonating them—they’re not that individual, they haven’t clearly stated this is a parody, and people are confused. And they come to us and say, is this the real—and to reference last week—James Woods?
Mr. Jekielek: Oh, OK.
Mr. Matze: No, this was not the real James Woods. They were pretending to be the real James Woods, to the point where they were arguing with us on the platform about verification status. “Get me verified. I’m the real James Woods.” We were [saying]: We’re pretty certain you’re not. But I don’t want to actually ban the real James Woods. I think he’d get mad at me. So we were very careful with that. But you can’t go on there and impersonate somebody. It’s illegal in some states to do that. Not all states, but some. And, typically, impersonation leads to rules being violated, like libel or slander.
So we don’t want to get into that mess. So we just fall back on the law on this one and say: Some states don’t allow it, and it leads to libel and slander. So no impersonation.
Mr. Jekielek: OK. No nudity, no pornography; that’s simple. It makes life easier for folks like us.
Mr. Matze: No fighting words. Fighting words, inciting violence are not allowed, death threats, blackmail, obscenity—obscenity as defined by the FCC, which is the Miller test. I don’t know if you’re familiar with the Miller test?
Mr. Jekielek: I’m a little bit familiar. Break it down for me.
Mr. Matze: Basically, if the content is sexual in nature, so not necessarily a photo or anything, but just saying something, [and] it’s deemed prurient, which is like disgusting, and not relevant to society. If it meets those three categories, it’s deemed obscene, and you can get rid of it. And so that’s where we’ll step in. Otherwise, if it doesn’t meet obscenity but you view it as hateful, you’ve got a little panel; every user does. And you can say, I want to mute that person, or I want to just mute that comment, and eventually we’re going to have tools where you can say, I want to auto-mute all hateful things up to a certain degree of hatefulness. That’s for the users to decide, not us as a platform, so that way they have all the comfort and tools they expect Twitter to provide for them, but they can decide to do it on their own without any intervention.
Mr. Jekielek: So, you’ve described hate speech as a doublespeak weapon used to enforce arbitrary rules against ideological opponents.
Mr. Matze: Yes.
Mr. Jekielek: Can you break that down for me a bit?
Mr. Matze: Hate speech is not defined by the Supreme Court. They’ve gone to the extent of not taking cases, or even overruling cases, where hate speech has been a topic, specifically because: How do you define hate? You really can’t. What’s hateful to me might or might not be hateful to you. You could go off a reasonable society standard, but then you need somebody to actively sit there and go: So what does society think is hateful today? And that, I think, is what we’re seeing on these social media platforms: Well, I deem that as hateful. Whereas you might say, that’s my political opinion. You know, this is what I believe. And that is an infringement on their First Amendment rights.
I think using words like hate confuses people. Because you have a lot of people who say they’re for the First Amendment, but they don’t want hate. … Well, that’s a very nice statement. And I think that in a perfect world there would be no hate, right? But the reality is there are hateful people. And having that discussion about hate is how you solve it. By hiding hate in a corner, you get people who go to the fringe of the internet, plot all these awful things, and then go take terrible actions. And I think having an open discussion about this is the key to solving it. And by using it as a weapon to say we’re going to ban that person because he’s hateful, what you’re actually doing, in some cases, is saying: I’m banning that person because I politically disagree with them, because I believe their political opinion is hateful. And now you can take that to any extent. That’s why it’s a weapon, and it’s terrible to use it as a weapon. Because I can say [that] whatever point you’re about to make right now, I can just say that’s hateful and shut you down. And that’s not fair.
Mr. Jekielek: So, basically, whenever hate speech is used, in your mind, and someone cites hate speech as the reason for an action or a banning or something, that’s basically weaponized in your view.
Mr. Matze: Not always. Of course, there are obvious instances of hate speech: When someone comes out and makes a blatantly anti-Semitic claim, that’s hate speech.
Mr. Jekielek: Well, and there seems to be debate about that even these days, frankly, right?
Mr. Matze: You know, it seems very interesting that anti-Semitism is tolerated more now and isn’t viewed as being as hateful as something Islamophobic in the world today. Society changes, certain perspectives change with certain people, and my perspective is, I don’t want to get involved in the hate argument. Let’s let the people discuss it, and we’ll just follow the law here. And people should come to their own conclusions. And generally speaking, what we’ve seen is good Samaritans come out, and they fight those who are saying hateful things, like my “noodle” example from earlier. Most people are not hateful. They just want to talk, and the more you censor them, the more hateful they get.
Mr. Jekielek: So, what happens in a situation where a number of people come in and try to manipulate the conversation? For example, there have been cases where a group of people has come onto a platform and posed as white supremacists to basically create problems. How do you deal with a situation like that?
Mr. Matze: Generally speaking, from what I’ve seen, our platform doesn’t really have a whole lot of those kinds of people yet. We’ve seen a couple of them come in, and they’re typically really suspicious. This is not someone’s real opinion; they’re trying to defame us or trying to make us look bad. Having real people having real conversations is one of the things that we like on our platform. And when you see two people with a red badge, that means they’re real people, and they’re having a real conversation with somebody.
When you see people like that, they’re never verified. They come hot and heavy right out of the gate. You know, they’re trying to troll really hard. What’s really interesting is the real people having real conversations, the red badge people, they take screenshots of those figures, and they share them around. They’re like: “Everybody, block this guy. He’s a troll, or let’s mute him.” And the best part about muting and letting users mute people is that the mute is a user-controlled shadow ban, if you will.
Mr. Jekielek: OK. Got it.
Mr. Matze: We don’t shadow ban as a platform. That’s morally wrong. But you do have the right to control what’s happening on your own profile. If you ban somebody, they’ll spin up another account, and they’ll come after you again. But if you mute them, they’ll keep trolling your account; they think they’re trolling your account, and they’re not. And so, as a user, that’s a powerful tool against angry, troll-like people, like you’re talking about, where fake white supremacists come in and defame people. It’s their tool to keep them away from their profile and to keep them from making them look bad.
Mr. Jekielek: Very interesting. So tell me, how are things going with this adoption of Parler as a commenting platform? Is it something that a lot of sites are getting interested in?
Mr. Matze: Yeah, it’s getting adopted more and more, especially with the publicity, because people are seeing their comment counts shoot up a lot. People who are just posting articles to Parler are seeing comments in the range of like one, zero, three, five, somewhere in there. When they have the Parler commenting platform on there, when people engage on Parler, it also engages on their website. There’s no barrier. The content is mirrored as the same thing on both. And all this conversation about the content is bringing people back. And you’re seeing that with the numbers that have increased. When our user load increases, our comment-integration partners’ load increases as well. So, for example, The Epoch Times: when we were on the [inaudible], the next day, you guys had 100-plus comments on everything. Whereas the week before, when we had maybe a couple thousand daily active users, you were seeing like three or four comments. It just shoots up with the user load. And so you can see it actually growing.
Mr. Jekielek: That’s a good pitch for folks coming on the platform. So tell me, actually, what is your pitch to users? Like how is this different than Twitter really?
Mr. Matze: Fundamentally, right now—
Mr. Jekielek: For conservatives, I can see it. Conservatives are coming over because they’re afraid of potentially being banned on Twitter or some other social media. What about other users?
Mr. Matze: Right now, we have a really good appeal with the conservative crowd; we’re trying to expand that. And I think by offering more tools like video sharing—our influence network that we’re trying to work on, where users can actually make money with us, instead of us essentially using them as monetary sources without rewarding them. That’s where we’re going to start seeing, I think, more universal growth.
Mr. Jekielek: How does that work?
Mr. Matze: The idea is that people—content creators, whether it’s a publication, an individual, a company—are creating content for the community, which is a resource that people need. And so when they create that content, other individuals, or advertisers, can seek it out and advertise with it or just give people a donation, if you will, and compete for being a part of that conversation. And in doing that, the content creators are going to be rewarded. And, obviously, we will as well. So we’re partnering with people. We’re not using them. We’re not selling their information either, which I think is something that’s becoming more and more of a concern.
Mr. Jekielek: So let me get this right. In this case, advertisers will specifically pick certain users if they like those users?
Mr. Matze: It’s not going to be a manual process. We’re going to be placing them with each other.
And the nice part about that is it eliminates a few problems that advertisers have today. One is, for example, if an ad is placed between two posts—let’s say I follow someone who’s deemed as hateful and I follow a—I don’t know, a dog groomer or whatever. I’ve got this nice dog picture and then I have this hateful piece, right? Something that Twitter or Facebook would deem as hateful, and then right in between is an ad for a shoe company. So someone takes a screenshot of that, and they send it to the shoe company, and they send it to everyone else, and they say: Your shoe company is supporting hate. And they [decide]: We’ve got to get rid of the hateful user now, because we’re going to lose advertising revenue if we don’t boot that hateful user off our platform. And so they boot the user.
With our model, the shoe company, instead of targeting the user who’s reading it, [is] targeting the influencer who has better influence over the followers. They know their followers better. They know it’s going to be effective and why they’re there.
Mr. Jekielek: Right.
Mr. Matze: And so when you target them with the ad, if someone [thinks] this is a hateful person you’re advertising with, then the advertiser can deal with the person directly. They don’t have to deal with us, and they can all work it out. Everybody can exist in one place.
Mr. Jekielek: You’re trying to address a lot of big challenges in the social media ecosystem.
Mr. Matze: We’re trying to create a free-market social media, if you will.
This interview has been edited for clarity and brevity.