In the aftermath of the 2016 election, it seemed to many like the rising tide of alt-right extremists had managed to hijack the national conversation to such an extent that they put their troll-in-chief—or God-Emperor, as some of Trump’s fans call him—in the White House.
How did this happen? How did the unpleasant underbelly of social media become so powerful as to start influencing what people read, how they think—and ultimately, how they voted?
These questions are at the heart of a new book by New Yorker staff writer Andrew Marantz called Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation. He spent three years embedded with fascists and alt-light propagandists, learning their backstories, their ideologies, and how they learned to use social media to propel themselves and their repulsive ideas into the limelight. Fast Company spoke to Marantz about the bright-eyed gatekeepers of new media platforms, why Trump should be banned from Twitter, and whether these people actually believe what they post online.
This interview has been edited for clarity and concision.
Fast Company: How did you first go down the alt-right hate speech rabbit hole, and how did you stay sane while reporting?
Andrew Marantz: I was not initially drawn to the darkest corners of the web or of our society just for the sake of it. What I was originally drawn to was a set of questions about what the internet was doing to our society and our brains. The way to show what the internet was doing to us was to embed with not only the gatekeepers of social media but also what I call the gatecrashers. I kind of just held my nose and did it.
FC: Several times, you portray the descent of a so-called “normie” into radicalization. How did you get inside the heads of people who are very hard for many of us to understand?
AM: One of the key underlying concepts running in the background of the book is this concept of contingency that I borrowed from the philosopher Richard Rorty. Contingency means that the future is not predetermined. We don’t know what the outcomes will be, and it’s contingent on various factors and various human actions. That in itself is obvious, but we often assume the opposite, either tacitly or explicitly. One of the ways that we tend to think non-contingently is we think of people as being set in their ideologies or their outlooks. And that’s not the case at all. Almost no one is born a white supremacist. You know, Derek Black [godson of David Duke] was, and Eli Saslow wrote a wonderful book about him. But basically other than that, everyone starts off thinking they’re opposed to overt white supremacy.
Now, we’re swimming in a river of white supremacy that is the bulk of American history, so it’s not like supremacy doesn’t touch the average white person’s life. It certainly does, but most people think they’re opposed to it. So I really wanted to track how people become seduced by this stuff, because if you don’t understand that, you don’t understand what’s motivating anyone to do this stuff. I take a pretty clear stance in the book. I’m Jewish. I hate Nazis. I’m opposed to all forms of bigotry. I don’t try to “both sides” that or pretend that I’m objective on those questions. But I do think it’s important to understand where these things come from and why people find these ideas attractive, even though they shouldn’t.
What you find when you actually look is weirder and more specific and textured and kind of unpredictable than anything that I could have invented.
FC: Do these people really, actually believe some of this stuff that they’re writing? Take the virulent anti-Semite Mike Enoch, for example. His wife is Jewish! (And of course she left him when she found out about all the crazy stuff he’d been saying.) Can you talk a little bit about this severe cognitive dissonance? Do these people really believe this stuff?
AM: I think there’s a spectrum. Some people don’t believe anything and they’re just nihilists or opportunists or just trying to make a buck or trying to get some attention, whether positive or negative. I think other people are true, dedicated ideologues. The metaphor they use is the metaphor of the red pill. They feel like Keanu Reeves in The Matrix, and they feel like the scales have fallen from their eyes and that the truth has been revealed to them. That’s a really powerful feeling. Even when you are feeling it erroneously, it can still motivate you really strongly, so strongly that you mess up your whole life because of it.
FC: The Overton Window is one of the most powerful metaphors in the book. Can you explain what it is and how it became so influential?
AM: The Overton Window was this concept invented by a guy named Joe Overton who worked at a think tank and wanted a metaphor to explain how ideas that were once unthinkable could become conceivable and then controversial and then enactable. In the realm of policy, the example people often use is same-sex marriage. Just a few years ago, that was off the political map. That was inconceivable. Now it’s essentially taken for granted, and that is wonderful.
But the Overton Window can be pushed in either direction. The reason I spent so long embedded with dodgy characters on the internet was to try to understand at a really deep level how they do what they do. The overall bottom line summary of what they are trying to do is they’re trying to shift the Overton Window in some extremely unsavory directions, and they are having a lot of success.
That can happen at a pretty deep or not entirely visible level. It doesn’t always happen on the level of some unsavory person showing up on TV or showing up in your inbox. It can happen in more indirect ways that still affect your life and the direction of the country you live in.
FC: I want to talk about the tech platforms, as that’s a huge theme in the book. You portray the deadly protests at Charlottesville in August 2017 as a turning point, at least for Reddit, in terms of moderating speech. I’m curious what it was that finally convinced some of these leaders to take a stand and why they took so long.
AM: To be fair to [cofounder and CEO] Steve [Huffman] and [cofounder] Alexis [Ohanian] and everyone else at Reddit, it wasn’t like August 2017 was the first time they cracked down on hate speech. They had many high-profile bans and disciplinary actions before that. So this was happening to some extent before Charlottesville; it just wasn’t happening nearly quickly or robustly enough, and there wasn’t really systemic, cohesive thought put into it.
That’s just because the view of the guys who founded Reddit, like the guys who founded a lot of these companies, was basically pretty simple: we believe in free speech. And to the extent that there were complications with that, they just dealt with them on an ad-hoc basis because the platforms were small enough that they could do that. And because they weren’t political philosophers, they were coders. They didn’t think it was their job to have a robust view of what free speech is and how it should work. They were just trying to iterate and ship code, and you know, try things and fail and fail better and try it again.
That’s the startup mantra, and it just turns out that when you’re building platforms that are going to be hugely influential constitutive parts of American discourse, that are going to have massive effects on presidential elections, that are going to determine the world’s climate change policy, then you have to grapple with it at a deeper level. And by the time they realized that their platforms would be that influential, it was in some senses too late. They always hoped that their platforms would be that influential, but I think they either didn’t fully believe it or didn’t fully prepare for it.
FC: The battle for the national conversation with the gatecrashers, as you so aptly call them, continues to rage. You point out that things that had once been considered ridiculous and unthinkable are now on Fox News, and the president is tweeting them daily. They’re just another thing on the internet. Where are we now in terms of the ways tech giants are policing free speech, and what do you think they really should be doing about this problem?
AM: I think the simplest way to describe where we are now is to borrow a term from Thomas Kuhn, the philosopher who invented the phrase “paradigm shift.” I think we’re in a phase now that Kuhn would call “crisis,” when you are between paradigms and you don’t know which paradigm will come to predominate. You have overthrown some previous paradigm, and you’re waiting for the revolution to deliver a new paradigm to you.
Kuhn was talking about this in the realm of scientific revolutions, but I think it applies to political and ideological revolutions as well. I think we have dispensed with a lot of the old ways of talking and thinking about not only politics proper but just who we are as a society. That has a lot of upside to it. There were a lot of ways in which the previous modes of discourse were extremely flawed and problematic. They didn’t take marginalized voices into account. They didn’t allow people to speak freely if they weren’t professional journalists, in a lot of cases. It’s just that there are massive downsides, too, including populism and violence and increased nuclear tensions, and the list goes on.
I think we are just starting to wrestle with both of those, and it’s going to take a lot of work and careful thinking to figure out how we can get out of this crisis. I don’t think there are going to be any easy answers.
FC: You did kind of pose one in your New York Times op-ed, in terms of replacing Facebook COO Sheryl Sandberg with the human rights lawyer Susan Benesch.
AM: The rhetorical point there is they could do this, but they’re not going to. Part of why they’re not going to is not because Mark Zuckerberg is personally evil and likes Nazis or something. He is motivated by the incentive structure of capitalism, and you could even make the case that if he did do something as rash as that, the board might replace him (though in Zuckerberg’s case, that would be extremely hard, given the way he structured his ownership of the company). But that’s part of why this is such a deep problem, because a lot of incentives that are extremely salient and robust are pointing in the wrong direction.
FC: I think it’s a really interesting point, both in terms of capitalist incentives but also just engagement and the way that these algorithms are built. What other kinds of incentives do you think that these platforms should encourage, and how could they do that?
AM: I think trying to ensure that the planet survives would be a decent incentive. I think making sure you don’t start any genocides would be a good thing to try to take into account. I’m not naive. I don’t think that these companies are going to become nonprofits, but it’s also true that the people who work at these companies want to be able to sleep at night, and they want to be able to look themselves in the mirror. Capitalism also allows you to sell tobacco, but tobacco executives should feel ashamed of themselves. We have a lot of social stigma in place to make them feel ashamed. I don’t think social media is like tobacco in the sense that it has no social value, but I think that it doesn’t have 100% pro-social value. It also has antisocial components to it.
FC: That’s also where the title of the book comes from.
AM: We were sold a bill of goods for about a decade about how these innovations would only be pro-social. And that’s just bullshit. They have pro-social elements, and they have antisocial elements.
I even put the dictionary definition [of antisocial] in the book because I know it’s not the most common word. I think of it like antibiotics and probiotics. Pro-social elements are those that tend to help a society flourish and feel well-connected to itself, and antisocial elements are those that tear society apart.
FC: Towards the end of the book, you paint this picture about how some of these trolls, like men’s rights activist and conspiracy theorist Mike Cernovich, are rebranding or distancing themselves from Trump. Does that mean that some of the moderation and deplatforming efforts—in which companies refuse to host a person’s content—actually work?
AM: I think platform moderation and deplatforming do work. I don’t think that means that they should always happen. I think it’s a case-by-case judgment, and obviously we need to be careful that these companies don’t just deplatform any speech that they disagree with. But I do think it works. I think we can look at some of these as success stories. If Alex Jones violates your terms of service again and again and again, eventually you can kick him off, and his ability to sell snake oil to innocent dupes will diminish. If someone who may or may not happen to be the President of the United States breaks your rules again and again, you’re allowed to kick them off, too.
FC: Do you think that Trump should be deplatformed?
AM: Yeah, I mean, he’s violating the rules. I think at the very least, they should send someone from the company to the White House and say, “Hey bro, you should stop threatening war on our platform because we have a rule against threats.” So maybe they could take that step before they just suspend him. I mean, they’re not going to do any of this.
FC: It makes me think too about Facebook’s so-called “supreme court,” which the company is talking about implementing to be an arbiter of these kinds of cases. These companies hold so much power to either make these decisions—or not. Do you think that there should be governmental oversight in that vein?
AM: It’s really tricky, right? The First Amendment means we have to be extremely careful about giving the government power to regulate speech. I definitely wouldn’t advocate for the government being in the business of establishing a ministry of truth. But that said, I do definitely think there is a role for government here. I just think we have to be exceedingly careful about what that role is.
I would also just point out that there is a lot of low-hanging fruit that can happen without the government doing a single thing. I’m thinking about reforming the way the platforms themselves work, what the algorithms incentivize. I don’t think this is really going to get better until platforms stop being run on emotional engagement. I don’t think it’ll get better until we stop incentivizing people to spark the most immediate, sharp incitement of every salient emotion they can and actually work on figuring out some way to promote a more pro-social realm of civic participation. But that’s harder to do, and it’s harder to make money that way.
FC: I’d love to talk a little bit about what’s going on right now. Have you noticed differences in terms of either the tools that propagandists are using or even where they’re congregating for the upcoming election versus the previous one?
AM: As Trump gets more desperate, he definitely relies more and more on his coalition of unsavory trolls and propagandists and alt-right and alt-light cheerleaders. There was a time when he could distance himself from them at least ostensibly, and he could wink and nod a little bit. As his support has diminished to essentially only his hardcore base, he has obviously felt the need to call on his trolly minions more and more. You saw the social media summit at the White House, which was a who’s who of online garbage. Several times throughout the presidency, he has retweeted just overt bigotry and xenophobia. I think a lot of that just has to do with his mood and how scared he is on a given day.
There are still huge loopholes that were exploited in 2016 that have not been fixed for the next election. But some of them have been fixed. And to me, I think the one silver lining of Trump winning the 2016 election is that it woke people up to just how severe these problems are and actually spurred some of these companies to action in a way that I frankly think would have been inconceivable if a few thousand votes in Michigan and Wisconsin had gone the other way.
FC: Are there things that you think everyday people should be watching out for on social media as we move into 2020?
AM: It’s really tough out there for everyday users. On the one hand, you want people to be aware that disinformation and misinformation and invitations down a slippery slope to fascism can be anywhere you look. On the other hand, you don’t want to freak people out so much that they stop trusting anything, throw up their hands, and say, “Oh, who knows what’s true, I’ll just trust my gut.”
A lot of times when people are sowing misinformation, their goal is not to get everyone to believe every piece of misinformation but rather to create muddle and confusion and exhaustion so that nobody believes anything. I don’t want to be out here saying, “Don’t trust anything you read, just retreat to the five people you know,” because you do have to be informed and participate in society.
You often hear people throughout the political spectrum saying, “I’m skeptical of mainstream commercial news organizations because they are trying to make a buck.” That’s great. You should be skeptical of mainstream news organizations. But I don’t see that same skepticism applied to what people read on Twitter or Facebook, as if Twitter and Facebook are not out there to make a buck. It has this mask of seeming like it’s coming from your friends in some sense, but it’s also coming to you through an extremely sophisticated algorithm that is trying to keep you addicted to this platform in order to pay someone’s bills. That doesn’t mean that everything you read on Facebook is false or that everything you read on Facebook is true. It just means, let’s try to apply our skepticism evenly.
FC: If there’s one overarching takeaway that you hope people get from your work, what would you say?
AM: I mean, this might be kind of self-serving or rationalizing, but there’s a certain amount of stuff we can understand at a bumper sticker level or at a tweet level or even at a TED talk level, and I say this as someone who has given a TED talk. But there is a certain amount of depth and real, felt, visceral understanding of this stuff that you just can’t get at that level. If there’s one takeaway, it’s that by reducing this stuff so far down that we can pretend we understand it without really getting all that deeply into it, we’re actually just playing the wrong game. And the only way to work ourselves out of this phase of what Kuhn would call crisis is to really look at it closely and really figure out what’s going on at a felt, societal, psychological level.
I start the book with an epigraph from James Baldwin: “Not everything that is faced can be changed, but nothing can be changed until it is faced.” And facing it doesn’t mean glancing at it, dunking on it or ratioing it, and then looking away. It means really looking at it.