
It’s time for tech platforms to stop tolerating the intolerable

To be truly inclusive, the internet needs to get comfortable with exclusion.


The internet is not what it was supposed to be. Early tech pioneers and the tech entrepreneurs who came after them imagined a world without borders that would bring us together, where all human knowledge would be available at our fingertips and everybody would have a voice. If there had been a motto for what the internet was supposed to stand for, it would have been “collaboration, openness, freedom.” But while we got some of that, we also got polarization, falsehoods, rising extremism that destabilizes societies, autocratic powers that undermine democracies, and giant tech companies that are eliminating privacy. We got a system that not only permits, but amplifies hate, bullying, and divisiveness.


Big Tech has timidly begun to manage the content available on its platforms, and the debate over whether the internet should be a place where anyone can say anything is raging. Facebook took down Trump ads featuring a Nazi symbol and a campaign video containing COVID-19 misinformation. Reddit banned 2,000 subreddits, including “The_Donald,” a forum with nearly 800,000 users, for repeated violations of the platform’s rules against hate speech, harassment, and content manipulation. Twitter started adding fact-check notices to tweets and even briefly suspended the Trump campaign’s account for spreading false information. Stefan Molyneux, a Canadian white supremacist, was banned from YouTube. Trump was suspended from Twitch for “hateful conduct.”

Some people lambast these moves as putting tech giants on a slippery slope that leads straight to censorship. Others are more quietly concerned about what they mean for the future of free speech. But these arguments seem to overlook that the internet today is a driver of division and chaos because we have lost sight of a simple truth: To be truly inclusive, the internet needs to get comfortable with exclusion.

“If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them,” Austrian philosopher Karl Popper warned 75 years ago in laying out the Paradox of Tolerance. He raised the idea that a line needs to be drawn when the preservation of society is at stake. In other words, if we truly value democracy and the free-speech environment that have allowed Facebook, Twitter, YouTube, and many other sites to flourish as places that showcase a multiplicity of voices, we need to urgently stop broadcasting the voices that threaten this environment. As UN Deputy Secretary-General Jan Eliasson declared during the celebration of the 50th Anniversary of the International Convention on the Elimination of All Forms of Racial Discrimination: “Racist hate speech […] threatens to silence the free speech of its victims.”

To be tolerated, an opinion has to respect the right of all humans to exist equally. As African American novelist and activist Robert Jones Jr. wrote: “We can disagree and still love each other unless your disagreement is rooted in my oppression and denial of my humanity and right to exist.” When someone is calling for a group of people to be discriminated against, for someone to be bullied, raped, or murdered, for a specific race or gender to be considered inferior, they are by definition making it impossible for the space they are occupying to be inclusive.

And while beliefs, truth, and interpretation of facts can be debated, it is incredibly dangerous to extend intellectual relativism to facts. “Facts do not cease to exist because they are ignored,” Aldous Huxley noted in the 1920s. A common understanding of facts is required to enable any kind of discussion, collaboration, and ultimately trust between human beings. When the frontier between reality and fiction is constantly undermined, the foundations of human society start to irremediably crumble.

Tech platforms’ laissez-faire approach to content moderation—draped in the flag of free speech and amplified by algorithms that are optimized to incite extreme emotions—has created a swamp where the strongest, loudest, and most outrageous voices triumph. By trying to be inclusive without setting boundaries and protections, we’ve allowed people who crave division and violence to have more reach than ever: They’re undermining the fabric of the society that allows them to exist in the first place.


There are those who argue that imposing any kind of limitation on content will inevitably lead to governments muzzling dissidents and suppressing criticism. They seem to forget the continent I’m from. Europe has been doing this for decades, and while mistakes have been made, free speech is alive and well in most quarters. Following World War II, Western European democracies implemented a series of measures that protect free speech while ensuring it isn’t absolute and takes other values and rights into account, such as human dignity and safety. In Germany, Austria, and France, for example, Holocaust denial is punishable by law. Germany has banned parties with Nazi ideologies.

This approach also works well in the business world. Across the globe, companies that want to be inclusive and ensure equal opportunities for all employees tend to have zero-tolerance policies toward hate speech, racism, discrimination, and fabrication of facts. In other words, to remain tolerant and competitive, they don’t tolerate intolerance and misinformation.

When it comes to the internet, some of the largest platforms recognize the need to restrict speech. Wikipedia puts content through a fairly intensive vetting process focused on ensuring accuracy of facts. Social networks have spent considerable time defining comprehensive guidelines around what’s acceptable and what’s not, and have hired moderators to enforce them. Facebook released community standards in 2018 outlining six types of content that would be restricted, and it employs more than 15,000 moderators in the U.S. to enforce (at least some of) these restrictions. YouTube, Twitter, Reddit, and others have rules that explicitly ban certain types of content. Unfortunately, judging by the results, these efforts have been inadequate: the rules have been a moving target, and sanctions have been applied inconsistently.

So, before we talk about whether we need new rules and guidelines for content on the internet, we need tech platforms to start enforcing their own guidelines with strong conviction and the full force of their formidable resources. Tech used to be the place that attracted people who wanted to disrupt the status quo and were not afraid to go to war with the powerful, the rich, the established. Early tech pioneers were fearless in their conviction that they were building a better, more inclusive world. We need today’s tech leaders to be equally fearless, for the world and for our democracies. Only by being intolerant of the intolerant will we realize the original dream of building an inclusive internet.

Maelle Gavet has worked in technology for 15 years. She served as CEO of Ozon, an executive vice president at Priceline Group, and chief operating officer of Compass. She is the author of a forthcoming book, Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It.
