When I speak to current and former Facebook employees about the recent acts of defiance against Mark Zuckerberg, I hear about a war raging between two ethical frameworks. On one side stands Zuckerberg, his VP of U.S. policy, Joel Kaplan, and a rapidly diminishing cadre of old-guard tech leaders. On the other stands a swelling number of rank-and-file employees along with the leadership of companies such as Twitter, Apple, and Snap.
Not long ago, Zuckerberg’s side, the side of neutrality, went unquestioned among tech giants. Tech platforms were to be neutral arbiters of free speech. It was their job to serve people the content that they wanted to see, not to decide which content was worth seeing. In Zuckerberg’s words, there is an ethical imperative for tech to be a “neutral platform for free speech” rather than an “arbiter of truth.” Zuckerberg and Kaplan proudly champion this stance to the news media and to their own employees. When other ethical issues are raised, such as concerns about promoting violence against peaceful protesters, they are weighed against this imperative for neutrality.
This stance has always had its skeptics. When decisions about a platform’s structure and ranking algorithms determine which ideas receive global amplification and which are ignored, it becomes difficult to stay out of the truth-defining business. Academic research on misinformation and polarization has made it increasingly clear that decisions made by tech platforms are having a massive, far-from-neutral impact on the way that people make sense of reality. For example, optimizing for engagement tilts public discourse toward outrage. To the many employees within tech companies who work on countering electoral manipulation, addressing anti-vaccine conspiracy theories, or confronting groups advocating violence, it has become apparent that prioritizing neutrality above all else has led to disastrous results.
An alternative viewpoint emerged, one that sees neutrality as impossible. Some tech leaders began a subtle shift, referring to their communities not as neutral platforms for free speech, but as communities built on principles. This approach sees neutrality as a dangerous fiction, and takes the more precarious position of stating what a platform stands for and curating content accordingly. Apple News, for example, uses human editors to select stories based on a set of editorial principles, not “neutral” engagement-optimizing algorithms.
Many are justifiably uncomfortable with the idea of a privileged tech elite openly foisting their principles on the world, even if the alternative is optimizing for outrage. Conservative news outlets, which disproportionately benefit from the status quo, are especially vocal about maintaining “neutrality.” Playing along has allowed tech companies to avoid uncomfortable questions about governance, accountability, and whose principles our sociotechnical infrastructure should reflect. And so even though fewer and fewer people in the Valley believed it, tech leaders maintained a veil of neutrality to hide behind. Until COVID-19 hit.
Overnight the notion of arbitrating truth went from an ethical gray zone to a public health necessity. Those familiar with misinformation predicted an unprecedented “infodemic” of life-threatening conspiracy theories and quack health promises and were quickly proven right. Tech platforms optimized for showing people what they want to see, “not arbitrating truth,” began fighting a losing battle against conspiracy theories targeting a COVID-19 vaccine still in development. The veil of neutrality was lifted. Major platforms such as Google Search and YouTube attempted to push people toward official resources such as the CDC and the WHO while taking stronger stances against health misinformation. Employees took increasingly bold, principled stances, first on the pandemic, and then on other critical topics such as the integrity of the elections and the movement for Black lives.
So when Zuckerberg refused to intervene on presidential posts that violated his and his company’s principles, the neutrality argument didn’t fly the way it might have a few short months ago. Employees walked out and quit. The CEOs of Twitter and Snap made a point of distancing themselves from his position, emphasizing not their commitment to neutrality, but the principles that their companies stand for.
Platforms such as Facebook, Google, and Twitter are already hotbeds of misinformation about COVID-19, climate change, and the fight against systemic racism. As the infrastructure of public sensemaking, these companies can’t afford to stay neutral on the crises of our time. Big Tech’s veil of neutrality is lifting, and that’s a good thing. If Zuckerberg loses this fight, and it seems increasingly likely that he will, it will lay bare a set of questions that he’d rather not have explored: questions about why a tiny cohort of billionaires gets to dictate the principles that shape our social fabric.
David Jay is the head of mobilization at the Center for Humane Technology, where he works with tech insiders to transform the industry.