Want to solve the misinformation crisis? We already have a proven solution at our fingertips

[Source Image: Val_Iva/iStock]

By Avi Tuschman

The most dystopian feature of our time is not that we face formidable challenges; in a different era, we might have had enough shared beliefs to navigate a bloodless presidential transition, vaccine hesitancy, racial tension, and even climate change. Today, however, our body politic suffers from an informational infection that hinders our ability to respond adequately to these serious threats.

Enough misinformation has been injected into social media channels to keep our society divided, distrustful, and hamstrung. AI-powered recommendations of user-generated content have exacerbated political polarization. And foreign governments have expertly manipulated these algorithms to interfere in the 2016 and 2020 U.S. elections. Russia-sponsored disinformation on YouTube has continued to generate billions of views beyond election years and has fomented conspiracy movements that the FBI has deemed a domestic terror threat.

Social media is also the primary vector of the COVID-19 “infodemic,” which has contributed to the 60 percent of pandemic deaths conservatively determined to have been avoidable. The head of the WHO has warned that “fake news spreads faster and more easily than this virus, and is just as dangerous.”

What has gone wrong? Google, YouTube, and Facebook are the world’s top three websites outside of China. Their status is not an accident: They organize the world’s information exceptionally well according to popularity-driven algorithms. Despite these platforms’ benefits, popularity has an uncomfortable relationship with factuality. Systems that optimize for viral content quickly spread unreliable information. Over a quarter of the most viewed English-language coronavirus YouTube videos contain misinformation. On Twitter, MIT scholars have calculated that fake news travels six times faster than true stories.
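To see why an engagement-only objective behaves this way, consider a toy sketch in Python. Everything here (the posts, the numbers, and the scoring formula) is invented for illustration and represents no platform's actual ranking system:

```python
# Toy illustration: a feed ranked purely on engagement signals surfaces the
# most shareable items regardless of accuracy. All data and the scoring
# formula are hypothetical.

posts = [
    {"title": "Peer-reviewed vaccine study", "accurate": True,  "shares": 120,  "comments": 40},
    {"title": "Miracle-cure conspiracy",     "accurate": False, "shares": 9000, "comments": 2500},
    {"title": "Public-health guidance",      "accurate": True,  "shares": 300,  "comments": 90},
]

def engagement_score(post):
    # The objective rewards popularity alone; accuracy never enters it.
    return post["shares"] + 2 * post["comments"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f'{engagement_score(post):>6}  accurate={str(post["accurate"]):<5}  {post["title"]}')
```

The false-but-viral item tops the feed every time; nothing in the objective pushes back.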

How can we improve our information systems to save lives? As NYC Media Lab’s Steve Rosenbaum has pointed out, neither the tech platforms nor governments can be trusted fully to regulate the internet, “So it’s like we want this magical entity that isn’t the government, that isn’t Facebook or YouTube or Twitter.”

Rosenbaum is absolutely correct: Solving the misinformation crisis requires a “magical” third entity that lacks any incentive to manipulate information for economic or political ends. However, it is the tech companies that could build a sufficiently fast and scalable system for distinguishing facts from falsehoods. Such a solution is not merely theoretical; its key components are already well developed and proven.

Among the world’s top websites there is one exceptional case that has not evolved to sort content by popularity. The fifth-largest website outside of China organizes the world’s information according to reliably documented facts. It ranks higher than Amazon, Netflix, and Instagram. This website is Wikipedia.

But how accurate is it, really? In 2005, a blind study in Nature concluded that Wikipedia contained no more serious errors than the Encyclopædia Britannica. In 2007, a German journal replicated these results with respect to Bertelsmann Enzyklopädie and Encarta. By 2013, Wikipedia had become the most-viewed medical resource in the world, with 155,000 articles in 255 languages and 4.88 billion page views that year. Between 50% and 70% of physicians and over 90% of medical students use Wikipedia as a source for health information.

Today, Wikipedia is cited in federal court documents and is relied upon by Apple’s Siri and Amazon’s Alexa. Google draws heavily from Wikipedia, providing excerpts for their search engine’s popular Knowledge Panel. Wikipedia’s handling of COVID-19 was described in The Washington Post as “a ray of hope in a sea of pollution.”

How has Wikipedia become “the largest bibliography in human history” and the “commons of public fact-checking”? The platform has three simple core content policies: Neutral Point of View, Verifiability, and No Original Research; yet it is also governed by hundreds of pages of policies and guidelines, which have become a veritable body of common law.

While anyone can submit an edit, Wikipedia has a formal hierarchy of administration. Editors strive to reach consensus, but the platform also provides a range of conflict-resolution mechanisms. Wikipedia then enforces the results on sensitive pages through 11 methods of page protection. Human oversight works in concert with AI-powered vandalism-reversing bots, which can make thousands of edits per minute. Crucially, all of this occurs in a transparently logged environment.
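For a sense of how such bots triage edits, here is a minimal, hypothetical scoring heuristic in Python. Production bots like ClueBot NG rely on trained classifiers over many more edit features, so the patterns, weights, and threshold below are illustrative assumptions only:

```python
# Minimal sketch of rule-plus-score triage for a vandalism-reverting bot.
# Features, weights, and threshold are invented for illustration.

import re

SUSPECT_PATTERNS = [r"(?i)\b(viagra|lol+|asdf)\b", r"!{3,}"]

def vandalism_score(old_text: str, new_text: str, editor_is_anonymous: bool) -> float:
    score = 0.0
    if len(new_text) < 0.2 * len(old_text):       # page blanking
        score += 0.6
    if any(re.search(p, new_text) for p in SUSPECT_PATTERNS):
        score += 0.3
    if editor_is_anonymous:                        # weak prior, never decisive alone
        score += 0.1
    return score

def should_revert(old_text: str, new_text: str, anonymous: bool,
                  threshold: float = 0.7) -> bool:
    return vandalism_score(old_text, new_text, anonymous) >= threshold

print(should_revert("A long, well-sourced article about history. " * 10,
                    "lol!!!", anonymous=True))     # -> True
```

Because every bot action lands in the public edit log, a mistaken revert is itself one click away from being reverted, which is what lets automation this aggressive coexist with human oversight.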

The output of this extraordinary fact-verification technology is absolutely eye-opening: Just read the first paragraphs of Wikipedia’s article on the “Global warming controversy.” Compare Wikipedia’s article on “Vaccines and autism” with the top five hits for these words on YouTube, where 32% of vaccine videos oppose immunization.


[Screenshot: Wikipedia]
Social media platforms can leverage Wikipedia’s strengths to reduce their own weaknesses. They must provide an open-source fact-checking space in their content-moderation systems. Doing so is not only an ethical responsibility; it’s also a smart move to get ahead of the regulatory hammer.

Here’s how it could work: A tiny percentage of social media content contains viral misinformation deleterious to public health. Tech companies should start by implementing policies to make such content eligible for open-source fact-checking. The platforms could use several mechanisms to pass suspect content to a distributed review process. There, fact-checking users would utilize the same open-source software and mechanisms that have successfully evolved on Wikipedia to adjudicate verifiability. The “visible process” of fact-checking would occur on a MediaWiki instance, ideally governed by a multi-stakeholder organization. The facts themselves—the “ground truth”—would be English-language Wikipedia text, from articles that meet minimum authorship and editorship thresholds.
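A minimal sketch of that pipeline might look like the following Python. The data structures, eligibility threshold, quorum rule, and verdict labels are all hypothetical; the only real interface used is Wikipedia's public REST summary endpoint, standing in for the "ground truth" lookup:

```python
import json
import urllib.request
from dataclasses import dataclass, field

@dataclass
class FlaggedPost:
    post_id: str
    claim: str             # checkable claim extracted from the post
    topic_article: str     # Wikipedia article serving as ground truth
    verdicts: list = field(default_factory=list)

def is_eligible(views: int, public_health_topic: bool,
                view_threshold: int = 100_000) -> bool:
    # Step 1: only viral content in sensitive domains enters the review queue.
    return public_health_topic and views >= view_threshold

def ground_truth_summary(article_title: str) -> str:
    # Step 3: fetch the lead summary of the reference article from the
    # public Wikipedia REST API.
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{article_title}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["extract"]

def adjudicate(post: FlaggedPost, quorum: int = 5) -> str:
    # Step 2: volunteers compare the claim against the reference text and log
    # verdicts transparently; a simple majority of a quorum decides here,
    # though Wikipedia's real consensus process is far richer than a vote.
    if len(post.verdicts) < quorum:
        return "pending"
    unsupported = sum(1 for v in post.verdicts if v == "unsupported")
    return "derank" if unsupported > len(post.verdicts) / 2 else "keep"

post = FlaggedPost("p1", "Vaccines cause autism", "Vaccines_and_autism")
if is_eligible(views=2_000_000, public_health_topic=True):
    reference = ground_truth_summary(post.topic_article)   # shown to reviewers
    print(reference[:100], "...")
    post.verdicts = ["unsupported"] * 4 + ["supported"]    # simulated reviews
    print(adjudicate(post))                                # -> "derank"
```

The majority vote is a deliberate simplification; the proposal's actual adjudication would inherit Wikipedia's consensus and dispute-resolution machinery, the decisive property being that every input and verdict sits in an auditable public log.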

A massive third-party workforce—social media users—is already available to power this solution. Wikipedia demonstrates that millions of volunteers will check facts without monetary compensation. In fact, research shows that people are wired to punish moral transgressions in exchange for nothing more than the resulting dopamine stimulation in the brain. Indeed, altruistic punishment already constitutes a significant proportion of social media activity today. Tech platforms need only harness this instinct to clean up harmful misinformation.

Further research on fact-checker behavior could help clarify the required scale of a user-powered content-moderation mechanism. Some Wikipedia editors might not want to work for the “benefit” of a for-profit company. However, social media fact-checkers would likely come from a far larger pool of people. There are precedents for crowd-sourced work contributing to large tech companies. For example, Local Guides enrich Google Maps with a significant amount of information; this motivational loop works because Guides are motivated not by working for Google but by helping friends and family.

When recruiting fact-checkers, social media companies should convey two important points: (1) Fact-checking benefits the community by reducing misinformation; and (2) harmful content will be deranked and demonetized, reducing the profitability of bad content for both third-party creators and the tech platform.
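Point (2) could plug into ranking as a simple post-verdict adjustment. A hedged sketch, with invented multipliers and field names:

```python
# Hypothetical post-verdict hook: lower both ranking score and ad
# eligibility for content judged unsupported. Multipliers are illustrative.

def apply_verdict(ranking_score: float, monetized: bool, verdict: str):
    if verdict == "derank":
        return ranking_score * 0.1, False   # heavy derank plus demonetization
    return ranking_score, monetized

score, ads_on = apply_verdict(ranking_score=875.0, monetized=True, verdict="derank")
print(score, ads_on)   # 87.5 False
```

Because the penalty removes the revenue as well as the reach, neither the creator nor the platform retains a financial stake in keeping the content viral.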

How quickly we adapt the most successful fact-checking technology to our popularity-maximizing social news platforms has immense ramifications. If we maintain the status quo, we will live in an increasingly dangerous post-factual era. However, if we mitigate key areas of misinformation on Facebook, YouTube, and Twitter half as well as Wikipedia has, our information age will succeed in increasing shared knowledge, understanding, and well-being for all.


Avi Tuschman is a Stanford StartX entrepreneur, a pioneer in commercializing Psychometric AI, and the author of Our Political Nature: The Evolutionary Origins of What Divides Us. This article abbreviates a white paper he presented at the Stanford Cyber Policy Center, titled “Rosenbaum’s Magical Entity.”
