
Why using Facebook and YouTube should require a media literacy test

Just like driving requires an exam, social media users should be required to take a 15-minute media literacy course, followed by a quiz, before using their platform of choice.

[Photo: Jaromir Kavan/Unsplash]

By Mark Sullivan

We don’t let people begin operating motor vehicles until they’ve taken driver’s education and then a test for a very good reason: Vehicles are dangerous to drivers, passengers, and pedestrians. Social networks and the misleading and harmful content they circulate are dangerous for society too, so some amount of media literacy education—and a test—should be a condition of using them.

Social media companies like Facebook and Twitter would surely object to such an idea, calling it onerous and extreme. But they willfully misunderstand the enormity of the threat that misinformation poses to democratic societies.

The Capitol riot gave us a glimpse of the kind of America misinformation helped create—and illustrates why it is so dangerous. On January 6, the nation witnessed an unprecedented attack on our seat of government that resulted in seven deaths and left lawmakers fearing for their lives. The rioters who caused this mayhem planned their march on the Capitol on social networks, including in Facebook Groups, and were stirred to violent action by months of disinformation and conspiracy theories about the presidential election, which they believed had been “stolen” from Donald Trump.

While the big social networks have made significant investments in countering misinformation, removing all of it or even most of it may be impossible. That’s why it’s time to shift the focus from efforts to curb misinformation and its spread to giving people tools to recognize and reject it.

Media literacy should certainly be taught in schools, but this type of training should also be made available at the place where people actually encounter misinformation—on social networks. Large social networks that distribute news and information should require users to take a short media literacy course, and then a quiz, before logging in. The social networks, if necessary, should be compelled to do this by force of law.

Moderation is hard

So far we’ve relied on the big social networks to protect their users from misinformation. They use AI to locate and delete, label, or reduce the spread of misleading content. The law even protects social networks from being sued over the content moderation decisions they make.

But relying on social networks to control misinformation clearly isn’t enough.

First of all, the tech companies that run social networks often have a financial incentive to let misinformation remain. The content-serving algorithms they use favor hyper-partisan and often half-true or untrue content because it consistently gets the most engagement—likes, shares, and comments—which in turn drives ad views. It’s good for business.

Second, large social networks are being forced into an endless process of expanding censorship as propagandists and conspiracy theory believers find more ways to spread false content. Facebook and other companies (like Parler) have learned that taking a purist approach to free speech—i.e. allowing any speech that isn’t illegal under U.S. law—isn’t practical in digital spaces. Censorship of some kinds of content is responsible and good. In its latest capitulation, Facebook announced Monday it will bar any posts of debunked theories about vaccines (including ones for COVID-19), such as that they cause autism. But it’s impossible for even well-meaning censors to keep up with the endless ingenuity of disinformation’s purveyors.

There are logistical and technical reasons for that. Facebook relies on 15,000 (mostly contract) content moderators to police the posts of its 2.7 billion users worldwide. And it is increasingly turning to AI models to find and moderate harmful or false posts, but the company itself admits that these AI models can’t even comprehend some types of harmful speech, such as within memes or video.

That’s why it may be better to help consumers of social content detect and reject misinformation, and refrain from spreading it.

“I have recommended that the platforms do media literacy training directly, on their sites,” says disinformation and content moderation researcher Paul Barrett, deputy director of the New York University (NYU) Stern Center for Business and Human Rights. “There’s also the question of should there be a media literacy button on the site, staring you in the face, so that a user can access media literacy data at any time.”

A quick primer

Social media users young and old desperately need tools to recognize both misinformation (false content spread innocently, out of ignorance of facts) and disinformation (false content knowingly spread for political or financial reasons), including the skills to uncover who created a piece of content and analyze why.


These are important elements of media literacy, which also involves the ability to cross-check information with additional sources, evaluate the credibility of authors and sources, recognize the presence or lack of rigorous journalistic standards, and create and/or share media in a manner reflective of its credibility, according to the United Nations Educational, Scientific, and Cultural Organization (UNESCO).

Packaging a toolkit of basic media literacy tools—perhaps specific to “news literacy”—and presenting them directly on social media sites serves two purposes. It arms social media users with practical media literacy tools to analyze what they’re seeing, and also puts them on alert that they’re likely to encounter biased or misleading information on the other side of the login screen.

That’s important because not only do social networks make misleading or untrue content available, they serve it up in a way that can disarm a user’s bullshit detector. The algorithms used by the likes of Facebook and YouTube favor content that’s likely to elicit an emotional, often partisan, response from the user. And if a member of Party A encounters a news story about a shameful act committed by a leader in Party B, they may believe it and then share it without noticing that the ultimate source of the information is Party A. Often the creators of such content bend (or completely break) the truth to maximize the emotional or partisan response.

This works really well on social networks: A 2018 Massachusetts Institute of Technology study of Twitter content found that falsehoods are 70% more likely to get retweeted than truth, and falsehoods spread to reach 1,500 people about six times faster than truth does.

But media literacy training also works. The Rand Corporation conducted a review of available research on the efficacy of media literacy education, and found ample evidence across numerous studies that research subjects became less likely to fall for false content after various amounts of media literacy training. Other organizations including the American Academy of Pediatrics, the Centers for Disease Control and Prevention, and the European Commission have reached similar conclusions and have strongly recommended media literacy training in schools.

Facebook has already taken some steps to embrace media literacy. It has partnered with the Poynter Institute to develop media literacy training tools for kids, millennials, and seniors. The company also donated $1 million to the News Literacy Project, which teaches students to scrutinize the sourcing of an article, make and critique news judgments, detect and dissect viral rumors, and recognize confirmation bias. Facebook also hosts a “media literacy library” at its site.

But it’s all voluntary. Requiring a training course and a quiz as a condition of admittance to the site is something different. “The platforms would be very hesitant to do that because they’d worry about turning away users and cutting down on engagement,” says NYU’s Barrett.

If the social networks won’t act voluntarily, they might be compelled to require media literacy education by a regulatory body like the Federal Trade Commission. From a regulatory perspective, this might be easier to accomplish than moving Congress to require media literacy education in public schools. It might also be a more focused way of mitigating the real risks posed by Facebook, compared to other proposals such as breaking up the company or removing its shield against lawsuits stemming from user content.

Americans first became broadly aware of the dangers of misinformation when Russian operatives weaponized Facebook to interfere in the 2016 election. But while Robert Mueller’s report proved that the Russians spread misinformation, the line of causality between that and actual voting decisions remained blurry. For many Americans, January 6 made disinformation’s threat to our democracy real.

As more tangible harm is directly caused by misinformation on social networks, it’ll become even more clear that people need some help fine-tuning their bullshit detectors before logging on.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

