A new study by researchers at MIT and the University of Regina found people are less likely to digitally share false stories about COVID-19 after they’ve been asked to evaluate the accuracy of another headline.
That suggests an option for social media platforms looking to limit the spread of misinformation, the researchers say.
“There are a lot of ways the platforms could implement this that all boil down to, essentially, raising the concept of accuracy in users’ minds while they’re on platforms,” says David Rand, a professor at MIT and one of the study’s coauthors, in an interview with Fast Company.
That can be as simple as periodically surveying users about how they feel about the importance of accuracy, the researchers say. It’s a potentially easy way to approach what’s proven to be a thorny problem: widespread sharing of potentially dangerous misinformation, from conspiracy theories to dubious cures, about the deadly pandemic on social media platforms. Researchers have spotted large volumes of false information spreading about the virus, a topic that’s become increasingly politicized as the pandemic continues.
In the study, published in the journal Psychological Science, the scientists first asked one group of participants to evaluate the accuracy of 30 Facebook feed-style posts about the coronavirus, each with a headline, first sentence, and photo. Half of the stories were true and half were false, and the participants got the accuracy right about two-thirds of the time.
A second group wasn’t asked about accuracy but instead rated whether they would share the stories online. Whether stories were true or false seemed to have little effect on how likely they were to be shared. But participants who scored higher on a test of science knowledge and on a cognitive reflection test, which measures whether people seek out counterintuitive answers to problems, were more likely both to rate the accuracy of articles correctly and to distinguish true from false stories when deciding what to share.
In the next part of the study, the researchers asked two groups of participants whether they would share the stories based on the headlines and initial sentences. One group was simply shown the posts, while the other group was first asked to judge the accuracy of a post unrelated to the virus. The latter group was much more discerning, sharing primarily true posts—indicating that simply getting people to think about accuracy could carry over to their social media activities.
Right now, the researchers suggest, social media apps primarily prompt people to think about getting likes, reposts, and new followers rather than sharing the truth.
But social media platforms could reorient this incentive by periodically prompting users with “accuracy nudges” that push them to think about headline accuracy in general. While it’s uncertain how long the effect of getting people to think about accuracy lasts, exposing different users to such prompts can still be potentially useful, Rand says. If your friends aren’t sharing false posts as a result of one of these accuracy nudges, you will likely see fewer false headlines in your feed—and be less likely to share them yourself.
The group’s research proposes that successful nudges could take a variety of forms. One option is simply prompting users generically to weigh in on how important they think accuracy is in deciding what to share. Another is periodically asking them to evaluate the truthfulness of actual content already shared on the platform.
“One of the things that I like so much about that approach is it’s kind of doubly useful,” Rand says. “Asking users to rate content gets them to think about accuracy and generates useful input for the platforms.”
The researchers are currently investigating how effective crowdsourced fact-checking can be at replacing expert verification, which can be hard to scale since there are only so many professional fact-checkers available to social media companies. A previous study led by Rand of Facebook fact-checking efforts, which add editorial labels to certain stories, suggested they can backfire if people assume that stories that don’t have a fact-checking label are true.
Even if the big social media companies don’t implement accuracy nudge features themselves, it’s possible that third parties could do so through advertising on the platforms, even potentially targeting ads to users who’ve been sharing misinformation or show signs they’re likely to do so.
“We’re looking at other ways you could do it using ads,” Rand says. “If you’re a civil society organization, you could buy ads saying, ‘How accurate do you think this headline is?’ or ‘How much do you care about accuracy?’”
If ads like this could prevent people from unwittingly sharing COVID-19 conspiracy theories, they could be well worth the price.