
Tagging fake news on Facebook may make no difference—and could even backfire

In the wake of the 2016 presidential election, Facebook has implemented a number of initiatives to try to give users more confidence that what they’re reading is real. But a new study by researchers at Yale finds that users are only marginally more likely to understand that a story tagged as “fake news” is, in fact, not reliable.

One of Facebook’s efforts was to incorporate fact-checking by independent third parties. But the Yale study found that “the existence of ‘disputed’ tags made participants just 3.7 percentage points more likely to correctly judge headlines as false,” Politico wrote. Even worse, the researchers discovered a “backfire effect” among certain demographics (Trump supporters, and those between the ages of 18 and 25, to be precise): for these groups, stories tagged as potentially fake were actually more likely to be believed.

That’s discouraging for anyone who hoped Facebook could find ways to convince people not to buy the premise of the countless fake or misleading stories shared among its more than 2 billion users.

The company has had to grapple with numerous body blows recently, including revelations that Russians purchased substantial numbers of political ads in the run-up to the election. That hasn’t helped counter the perception that Facebook was instrumental in the election’s outcome. In its aftermath, the company has struggled to identify and block fake news (along with plenty of other controversial content).

“Together, these results suggest that the currently deployed approaches are not nearly enough to effectively undermine belief in fake news, and new (empirically supported) strategies are needed,” the researchers behind the study write. Another recent study, which showed that targeting susceptible users can fuel the spread of fake news, suggests that one effective approach to combating disinformation may be broad public awareness of it. That’s something Facebook has also been trying to build, especially around elections like Kenya’s recent fake-news-filled (and subsequently annulled) presidential vote.

Facebook told Politico that it rejects the Yale researchers’ findings, in particular because the work was conducted via an internet survey rather than on Facebook itself. The company added that it has implemented other safeguards against fake news beyond fact-checking and warning users.

And by the way: No, armed looters did not attack Richard Branson on his private island after Hurricane Irma.