Facebook has long told the world that it wants to improve its content moderation game, and the world has waited for results. Today, the company announced a slew of new tools to combat fake news: more robust fact-checking in more countries, new technical measures for sussing out bad content, a crackdown on repeat offenders, and an expanded test for fact-checking photos and videos.
That last one is especially interesting because photos and videos have long been neglected by Facebook’s fake news moderators. As I wrote a few months back, misinformation often goes viral on Facebook through images like memes that spread inaccurate or inflammatory content. Until now, if users tried to flag a questionable meme, they weren’t able to report it as “false news.”
Now, it seems Facebook may be wising up to this omission. It says it will expand its test for fact-checking photos and videos to four countries. The company writes in its blog post that this new fact-check will cover “[pictures and videos] that are manipulated (e.g., a video that is edited to show something that did not really happen) or taken out of context (e.g., a photo from a previous tragedy associated with a different, present-day conflict).” Let’s hope that it also covers fake news memes.
This is all part of Facebook’s plan to protect its platform ahead of the midterm elections. We’ll have to wait and see whether these fixes are enough.