Facing increasing scrutiny from regulators, the media, Wall Street, and users in the wake of continuing scandals, including the Cambridge Analytica controversy and Russian operatives leveraging its platform to try to influence the 2016 presidential election, Facebook today laid out four steps it plans to take to safeguard election security going forward:
- Combat foreign interference
- Remove fake accounts
- Increase ads transparency
- Reduce the spread of fake news
The company says it will begin hunting for “potentially harmful types of election-related activity, such as Pages of foreign origin that are distributing inauthentic civic content.” If it finds them, it will manually review the accounts behind them to see whether they violate Facebook's community standards or its terms of service.
Facebook says it has been deploying machine learning tools that look for suspicious behavior, identifying and blocking millions of fake accounts each day “at the point of creation.” Now, it’s also starting to use a new investigative tool to try to find fake accounts in the lead-up to this fall’s elections.
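Facebook hasn't published how these models actually work. Purely as an illustration of the general idea of blocking accounts at the point of creation, here is a toy sketch that scores hypothetical signup-time risk signals; every signal name, weight, and threshold below is invented for the example:

```python
# Toy illustration only: Facebook has not published details of its models.
# The idea: score risk signals observed at signup time and block accounts
# whose combined score crosses a threshold, before they can post anything.
# All signal names and weights here are hypothetical.

RISK_WEIGHTS = {
    "disposable_email_domain": 0.4,
    "ip_on_known_botnet": 0.5,
    "burst_signups_from_ip": 0.3,
    "profile_photo_reused": 0.2,
}

BLOCK_THRESHOLD = 0.6
REVIEW_THRESHOLD = 0.3

def risk_score(signals: dict) -> float:
    """Sum the weights of whichever risk signals fired, capped at 1.0."""
    score = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def decide_at_creation(signals: dict) -> str:
    """Block, queue for manual review, or allow a new account at signup."""
    score = risk_score(signals)
    if score >= BLOCK_THRESHOLD:
        return "block"
    if score >= REVIEW_THRESHOLD:
        return "manual_review"
    return "allow"

# Example: a signup from a botnet IP using a throwaway email address.
print(decide_at_creation({"ip_on_known_botnet": True,
                          "disposable_email_domain": True}))  # block
```

In practice a system like this would be a trained classifier over far richer features, but the block/review/allow decision at creation time is the part the announcement describes.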
As for ads transparency, the company says that in the months before this fall’s elections, political advertisers will be required to confirm their identity and their U.S. location through a manual, multi-step process. Advertisers will also need to say which candidate, organization, or business they’re backing.
Finally, Facebook says it’s using a growing number of signals, including feedback from users, to flag potentially fake news stories for manual fact-checking. If a story is deemed false, Facebook takes steps to slash its future views by more than 80% (it doesn’t say why it doesn’t cut 100% of the views, however). It also notifies people who previously shared the story, and those who try to share it, that it’s fake. And it shows more fact-checked related articles to those who do still see the fake stories.