Tech platforms screwed up the last election. Here’s how they’re prepping for 2020

We asked major social networks including Facebook, YouTube, and Twitter how they’ll fight fake news and other attempts to undermine election integrity this year. Here’s how they measure up.

[Photo: Mizter_X94/Pixabay; ShariJo/Pixabay; Clker-Free-Vector-Images/Pixabay]

When misinformation spread across Facebook and other social networks during the 2016 U.S. election season, none of them were prepared.

Nathaniel Gleicher, Facebook’s head of security policy, admitted as much in a Des Moines Register op-ed in January, writing that the presidential election cycle was a “wake-up call” for the company. “Facebook—and our country—was caught off guard by Russia’s attack on our elections using social media, fake accounts, forged documents, and other forms of manipulation. It forced wholesale changes in how we, as a company and as a nation, approach these issues,” Gleicher wrote.

Americans aren’t convinced that Facebook and other tech companies have done enough. A survey in January by Pew Research Center found that just 25% of U.S. adults were confident that tech companies would prevent misuse of their platforms during the 2020 presidential election, down from 33% heading into the 2018 midterm elections.

Still, these companies insist that they’ve learned their lessons and are trying to do better this time around. We asked major social networks, including Facebook, Twitter, YouTube, Snap, and Reddit, for a rundown of their policies and preparations. While most companies say they’re taking election integrity seriously, they also disagree in many ways on what exactly that means. Here’s how they compare.

Misinformation

Facebook has famously enlisted third-party fact-checkers such as the Associated Press, Reuters Fact Check, and PolitiFact since the end of 2016, and it remains the only major platform to do so. When those checkers find a questionable post, either on their own or through user feedback, they review the content and assign an accuracy rating. Posts with false information show a warning that users must dismiss before they can view the content, and accounts that repeatedly misinform can be demoted by Facebook’s and Instagram’s filtering algorithms. They can also face restrictions on advertising and monetization.

YouTube started surfacing fact checks on sensitive topics in India and Brazil last year, but in the United States, the bigger focus is on making misinformation less prominent. It started scaling back recommendations of “borderline” content in early 2019 and says it’s promoting “authoritative voices” (read: major news sources) in search and recommendations. While YouTube doesn’t prohibit misinformation in general, it does ban false claims about candidates’ eligibility (read: birtherism).

Twitter, for its part, hasn’t quite settled on how to police misinformation on its platform. While it recently started testing a community-based points system that would call out false claims, the company told NBC News that it’s considering many other ideas as well.

Snap, meanwhile, takes a different approach by limiting what appears in its Discover section. Snap vets all media companies before allowing them onto the platform, uses professional journalists to curate stories, and moderates any Discover content made by users (using both humans and machine learning). A spokesperson also notes that because the platform is ephemeral by design and has no likes, shares, or comments, content doesn’t go viral the way it does on other networks.

Reddit says that in many cases, its community does the heavy lifting by setting its own moderation rules, with many news communities prohibiting sensational or editorialized headlines. Still, the site does sometimes enforce its own rules against misinformation and hoaxes. Last year, for instance, it placed the subreddit r/The_Donald in “quarantine,” preventing it from generating revenue and keeping its content out of searches and recommendations. The Daily Beast also reported last week that Reddit has been removing some moderators from r/The_Donald and vetting new ones.

Reached for comment, both TikTok and Pinterest also pointed to policies against misinformation on their platforms, with the latter saying it relies on outside experts, internal reviews, and user reports for enforcement.

One other thing to note: Facebook, Twitter, and YouTube have all banned “deepfakes,” videos that use AI or editing techniques to make a subject appear to do or say something they didn’t. Reddit has also specifically banned deepfakes intended to mislead. In contrast, TikTok and Snap have not announced any explicit policies against deepfakes and are even building businesses around the underlying technology.

Fake and inauthentic accounts

Tech companies also say they’re cracking down on bots and fake accounts, regardless of whether the information they’re sharing is true or false.

Facebook’s policies against “coordinated inauthentic behavior,” for instance, say that a foreign-led group of Pages can’t masquerade as local activists and that domestic groups can’t create fake accounts to amplify one another. Facebook says that when it detects this behavior from domestic, nongovernment groups, it removes both the fake accounts and the real accounts associated with them. (The company does, however, seem to bend its rules in some cases.) If governments or foreign actors are involved, Facebook says it removes all related properties and any associated people or organizations.

Twitter also strengthened its rules around fake accounts ahead of the 2018 U.S. midterms, and it now weighs additional factors, such as stolen avatar photos, copied profile bios, and misleading profile locations, in determining whether an account is fake. Like Facebook, Twitter now removes authentic accounts associated with misleading activity, and it tries to boot accounts that attempt to replace previously suspended ones. (That said, CNN recently found that Twitter had verified a fake candidate for Congress, so it’s hard to say how well the company is enforcing these policies.)

Similarly, TikTok’s guidelines prohibit “coordinated attempts to manufacture inauthentic activity” and operating multiple accounts under false pretenses, and Reddit doesn’t allow coordinated disinformation or harmful impersonation.

Political ads

Political advertising is the most divisive issue among social networks, with little agreement on whether such ads should be allowed at all, let alone how they should be moderated.

Facebook represents one end of the spectrum. Unlike most of its peers, the company does not ban or even fact-check political ads on its platform, arguing that mature democracies with a free press should be the ones to hold politicians accountable for what they say. The only exception is a rule against advertising that explicitly tries to suppress voting.

On the other end, there’s Twitter, TikTok, and Pinterest, which have banned political ads outright. “We have made this decision based on our belief that political message reach should be earned, not bought,” Twitter’s policy says.

Other sites fall somewhere in between, allowing political ads with varying degrees of moderation. YouTube, as part of a broader policy at Google, says false political ads aren’t allowed, but it will take action only in rare cases because it can’t adjudicate every claim and insinuation. Snap and Reddit are more aggressive, subjecting every ad to human review and prohibiting false or misleading claims.

Voter suppression (and engagement)

Next to the knottier issue of how to handle lies, social networks’ policies against voter suppression are relatively clear-cut.

Facebook, for instance, prohibits users from misrepresenting details on how to vote or threatening violence against voters. It also bans advertising that tells people not to vote or suggests that voting is pointless.

Twitter also treats voter suppression and intimidation as violations of its election integrity policy and allows users to report offending tweets. (That same policy explicitly says that false or misleading claims about candidates or political parties don’t count as violations.) Guidelines from YouTube, TikTok, and Pinterest all prohibit spreading falsehoods about the voting process as well.

Snap and Reddit don’t have specific policies against suppressing the vote, but both companies say other existing rules could cover this behavior, such as Snap’s policies against misinformation and Reddit’s rules against impersonation.

Transparency

In addition to taking more direct action against election meddling, most social networks say they’re trying to be more transparent about what happens on their platforms.

Facebook now requires certain Pages to post information about where they’re located and who’s running them. It has also started labeling state-run media outlets on both Facebook and Instagram, and it has added more details and tools to its ad library. (The company did, however, lobby against a law that would have required some of this transparency, and some researchers have been frustrated by limits on the data Facebook provides about misinformation, which the company attributes to privacy concerns.)

Google and Snapchat provide political ad databases as well, and while Twitter doesn’t allow political ads on its platform, it does host a database of information on state-backed activity.

In addition, Reddit’s security teams regularly share their findings with the r/redditsecurity subreddit, where users can comment and ask questions. A couple of months ago, for instance, the team noted that a post containing leaked documents from the United Kingdom likely originated from Russia.

Implicit in all these claims of transparency is one unavoidable truth: No matter how much these companies say they’re doing to uphold election integrity in 2020, they’ll probably have mistakes to answer for after the ballots are cast.


This story is part of our Hacking Democracy series, which examines the ways in which technology is eroding our elections and democratic institutions—and what’s been done to fix them. Read more here.
