
Facebook’s latest transparency move is showing you how much objectionable content it removes

In an 81-page report, Facebook detailed how prevalent community standards violations are across six key content areas.

[Photo: Tim Bennett/Unsplash]

In the latest stop on its post-Cambridge Analytica transparency tour, Facebook today unveiled its first-ever Community Standards Enforcement Report, an 81-page tome that spells out how much objectionable content is removed from the site in six key areas.


Notwithstanding the major privacy and security issues surrounding the 2016 presidential election and the problem of fake news on the platform, the report suggests that Facebook is doing a fairly good job of using its automated systems and human reviewers to keep the vast majority (often well over 90%) of hate speech, pornography, terrorist propaganda, fake accounts, spam, and graphic violence off its site.

“We’re sharing these because we think we need to be accountable,” vice president of product management Guy Rosen said during a press briefing on the new report. “Accountable to the community.”

[Image: courtesy of Facebook]
That’s not to say, of course, that such content never shows up–just that, at scale, Facebook is able to remove most of it, often before its 2.2 billion users ever see it.

For example, the report notes that during the period it covers–the fourth quarter of 2017 and the first quarter of 2018–Facebook removed 21 million pieces of content depicting adult nudity or sexual content, 95.8% of it before any user ever reported it.

The release of the report–the first time the company has ever made such data public–comes on the heels of a series of other first-ever efforts at transparency following the Cambridge Analytica scandal, Facebook’s subsequent apologies, and Mark Zuckerberg’s many hours of testimony on Capitol Hill.

In recent weeks, Facebook has shared the content areas on which it focuses its community standards efforts, its appeals process for those who think its enforcement decisions are faulty, and the list of content types its artificial intelligence systems can automatically detect.


The report issued today would seem to bookend Facebook’s efforts to show the world it’s working hard to keep its site as clear as possible of the kind of content it considers unsuitable for its community.

It also explains some of the reasons for large swings in the number of violations found between Q4 and Q1, which usually come down to external factors or to advances in the technology used to detect objectionable content.

[Image: courtesy of Facebook]
“We aim to reduce violations to the point that our community doesn’t regularly experience them,” Rosen and vice president of data analytics Alex Schultz write in the report. “We use technology, combined with people on our teams, to detect and act on as much violating content as possible before users see and report it. The rate at which we can do this is high for some violations, meaning we find and flag most content before users do. This is especially true where we’ve been able to build artificial intelligence technology that automatically identifies content that might violate our standards.”

Of course, the authors note, while such AI systems are promising, it will take years before they are effective at removing all objectionable content.

And backing up the company’s AI tools are thousands of human reviewers who manually pore over flagged content, trying to determine if it violates Facebook’s community standards.

Over the last year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000, but for the most part it has not provided details on the hiring plan, including how many will be full-time Facebook employees and how many will be contractors.


During the press call, Schultz noted that it will be a mix of full-timers and contractors spread across 16 locations around the world. Rosen added that the reviewers will speak 50 languages so they can understand as much context as possible, since in many cases context is everything in determining whether something is, say, a racial epithet aimed at someone or a self-referential comment.

Here are the top-line numbers for each of the six types of content (a short sketch after the list shows how the quarter-over-quarter changes work out):

  • Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183% from 1.2 million during Q4. “This increase is mostly due to improvements in our detection technology,” the report notes. During Q1, Facebook found and flagged 85.6% of such content it took action on before users reported it, up from 71.6% in Q4.
  • Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4. In total, it took action on 21 million pieces of content in Q1, about the same as in Q4. The company found and flagged 95.8% of such content before users reported it.
  • Terrorist propaganda (ISIS, Al Qaeda, and affiliates): Facebook says it took action on 1.9 million pieces of such content, and found and flagged 99.5% of such content before anyone reported it.
  • Hate speech: In Q1, the company took action on 2.5 million pieces of such content, up about 56% from 1.6 million during Q4. During the first quarter, Facebook found and flagged just 38% of such content before it was reported, by far the lowest of the six content types. “Hate speech content often requires detailed scrutiny by our trained reviewers to understand context,” explains the report, “and decide whether the material violates standards, so we tend to find and flag less of it.”
  • Spam: Facebook says it took action on 837 million pieces of spam content in Q1, up 15% from 727 million in Q4. It says it found and flagged nearly 100% of spam content in both Q1 and Q4.
  • Fake accounts: The company says it disabled between 3% and 4% of monthly active users during Q1 and Q4. In Q1, it disabled 583 million fake accounts, down 16% from 694 million a quarter earlier. “Our metrics can vary widely for fake accounts acted on,” the report notes, “driven by new cyberattacks and the variability of our detection technology’s ability to find and flag them.” During Q1, Facebook found and flagged 98.5% of fake accounts it ultimately took action against before any user reported them.
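For readers who want to check the quarter-over-quarter swings cited above, here is a minimal Python sketch; it is purely illustrative (not Facebook's code) and simply recomputes the percentage changes from the figures reported in this article.

```python
# Illustrative sketch: recompute the Q4 2017 -> Q1 2018 percentage changes
# from the action counts reported above (in millions of pieces of content
# or accounts; these are the article's figures, not raw Facebook data).

def pct_change(q4: float, q1: float) -> float:
    """Change from Q4 to Q1, expressed as a percentage of the Q4 figure."""
    return (q1 - q4) / q4 * 100

reported = {
    "graphic violence": (1.2, 3.4),
    "hate speech": (1.6, 2.5),
    "spam": (727, 837),
    "fake accounts": (694, 583),
}

for content_type, (q4, q1) in reported.items():
    print(f"{content_type}: {pct_change(q4, q1):+.0f}%")

# Output: graphic violence +183%, hate speech +56%, spam +15%,
# fake accounts -16%, matching the changes cited in the list above.
```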

About the author

Daniel Terdiman is a San Francisco-based technology journalist with nearly 20 years of experience. A veteran of CNET and VentureBeat, Daniel has also written for Wired, The New York Times, Time, and many other publications.
