
Facebook says it’s deleting 95% of hate speech before anyone sees it

The company has gotten much better about preventing hate from ever showing up. But moderators are unhappy about being pushed to return to office work.

[Photo: Mia Baker/Unsplash]

On Thursday, Facebook published its first set of numbers on how many people are exposed to hate content on its platform. But between its AI systems and its human content moderators, Facebook says it’s detecting and removing 95% of hate content before anyone sees it.


The company says that for every 10,000 content views during the third quarter, users saw 10 to 11 views of hate speech, or roughly 0.1% of all views.

“Our enforcement metrics this quarter, including how much hate speech content we found proactively and how much content we took action on, indicate that we’re making progress catching harmful content,” said Facebook’s VP of Integrity Guy Rosen during a conference call with reporters on Thursday.

In May, Facebook had said that it didn’t have enough data to properly report the prevalence of hate speech. The new information comes with the release of its Community Standards Enforcement Report for the third quarter.

During Q3, Facebook says its automated systems and human content moderators took action on:

● 22.1 million pieces of hate speech content, about 95% of which was proactively identified
● 19.2 million pieces of violent and graphic content (up from 15 million in Q2)
● 12.4 million pieces of child nudity and sexual exploitation content (up from 9.5 million in Q2)
● 3.5 million pieces of bullying and harassment content (up from 2.4 million in Q2)

On Instagram:
● 6.5 million pieces of hate speech content, about 95% of which was proactively identified (up from about 85% in Q2)
● 4.1 million pieces of violent and graphic content (up from 3.1 million in Q2)
● 1 million pieces of child nudity and sexual exploitation content (up from 481,000 in Q2)
● 2.6 million pieces of bullying and harassment content (up from 2.3 million in Q2)


Facebook has been working to improve its AI systems so they can shoulder most of the work of policing the massive amount of toxic and misleading content on its platform. The 95% proactive detection rate for hate speech it announced today, for example, is up from just 24% in late 2017.

CTO Mike Schroepfer said his company has made progress in improving the accuracy of the natural language and computer vision systems it uses to detect harmful content.

He explained during the conference call that the company normally creates and trains a natural language model offline to detect a particular kind of toxic speech, then deploys the trained model to catch that content in real time on the social network. Now Facebook is working on models that can be trained in real time, so they can quickly recognize wholly new types of toxic content as they emerge on the network.
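For readers unfamiliar with the distinction, the short sketch below illustrates the general idea in Python with scikit-learn. It is a simplified illustration of batch ("offline") training versus incremental ("online") updates to a text classifier, not Facebook's actual system, and every post and label in it is invented.

```python
# Minimal sketch, not Facebook's system: batch ("offline") vs. incremental
# ("online") training of a simple text classifier. All example posts and
# labels below are invented.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)   # stateless text featurizer

# Offline approach: train once on a fixed, labeled dataset, then deploy.
train_texts = ["an example benign post", "an example policy-violating post"]
train_labels = [0, 1]                               # 0 = benign, 1 = violating
offline_model = SGDClassifier()
offline_model.fit(vectorizer.transform(train_texts), train_labels)

# Online approach: keep updating the same model as new kinds of content
# (and new labels from moderators) arrive, batch by batch.
online_model = SGDClassifier()
for batch_texts, batch_labels in [
    (["another benign post"], [0]),
    (["a newly emerging kind of violating post"], [1]),
]:
    online_model.partial_fit(vectorizer.transform(batch_texts),
                             batch_labels, classes=[0, 1])

# Either model can then score incoming posts as they are created.
print(offline_model.predict(vectorizer.transform(["some incoming post"])))
```

The practical difference Schroepfer is pointing to is the second loop: an incrementally updated model can start reflecting a brand-new category of abuse as soon as moderators label a few examples, rather than waiting for the next full offline training run.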

Schroepfer said the real-time training is still a work in progress, but that it could dramatically improve the company’s ability to proactively detect and remove harmful content. “The idea of moving to an online detection system optimized to detect content in real time is a pretty big deal,” he said.

“It’s one of many things we have early in production that will help continue to cause improvement in all these things,” Schroepfer added. “It shows we’re nowhere close to out of ideas on how we improve these automated systems.”

Schroepfer said on a separate call Wednesday that Facebook’s AI systems still face challenges detecting toxic content in mixed-media formats such as memes. Memes are typically clever or funny combinations of text and imagery, and often the toxic message is revealed only by the combination of the two, he said.
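To see why that is hard, here is a toy sketch, again in Python with invented data and nothing Facebook has described, of the difference between a text-only classifier and one that fuses text and image features. With identical captions, a text-only model has nothing to separate the benign and toxic examples; only the fused features carry the signal.

```python
# Toy illustration of why single-modality models can miss meme-style toxicity.
# Not Facebook's code; the "image features" are made-up vectors standing in
# for whatever an image encoder would output.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

captions = ["totally innocent caption", "totally innocent caption"]  # same text
image_feats = np.array([[0.1, 0.2, 0.0],    # benign image
                        [0.9, 0.8, 0.7]])   # image that makes the meme toxic
labels = [0, 1]                             # toxic only via text + image together

X_text = HashingVectorizer(n_features=2**8).transform(captions).toarray()

# Text-only model: both examples have identical features, so it cannot
# learn to tell them apart.
text_only = LogisticRegression().fit(X_text, labels)

# Fused model: concatenating text and image features exposes the difference.
X_fused = np.hstack([X_text, image_feats])
fused = LogisticRegression().fit(X_fused, labels)
print(fused.predict(X_fused))   # can recover [0, 1]; the text-only model cannot
```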


Before the 2020 presidential election, Facebook put special content restrictions in place to protect against misinformation. Rosen said the measures will be kept in place for now. “They will be rolled back the same as they were rolled out, which is very carefully,” he said. For example, the company banned political ads in the week before and after the election, and recently announced that it would continue the ban on those ads until further notice.

The pandemic effect

Facebook says its content moderation performance took a hit earlier this year because of the disruption caused by the coronavirus, but that its content moderation workflows are returning to normal. The company uses some 15,000 contract content moderators around the world to detect and remove all kinds of harmful content, from hate speech to disinformation.

The BBC’s James Clayton reports that 200 of Facebook’s contract content moderators wrote an open letter alleging that the company is pushing them to return to the office too soon during the COVID-19 pandemic. They say the company is risking their lives by requiring them to work from an office rather than from home, and they are demanding hazard pay, employee benefits, and other concessions.

“Now, on top of work that is psychologically toxic, holding onto the job means walking into a [Covid] hot zone,” the moderators wrote. “If our work is so core to Facebook’s business that you will ask us to risk our lives in the name of Facebook’s community—and profit—are we not, in fact, the heart of your company?”

On Tuesday, Mark Zuckerberg appeared before Congress to discuss Facebook’s response to misinformation published on its platform before and after the election. Zuckerberg again called for more government involvement in the development and enforcement of content moderation and transparency standards.

Twitter CEO Jack Dorsey also participated in the hearing, much of which Republican senators used to allege that Facebook and Twitter systematically treat conservative content differently than liberal content. Today, however, two members of Congress, Raja Krishnamoorthi (D-Ill.) and Katie Porter (D-Calif.), sent a letter to Zuckerberg complaining that Facebook hasn’t done enough in the wake of the election to explicitly label as false Donald Trump’s baseless claims that the election was “stolen” from him.

About the author

Fast Company Senior Writer Mark Sullivan covers emerging technology, politics, artificial intelligence, large tech companies, and misinformation. An award-winning journalist based in San Francisco, Sullivan has written for Wired, Al Jazeera, CNN, ABC News, CNET, and many other outlets.
