
Here’s How Facebook Uses AI To Detect Many Kinds Of Bad Content

The company uses AI to identify objectionable content in seven areas: nudity, graphic violence, terrorism, hate speech, spam, fake accounts, and suicide prevention.

Mark Zuckerberg during his keynote speech at F8 [Photo: courtesy of Facebook]

When Mark Zuckerberg testified in Washington, D.C., last month, he repeatedly told members of Congress that Facebook planned on using artificial intelligence to help solve some of its thorniest privacy and security issues.


Today at F8, its annual developer conference—with the Cambridge Analytica scandal slowly receding from view—Facebook for the first time revealed a wide range of ways it’s using AI to keep hate speech, fake accounts, terrorist propaganda, graphic violence, and other objectionable content off its platforms.

The disclosure follows the company’s publication late last month of its internal guidelines for enforcing community standards.

“With this announcement,” Facebook vice president of product development Guy Rosen tells Fast Company, “for the first time, we’re sharing specifics about how we’re using [AI] to proactively enforce community standards.”

Rosen adds that Zuckerberg’s comments in Washington and elsewhere in recent weeks are “why we’re opening up and being transparent” about AI initiatives that, in some cases, have been in the works for years and, in others, won’t bear meaningful fruit for some time to come.

At the core of the efforts, Rosen says, is the 2.2 billion-strong Facebook community submitting tens of millions of reports a week about potentially objectionable content. Those reports then become part of the larger dataset that Facebook uses to train its AI systems to automatically detect such content.

People can report anything to Facebook, and if a manual review determines that the reported content isn’t allowed, it’s taken down. “The objective” with the AI systems, Rosen says, “is how to automate the process so we can get to content faster, and get to more content. It’s about learning by examples. And the most important thing is to have more examples, to teach the system.”
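To make Rosen’s “learning by examples” point concrete, here is a minimal sketch of how a text classifier might be trained on content that human reviewers have already labeled. The dataset, model, and policy below are illustrative assumptions for this article, not details Facebook has disclosed about its own systems.

```python
# Minimal sketch: train a classifier on content that human reviewers
# have already labeled as violating or benign, then score new content
# proactively. The examples, labels, and model are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples drawn from past manual reviews
reports = [
    ("buy followers cheap, click this link now", 1),        # violating (spam)
    ("congrats on the new job!", 0),                        # benign
    ("limited offer!!! send payment to claim your prize", 1),
    ("see everyone at the reunion next week", 0),
]
texts, labels = zip(*reports)

# TF-IDF features plus logistic regression: a simple stand-in for the
# far larger neural models a platform like Facebook would actually use
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that a new, unreported post violates the policy
print(model.predict_proba(["click here for free followers"])[0][1])
```

The more labeled examples the system sees, the better it gets at catching similar content before anyone reports it, which is the “more examples” point Rosen emphasizes.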


Facebook breaks down the kind of content it’s using AI to proactively detect into seven categories: nudity, graphic violence, terrorism, hate speech, spam, fake accounts, and suicide prevention.

How the AI was trained to hunt for fake accounts

One of the most important efforts, given the way Facebook’s platform was exploited during the 2016 presidential election, is identifying and shutting down fake accounts. Rosen says the company uses AI to identify and block millions of such accounts every day, usually at the point of creation and before the people behind them are able to use them for any kind of harm. Last month, Rosen adds, the company deployed a new AI technique that hunts for fake accounts tied to various financial scams; he says the systems have already taken down more than half a million scam accounts.

To do so, Rosen says Facebook trained the AI systems to look for the kinds of signals that would indicate illegitimacy: an account reaching out to many more other accounts than usual; a large volume of activity that seems automated; and activity that doesn’t seem to originate from the geographic area associated with the account.
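As a rough illustration of how such signals might be combined, the sketch below scores an account on the three behaviors Rosen lists. The field names, weights, and thresholds are hypothetical, chosen only to show the shape of a simple rule-based scorer rather than Facebook’s actual models.

```python
# Illustrative sketch of combining fake-account signals into one risk score.
# All names, weights, and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    friend_requests_per_day: float   # outreach volume
    actions_per_minute: float        # automation-like activity rate
    declared_country: str
    observed_login_country: str

def fake_account_risk(a: AccountActivity) -> float:
    score = 0.0
    if a.friend_requests_per_day > 100:                       # reaching out to far more accounts than usual
        score += 0.4
    if a.actions_per_minute > 30:                             # activity too fast to be human
        score += 0.4
    if a.declared_country != a.observed_login_country:        # geographic mismatch
        score += 0.2
    return score

suspect = AccountActivity(250, 45, "US", "XX")
if fake_account_risk(suspect) >= 0.7:                         # hypothetical blocking threshold
    print("block at creation or send to review")
```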

As well, Facebook is focusing its account-blocking efforts on identifying accounts that look like they will be used for nefarious purposes in advance of upcoming elections. “A lot of this work around fake accounts,” Rosen says, “is a big part of our focus on election security.”

[Screenshot: Facebook]

Hate speech requires manual and automated efforts

According to Rosen, hate speech presents Facebook with a particular challenge, one that requires the combined efforts of AI and the company’s community standards team.


That’s because context is key to understanding whether something is actually hate speech or something more nuanced. The current system therefore pairs AI that automatically flags potential hate speech with follow-up manual review, since the AI might flag something it thinks is a slur but actually isn’t. Rosen points out that some terms are considered slurs in the United States but not in other countries, and that some slurs are used self-referentially. Those kinds of posts would be allowed, he says.

In other areas, such as nudity or graphic violence, Facebook’s AI systems depend on computer vision and a confidence score to determine whether or not to remove content. If the confidence is high, the content is removed automatically; if it’s low, the system calls for manual review.
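In pseudocode terms, that routing logic looks something like the sketch below. The threshold values are assumptions for illustration, since Facebook hasn’t published the confidence levels it uses.

```python
# Confidence-based routing: a vision model returns a confidence that content
# violates a policy, and thresholds decide what happens next.
# The threshold numbers are illustrative assumptions.
AUTO_REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def route(violation_confidence: float) -> str:
    if violation_confidence >= AUTO_REMOVE_THRESHOLD:
        return "remove automatically"
    if violation_confidence >= REVIEW_THRESHOLD:
        return "queue for manual review"
    return "leave up"

for conf in (0.98, 0.75, 0.20):
    print(conf, "->", route(conf))
```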

Facebook is also proud of the way its AI systems have been able to remove most terrorist content. It claims that 99% of ISIS or Al-Qaeda content is removed even before being reported by users.

The company also relies heavily on AI to proactively detect people expressing suicidal ideation, in some cases alerting first responders to the situations.

Of course, while Facebook has faith in its AI technology, it wants people to know that the systems aren’t a panacea and that some of the efforts may still be years from working as designed. One example: because the systems need to be seeded with huge amounts of training data, content in less common languages remains much harder to handle.

“A lot of this is still years away from being effective for many community standards violations,” Rosen says, “so that’s why we need to keep on investing.”


About the author

Daniel Terdiman is a San Francisco-based technology journalist with nearly 20 years of experience. A veteran of CNET and VentureBeat, Daniel has also written for Wired, The New York Times, Time, and many other publications.
