
Fearing 2020 ‘deepfakes,’ Facebook will launch industry AI ‘challenge’

After hosting a massive Russian disinformation campaign in 2016, the company is hoping to keep video disinformation off its platform during the coming election cycle.

[Photos: Malik Earnest/Unsplash; Joseph Gonzalez/Unsplash; Ludvig Wiese/Unsplash]

Facebook wants to be ready for a deepfake outbreak on its social network. So the company has started an industry group to foster the development of new detection tools to spot the fraudulent videos.


A deepfake video presents a realistic, AI-generated likeness of a real person saying or doing fictional things. Perhaps the most famous such video to date portrayed Barack Obama calling Donald Trump a “dipshit.”

Facebook is creating a “Deepfake Detection Challenge,” which will offer grants and awards in excess of $10 million to people developing promising detection tools. The social network is teaming up with Microsoft and the Partnership on AI (which includes Amazon, Google, DeepMind, and IBM), as well as academics from MIT, Oxford, Cornell Tech, UC Berkeley, and others on the effort. The tech companies will contribute cash and technology and will help with judging detection tools, a Facebook spokesperson told me.

Importantly, the group will create a benchmark tool that people developing deepfake detection tools can use to measure the effectiveness of their technology: each tool gets an accuracy score, and the best scores will be ranked on a leaderboard. Facebook also says it will hire actors to create “thousands” of deepfake videos, which will serve as the test material for detection tools.
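To make the idea of an accuracy-scored leaderboard concrete, here is a minimal sketch of how a benchmark might grade a submission's predictions against labeled test videos. The function name, threshold, and data are invented for illustration; the article does not describe how Facebook's scoring actually works.

```python
# Hypothetical illustration only: how a detection benchmark might score a submission.
# The real challenge's scoring code and metrics are not described in this article.

def score_submission(predictions, labels, threshold=0.5):
    """Return simple accuracy for a set of deepfake predictions.

    predictions: floats in [0, 1], the model's probability that each video is a deepfake.
    labels:      ints, 1 = deepfake, 0 = authentic.
    """
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(
        (p >= threshold) == bool(y) for p, y in zip(predictions, labels)
    )
    return correct / len(labels)


# Example: three test videos, two classified correctly -> accuracy of about 0.67
print(score_submission([0.9, 0.2, 0.4], [1, 0, 1]))
```

A real benchmark would likely use a larger held-out video set and a more nuanced metric than raw accuracy, but the principle is the same: every entrant is scored against the same labeled test material, so results are directly comparable on a leaderboard.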

The group plans to start the challenge and release the benchmark and video data set at the Conference on Neural Information Processing Systems (NeurIPS) in December.

Facebook may be a little more wary of disinformation on its platform than other companies. In 2016, it unwittingly played host to thousands of fake news items and events carefully targeted to persuade voters to support Donald Trump or stay home on Election Day. Facebook at first denied that Russian interference had played a meaningful role in the election, then was slow to react.

Deepfakes present a potentially calamitous form of disinformation. A convincing deepfake of a candidate saying outrageous things could, if it went undetected, spread quickly on Facebook and influence the votes of vast numbers of people.


“I’m trying to get out ahead of this problem before it becomes a big problem on our platform,” Facebook CTO Mike Schroepfer said on a conference call with reporters Wednesday.

Schroepfer said that he is focused on the technology side of the problem, while others at Facebook are trying to figure out how to adapt the social network’s disinformation policies to address deepfakes. The policy must determine whether all deepfakes should be removed, or only those that spread political disinformation.

Schroepfer said the creation of deepfakes is getting easier and cheaper, while the development of detection tools lags behind. As better detection tools are developed and deployed, it may become harder and more expensive to create deepfakes that can evade detection.

He added that Facebook employees may be among those who accept the challenge but that they won’t be eligible for grants or prizes.

“This is a constantly evolving problem, much like spam or other adversarial challenges,” Schroepfer wrote in a blog post, “and our hope is that by helping the industry and AI community come together, we can make faster progress.”
