Google has announced the release of a data set containing 3,000 deepfake videos it created. Google hopes the release will allow researchers to develop ways to combat malicious deepfakes, giving them, news organizations, and the public ways to identify “synthetic” videos—that is, videos manipulated by or entirely created with computers.
Deepfakes first came onto the scene in 2017, when they were mainly used in crude, falsified porn videos. The technology allowed people with limited computer skills to easily paste the face of, for example, an actress onto a porn star’s body.
But since then, the quality of deepfake videos has rapidly advanced. In fact, it’s now possible to deepfake entire bodies, not just an individual’s head. While deepfake technology has legitimate commercial purposes, many fear it will also usher in a new era of fake news and propaganda in which no one can tell whether the events in a video actually happened.
To keep that from becoming a reality, news organizations, social media networks, and law enforcement will need ways to easily identify deepfake videos. The trove of data Google released yesterday is a first step toward doing just that.