When a deepfake algorithm is trained on face images of a person, it depends on the photos of that person available on the internet to use as training data. Even for people who are photographed often, few images online show their eyes closed. Not only are such photos rare, because people's eyes are open most of the time, but photographers don't usually publish images in which the main subject's eyes are shut.
Without training images of people blinking, deepfake algorithms are less likely to create faces that blink normally. Our research uses machine learning to examine eye opening and closing in videos. When we calculated the overall rate of blinking and compared it with the natural range, we found that characters in deepfake videos blink far less frequently than real people do.
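The blink-rate comparison can be sketched in a few lines. This is a minimal illustration, not the authors' actual system: it assumes a per-frame sequence of open/closed eye labels has already been produced by some eye-state classifier, and the frame counts in the example are made up for demonstration.

```python
def blink_rate(eye_open, fps):
    """Count blinks (open -> closed transitions) in a per-frame
    sequence of eye states and return blinks per minute."""
    # A blink starts whenever an open eye becomes closed.
    blinks = sum(
        1 for prev, cur in zip(eye_open, eye_open[1:])
        if prev and not cur
    )
    minutes = len(eye_open) / fps / 60
    return blinks / minutes if minutes else 0.0

# Hypothetical 30 fps clip: eyes mostly open, with two brief closures.
frames = [True] * 50 + [False] * 3 + [True] * 100 + [False] * 4 + [True] * 43
print(blink_rate(frames, fps=30))  # 2 blinks over ~6.7 seconds -> 18 per minute
```

A rate like 18 blinks per minute falls within the natural range for an adult at rest; a deepfake's rate would come out markedly lower, which is the signal the comparison exploits.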
This gave us the inspiration for a way to detect deepfake videos: we developed a method to detect when the person in a video blinks. More specifically, it scans each frame of the video in question, detects the faces in it, and then locates the eyes automatically. It then uses another deep neural network to determine whether each detected eye is open or closed, based on the eye's appearance, geometric features, and movement.

We know that our work is taking advantage of a flaw in the sort of data available to train deepfake algorithms. To avoid falling prey to a similar flaw, we trained our system on a large library of images of both open and closed eyes. This method seems to work well: we've achieved a detection rate of over 95%.

This isn't the final word on detecting deepfakes, of course. The technology is improving rapidly, and the competition between generating and detecting fake videos is analogous to a chess game. In particular, blinking can be added to deepfake videos by including face images with closed eyes in the training data, or by using video sequences for training. People who want to confuse the public will get better at making false videos, and we and others in the technology community will need to continue to find ways to detect them.

This post originally appeared on The Conversation. Siwei Lyu is Associate Professor of Computer Science and Director of the Computer Vision and Machine Learning Lab at University at Albany, State University of New York.