
Adobe has an ambitious plan to help the public spot fake images

The world’s most popular computer graphics company wants to use AI to detect manipulated images and video. Is it too little, too late?

[Image: Adobe Research/UC Berkeley]

As powerful visual AI makes its way across the internet, it’s getting easier and easier to manipulate images, audio, and video, which poses a serious problem for society. Adobe, which had a hand in popularizing some of the earliest image-editing technology, wants to fight back by harnessing artificial intelligence to detect manipulated images of people. The company’s researchers are working with UC Berkeley scientists and DARPA, the Pentagon’s advanced technology research arm, to develop software capable of flagging, analyzing, and even reversing facial manipulation in photographs. And the group wants to make the program available to everyone.

In a new research paper, the scientists describe how most malicious photo editing happens in widely available tools like Adobe Photoshop, especially with the software’s warping tool, known as “Liquify.” With Liquify, any user can deform specific areas of a face to totally change a person’s expression, turning happiness into sadness or a serene gaze into a menacing stare. Adobe’s new forensic tool focuses on this effect, detecting pixels that have been deformed, even when the change is invisible to the human eye. The software uses a convolutional neural network, a form of artificial intelligence inspired by the way an animal’s visual cortex works. It’s the same basic technology that lets deepfake tools swap faces onto different bodies, make people say things they never said, and make videos of people doing things they never did.
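To make that idea concrete, here is a toy sketch in PyTorch of the general shape of such a system: a small convolutional network that takes a face crop and predicts a two-channel, per-pixel warp field, which could in principle be negated to resample the image and undo the deformation. The architecture, layer sizes, and names below are illustrative assumptions, not Adobe’s actual model.

```python
# Toy sketch of a warp-field predictor. Everything here (class name,
# layer counts, channel widths) is an illustrative assumption, not
# Adobe's published architecture.
import torch
import torch.nn as nn

class WarpDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: shrink the image while extracting the local texture
        # cues (resampling artifacts) that warping tools leave behind.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to full resolution and emit a
        # 2-channel flow field (dx, dy per pixel). "Reversing" the edit
        # would mean resampling the image along the negated flow.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = WarpDetector()
face = torch.rand(1, 3, 256, 256)   # stand-in for a 256x256 face crop
flow = model(face)                  # (1, 2, 256, 256) predicted warp field
print(flow.shape)
```

In practice such a network would be trained on pairs of original and warped faces, so the ground-truth flow is known exactly, which is one reason face-warping edits are a tractable first target for forensics.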

In a blog post, the California digital graphics company says that its digital manipulation detection program is still “in its early stages.” Adobe’s efforts actually date back to 2018, so progress has clearly been slow. The company also says the new detection feature is just the beginning of a battery of public forensic tools that will use AI to flag everything from manually edited photos to full deepfake videos.

[Image: Adobe Research/UC Berkeley]

They’d better hurry, because warping an image with Liquify is child’s play compared to what’s happening with AI now. Experts say they are outgunned when it comes to detecting AI-generated manipulation in particular. There’s a real chance that image and sound manipulation could become so seamless that it will eventually be impossible to detect, even with other AI. In that case, as some experts have suggested, every real image and video may need to be distributed with some form of watermark or trust certificate, probably anchored in a blockchain, so that any media file without a certificate simply couldn’t be trusted. It’s a solution, but given blockchain’s energy demands, perhaps not an optimal one.
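As a rough illustration of what such a certificate scheme might involve, the sketch below hashes a media file’s bytes into a fingerprint that could be recorded in a tamper-evident ledger at capture time; verification later just recomputes and compares the digest. The file name is hypothetical, and a real system (blockchain-backed or not) would also need signing, metadata, and robustness to benign re-encoding.

```python
# Minimal sketch of a content fingerprint for a "trust certificate"
# scheme. The file name "photo.jpg" is hypothetical.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of the file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

original = fingerprint("photo.jpg")   # recorded at capture time
# ... the file circulates and may be edited along the way ...
if fingerprint("photo.jpg") != original:
    print("File no longer matches its certificate")
```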


But, as Adobe says in its announcement, “the journey of democratizing image forensics is just beginning.” May it arrive soon, because we’re going to need it.