In the photo, Beyoncé looks beatific, with a closed-lip Mona Lisa smile. But it’s easy enough to give her a toothy grin. Just dial up her “Happiness” to the maximum level using Adobe Photoshop’s Smart Portrait tool, and her face gets a Cheshire cat-like smile, white teeth appearing out of thin air.
Smart Portrait, released in beta last year, is one of Adobe’s AI-powered “neural filters,” which can age faces, change expressions, and alter the background of a photo so it appears to have been taken at a different time of year. These tools may seem innocuous, but they provide increasingly powerful ways to manipulate photos in an era when altered media spreads across social media in dangerous ways.
For Adobe, this is both a big business and a big liability. The company—which brought in $12.9 billion in 2020, with more than $7.8 billion tied to Creative Cloud products aimed at helping creators design, edit, and customize images and video—is committed to offering users the latest technologies, which keeps Adobe ahead of its competition. This includes both neural filters and older AI-powered tools, such as 2015’s Face-Aware Liquify, which lets people manually alter someone’s face.
Adobe executives are aware of the perils of such products at a time when fake information spreads on Twitter six times faster than the truth. But instead of limiting the development of its tools, Adobe is focused on the other side of the equation: giving people the ability to verify where images were taken and see how they’ve been edited. Step one: a new Photoshop tool and website that offer unprecedented transparency into how images are manipulated.
Adobe has been exploring the edge of acceptable media editing for a while now. During the company’s annual Max conference in 2016, it offered a sneak peek of a tool that allowed users to change words in a voice-over simply by typing new ones. It was a thrilling—and terrifying—demonstration of how artificial intelligence could literally put words into someone’s mouth. A backlash erupted around how it might embolden deepfakes, and the company shelved the tool.
Two years later, when Adobe again used Max to preview cutting-edge AI technologies—including a feature that turns still photos into videos and a host of tools for video editing—Dana Rao, its new general counsel, was watching closely. After the presentation, he sought out chief product officer Scott Belsky to discuss the repercussions of releasing these capabilities into the world. They decided to take action.
Rao, who now leads the company’s AI ethics committee, teamed up with Gavin Miller, the head of Adobe Research, to find a technical solution. Initially, they pursued ways to identify when one of Adobe’s AI tools had been used on an image, but they soon realized that these kinds of detection algorithms would never be able to catch up with the latest manipulation technologies. Instead, they sought out a way to show when and where images were taken—and turn editing history into metadata that could be attached to images.
The result is the new Content Credentials feature, which went into public beta this October. Users can turn on the feature to embed identification information in their images, along with a simplified record of edits noting which of the company’s tools have been used. Once an image is exported out of Photoshop, it retains this metadata, all of which can be viewed by anyone online through a new Adobe website called Verify. Simply upload any JPEG, and if it’s been edited with Content Credentials turned on, Verify will show you its metadata and editing history, as well as before-and-after images.
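The core idea can be illustrated in a few lines of code. The sketch below is purely hypothetical and does not reflect Adobe’s or C2PA’s actual formats or APIs; it only shows the general mechanism behind content provenance: bind a record of edits to an image’s bytes with a cryptographic signature, so that any later change to either the image or its stated history becomes detectable.

```python
# Hypothetical sketch of content provenance, NOT Adobe's real implementation.
# A manifest records who made the image and what edits were applied, tied to a
# hash of the image bytes and protected by a signature.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # a real system would use asymmetric signatures


def make_manifest(image_bytes, author, edits):
    """Build a provenance record binding an edit history to the image bytes."""
    manifest = {
        "author": author,
        "edits": edits,  # e.g. ["crop", "smart_portrait"]
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(image_bytes, manifest):
    """Return True only if neither the image nor its edit history was altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())


image = b"\xff\xd8...stand-in for JPEG bytes..."
record = make_manifest(image, "photographer@example.com", ["crop", "smart_portrait"])
print(verify_manifest(image, record))              # True: image and history intact
print(verify_manifest(image + b"tamper", record))  # False: image was altered
```

A site like Verify plays the role of `verify_manifest` here: given an uploaded file and its attached metadata, it can confirm the history is genuine without having to judge whether the edits themselves are honest.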
Content Credentials is part of a larger effort by both tech and media companies to combat the spread of fake information by providing more transparency around where images come from online. An industry consortium called the Coalition for Content Provenance and Authenticity (C2PA), which includes Adobe, Microsoft, Twitter, and the BBC, recently created a set of standards for how to establish content authenticity, which are reflected in Adobe’s new tool. Members of the group are also backing a bill in the U.S. Senate that would create a Deepfake Task Force under the purview of the Secretary of Homeland Security.
But while Adobe has thrown its weight behind this fledgling ecosystem of companies championing image provenance technologies, it also continues to release features that make it increasingly easy to alter reality. It’s the accessibility of such tools that troubles researchers. “Until recently . . . you needed to be someone like Steven Spielberg” to make convincing fake media, says University of Michigan assistant professor Andrew Owens, who has collaborated with Adobe on trying to detect fake images. “What’s most worrisome about recent advances in computer vision is that they’re commoditizing the process.”
For content provenance technologies to become widely accepted, they need buy-in from camera-app makers, editing-software companies, and social media platforms. For Hany Farid, a professor at the University of California, Berkeley, who has studied image manipulation for two decades, Adobe and its partners have taken the first steps, but now it’s up to platforms like Facebook to prioritize content that has C2PA-standardized metadata attached.
“You don’t want to get in the business of [saying] ‘This is true or false,’” Farid says. “The best [Adobe] can do is to arm people—the average citizen, investigators—with information. And we use that as a launchpad for what comes next: to regain some trust online.”
Three other efforts to authenticate images before they’re released into the wild
Content provenance company Truepic recently announced an SDK that will allow any app with a camera to embed verified metadata in its images and videos.
Starling Lab, a project between Stanford and the USC Shoah Foundation, uses cryptography and decentralized web protocols to capture, store, and verify images and video.
Microsoft and the BBC joined forces in 2020 on Project Origin, an effort to help people understand whether images and videos have been manipulated.