
Adobe’s Dana Rao doesn’t want you to get duped by AI

The Adobe-led Content Authenticity Initiative will help people tell what’s real from what’s fake.

[Photo: courtesy Adobe; Vishnu Mohanan/Unsplash; Google DeepMind/Unsplash; Markus Spiske/Unsplash]

By Jared Newman

In trying to get other companies to help debunk deepfakes, Dana Rao got an unexpected assist from Pope Francis.

A fake image of the pope in a white puffer jacket, generated with the AI artwork program Midjourney, went viral in March and fooled a lot of people. Benign as it was, the image was a wake-up call, says Rao, who has served as Adobe’s general counsel for the past five years, and it gave a boost to the Content Authenticity Initiative, an Adobe-led group that wants to bring transparency to AI-generated artwork.

“We really saw membership spike because people realized the intensity of the problem we were trying to solve,” Rao says. “The pope jacket was a catalyzing event.”

The Content Authenticity Initiative’s goal is to give online images a kind of nutrition label. Images that carry the credentials will display a badge that viewers can click to pull up information about the image’s origins. Adobe and others will even host pages with more detail, including a way to compare images with their original, undoctored versions.
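Under the hood, content credentials follow the open C2PA standard that Adobe co-founded: a cryptographically signed manifest is embedded in the image file itself, and in JPEGs it travels inside APP11 marker segments as JUMBF boxes labeled “c2pa.” As a rough illustration only, here is a minimal Python sketch that scans a JPEG for such a segment; the heuristic and the function name are ours, and real verification (checking signatures, recovering edit history) is handled by tools like the initiative’s open-source c2patool.

```python
import struct
import sys

def has_content_credentials(path):
    """Heuristically check whether a JPEG carries an embedded C2PA manifest.

    Content credentials are stored as JUMBF boxes inside APP11 (0xFFEB)
    marker segments; this scan just looks for the "c2pa" label in those
    segments rather than parsing or validating the manifest itself.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # every JPEG starts with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:      # lost sync with the marker structure
            break
        marker = data[i + 1]
        if marker == 0xDA:       # start-of-scan: image data follows, stop here
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]  # includes its own 2 bytes
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 carrying a C2PA JUMBF box
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_content_credentials(sys.argv[1]))
```

Because the check is heuristic, a hit only means a manifest appears to be present; it says nothing about whether the credentials are intact or trustworthy.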

Rao and Scott Belsky, who is now Adobe’s chief strategy officer, hatched the idea in 2019 after teasing some AI features at the company’s annual creativity conference. Even before the generative AI boom, they realized the tech would open a Pandora’s box of misinformation. Human labeling of AI content would be too slow, they knew, and using technology to detect deepfakes would never be accurate enough to trust. What Rao settled on was a way for users to see for themselves.

“People don’t believe institutions anymore, and so our solution with content credentials is basically saying, ‘Here’s what happened, you decide,’” he says.

The Content Authenticity Initiative isn’t the only way Adobe is trying to hash out AI’s ethical issues. The company trained its Firefly generative AI models on licensed and public-domain content to avoid potential copyright infringement, and it has even offered to compensate companies if they get sued for using the features.

But when it comes to deepfakes, Adobe can’t do it alone. For the past four years, Rao has been in charge of bringing more companies on board with the Content Authenticity Initiative, including camera makers, news organizations, and competing software makers.

It’s a work in progress. The group is still in talks with social networks, which must decide how to integrate informational badges—or whether to use entirely different solutions—and a lot of public education will be necessary for people to understand what an AI image nutrition label even is. But with the next U.S. presidential election just over a year away, Rao sees another catalyzing event on the horizon, noting, “We don’t want elections being decided by viral deepfakes.”


This story is part of AI 20, our monthlong series of profiles spotlighting the most influential people building, designing, regulating, and litigating AI today.


ABOUT THE AUTHOR

Jared Newman covers apps and technology from his remote Cincinnati outpost. He also writes two newsletters, Cord Cutter Weekly and Advisorator.

