Meta’s plan to fight AI-generated misinformation and deepfakes is too little, too late

As the 2024 elections near, Meta still lacks the technology to detect and label deepfakes at scale.

[Photo: Meta]

BY Mark Sullivan | 3 minute read

Meta announced on Tuesday it’s taking steps to label AI-generated content, including misinformation and deepfakes, on its Facebook, Instagram, and Threads social platforms. But its mitigation strategy has some major holes, and it’s arriving long after the threat of deepfakes has become real.

Meta said that “in the coming months” (when we’re in the thick of the 2024 presidential election), it will be able to detect AI-generated content on its platforms created by tools from the likes of Adobe and Microsoft. It’ll rely on the toolmakers to inject encrypted metadata into AI-generated content, according to the specifications of an industry standards body. Meta points out that it’s always added “visible markers, invisible watermarks, and metadata” to identify, and label, images generated by its own AI tools. 

But those labeling tools are for the good guys; bad actors who spread AI-generated mis/disinformation use lesser-known, open-source tools that create content that’s hard to trace back to the tool or the creator. Or they may choose tools that make it easy to disable the addition of metadata or watermarks.

There’s little evidence that Meta has the technology to detect and label that kind of content at scale. The company says it’s “working hard” to develop classifier AI models that can detect AI-generated content lacking watermarks or metadata. It also says it isn’t yet able to detect AI-generated videos or audio recordings. Instead, Meta says it’s relying on users to label “photorealistic video or realistic-sounding audio that was digitally created or altered” when they post it, and says it may “apply penalties” to those who don’t.

“I think this is a necessary thing that is probably occurring at least a year too late,” says Ben Decker, CEO and founder of Memetica, which works with organizations being targeted by harassment, disinformation, or extremism. “We’ve seen this before when an emerging tech problem goes unaddressed on social media. How many years does it take to develop a response? And what are their real intentions? Is it a PR motivation, or is it a real platform integrity motivation?”

Defending against harmful AI-generated content will likely be an ongoing process of platforms trying to keep up with the latest tactics of misinformation spreaders, Decker says. “Trolls have always been at the forefront of platform exploitation and abuse, and human ingenuity will outpace the machines nine times out of ten,” he says. “We need to be thinking about how we limit the odds of them winning.” Part of the answer is more red-teaming and more research, he adds.

In a blog post, Meta’s president of global affairs, Nick Clegg, portrays the problem of AI-generated mis/disinformation as an industry problem, a society-wide problem, and a problem of the future. “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.” But Meta controls, by far, the biggest distribution network for such content now, and the need to detect and label deepfakes is now, not a few months from now. Just ask Joe Biden or Taylor Swift. It’s too late to be talking about future plans and approaches when another high-stakes election cycle is already upon us.

“Meta has been a pioneer in AI development for more than a decade,” Clegg says. “We know that progress and responsibility can and must go hand in hand.”

Meta has been developing its own generative AI tools for years now. Can the company really say that it has devoted equal time, resources, and brain power to mitigating the disinformation risk of the technology?

Since its Facebook days more than a decade ago, the company has played a profound role in blurring the lines between truth and misinformation, and now it appears to be slow-walking its response to the next major threat to truth, trust, and authenticity.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
