
Deepfakes will continue to plague the world in 2024, impacting everyone from the Pope to Taylor Swift to everyday internet users.

Can anything be done to prevent a deepfake misinformation crisis?

[Source Photos: Public Domain]

BY Kolawole Samuel Adebayo

As the U.S. presidential election draws near, tensions around generative artificial intelligence are beginning to mount, particularly in regard to the use of deepfakes to influence voters’ preferences and behaviors.

But it’s not just the political sphere that’s up in arms. Everyone—from gig workers to celebrities—is talking about the potential harms of generative AI, questioning whether everyday users will be able to discern between AI-produced and authentic content. While generative AI offers the potential to make our world better, the technology is also being used to cause harm, from impersonating politicians, celebrities, and business leaders to influencing elections and more.

THE DEEPFAKE AND ROGUE BOT MENACE

In March 2023, a deepfake picture of the Pope in an ankle-length white puffer coat went viral. The Pope recently addressed this matter in his message for the 58th World Day of Social Communications, noting, “We need but think of the long-standing problem of disinformation in the form of fake news, which today can employ ‘deepfakes,’ namely the creation and diffusion of images that appear perfectly plausible but false.”

Earlier this month, CNN reported that a finance worker at an undisclosed multinational firm in Hong Kong was caught in an elaborate scam powered by a deepfake video. The fraudsters tricked the worker by posing as real people at the company, including the CFO, over a video conference call. The worker remitted a whopping 200 million Hong Kong dollars (about $25.6 million) in what police there describe as a “first-of-its-kind” case.

Celebrities are also not immune from this onslaught of bad actors wielding deepfakes with malicious intent. Last month, for example, explicit AI-generated images of music superstar Taylor Swift circulated on X and found their way onto other social media sites, including Telegram and Facebook.

This is not the first time deepfakes have entered the zeitgeist. In 2020, The Atlantic reported that then-President Donald Trump’s “first use of a manipulated video of his opponent is a test of boundaries.” And in 2018, former President Barack Obama was portrayed saying words he never said in an AI-generated deepfake video.

But we are now in a major election year, with more voters than ever before heading to the polls in no fewer than 64 countries, representing almost 49% of the global population, according to Time. The impending elections have set the stage for a digital battleground where the lines between reality and manipulation are increasingly blurred.

The ease with which misinformation can be disseminated, coupled with the viral nature of social media, creates a perfect recipe for chaos. “On social media, many times people do not read past the headline,” says Stuart McClure, CEO of AI company Qwiet AI. “This could create a perfect storm as people will just react before understanding if something is real or not.”

Rafi Mendelsohn, VP of Marketing at Cyabra—the social threat intelligence company that X hired to address its fake-bot debacle—says “these tools have democratized the ability for malicious actors to make their activities that influence operations and their disinformation campaigns much more believable and effective.” In the battle against fake bots and deepfakes, “we’re currently seeing an inflection point,” Mendelsohn says.

THE ROLE OF RESPONSIBLE AI: DEFINING THE BOUNDARIES 

The discussion on combating the risks of generative AI is incomplete without addressing the crucial role of responsible AI. The power wielded by artificial intelligence, like any formidable tool, requires a commitment to responsible usage. Defining what constitutes responsible AI is a complex task, yet paramount in ensuring the technology serves humanity rather than undermining it.

“Auditable AI may be our best hope of understanding how models are built and what answers it will provide. Consider also ethical AI as a measure of healthy AI. All of these structures go to understand what went into building the models that we are asking questions to, and give us an indication of their biases,” McClure tells Fast Company.

“First, it’s crucial to understand the unique risks and vulnerabilities brought by AI,” he says. “Second, you must strengthen defenses across all areas, be it personnel, processes, or technology, to mitigate those new potential threats.”  

Although there are experts like Mike Leone, principal analyst at TechTarget’s Enterprise Strategy Group, who argue that 2024 will be the year of responsible AI, Mendelsohn warns that “we will continue seeing this trend because a lot of people are still willing to use these tools for personal gain and many people haven’t even gotten to use [them] yet. It’s a serious threat to personal brand and security at a level we cannot even imagine.”


It will take a multifaceted approach to effectively combat the misinformation and deepfake menace. Both McClure and Mendelsohn stress the need for rules, regulations, and international collaboration among tech companies and governments. McClure advocates for a “verify before trusting” mentality and highlights the importance of technology, legal frameworks, and media literacy in combating these threats. Mendelsohn underlines the importance of understanding the capabilities and risks associated with AI, adding that “strengthening defenses and focusing on responsible AI usage becomes imperative to prevent the technology from falling into the wrong hands.”

The battle against deepfakes and rogue bots is not confined to a single sector; it permeates our political, social, and cultural landscapes. The stakes are high, with the potential to disrupt democratic processes, tarnish personal reputations, and sow discord in society. As we grapple with the threats posed by AI-enabled bad actors, responsible AI practices, legal frameworks, and technological innovations emerge as the compass guiding us toward a safer AI future. In pursuit of progress, we must wield the power of AI responsibly, ensuring it remains a force for positive transformation rather than a tool for manipulation, deception, and destruction.

BREAKING DOWN THE ACTION IN D.C.

There are a number of bills floating around the Capitol that could—in theory, at least—help stop the proliferation of AI-powered deepfakes. In early January, House Representatives María Salazar of Florida, Madeleine Dean of Pennsylvania, Nathaniel Moran of Texas, Joe Morelle of New York, and Rob Wittman of Virginia introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act. The bipartisan bill seeks to establish a federal framework making it illegal to create a “digital depiction” of any person without permission.

Jaxon Parrott, founder and CEO of Presspool.ai, tells Fast Company that if passed into law, the No AI FRAUD Act would establish a system that protects people against AI-generated deepfakes and forgeries that use their image or voice without permission. “Depending on the nature of the case, penalties would start at either $5,000 or $50,000, plus actual damages, as well as punitive damages and attorney fees,” he says.

The DEFIANCE Act, another bill introduced in the House last month, suggests a “federal civil remedy” allowing deepfake victims to sue the images’ creators for damages. Then there’s the NO FAKES Act, introduced in the House last October, which aims to protect performers’ voices and visual likenesses from AI-generated replicas.

But whether these bills have any chance of becoming law is another matter.

“Legislation must navigate through both houses of Congress and receive the president’s signature,” says Rana Gujral, CEO at cognitive AI company Behavioral Signals. “There’s bipartisan support for addressing the harms caused by deepfakes, but the legislative process can be slow and subject to negotiations and amendments.”

As Gujral notes, one major hurdle could be debates over free speech and the technical challenges of enforcing such laws. Another challenge is the speed of technological advancement, which will likely outpace the legislative process.

On the other hand, Parrott says that, given that almost 20 states have already passed such laws, it’s likely more states will follow and that Congress will take action as well. “It’s worth noting that the NO AI FRAUD Act bill is cosponsored in the House by several representatives from both major political parties. Also, recent polling by YouGov reveals that the spread of misleading video and audio deepfakes are the one use of AI that Americans are most likely (60%) to say they are very concerned about.”

But he also notes that some opponents of the current language in the No AI FRAUD Act are concerned that it’s too broad in scope and would outlaw certain forms of political satire—in other words, violate First Amendment rights.

“If there were enough political pushback formed along these lines,” Parrott says, “congressional legislators likely could find a compromise that would strike a balance between protection against malicious deepfakes while ensuring traditional freedom of speech.”


ABOUT THE AUTHOR

Kolawole Samuel Adebayo is a tech writer with a decade of experience writing about technology, particularly cybersecurity, AI, 5G, and their applications in everyday living.
