
AI deepfakes get very real as 2024 election season begins

Recent incidents with a fake Biden robocall and explicit Taylor Swift deepfakes could further ratchet up disinformation fears.

[Photo: VA/Eugene Russell-Army Veteran/Flickr; Paolo Villanueva/Flickr]

By Mark Sullivan | 6 minute read

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

AI deepfake tech is advancing faster than legal frameworks to control it 

Over the past two weeks the world got a preview of the kind of damage AI deepfakes are capable of inflicting. Some New Hampshire voters received robocalls featuring an AI-generated Joe Biden telling them not to vote in the state primary election. Just days later, 4chan and Telegram users generated explicit deepfakes of pop star Taylor Swift using a diffusion model-powered image generator; the images quickly spread across the internet. Though details remain scarce in both cases—we don’t yet know who created the fake Biden robocall, nor do we know what tool was used to make the Swift deepfakes—it’s clear we may just be at the beginning of a long and ugly road.

Former Facebook Public Policy director Katie Harbath tells me deepfakes might be an even bigger problem for people outside the celebrity class. AI-generated depictions of people like Biden and Swift get a lot of attention and are quickly debunked, but everyday people—say, someone running for city council, or an unpopular teacher—could be more vulnerable. “I’m especially worried about audio, as there are just less contextual clues to tell if it’s fake or not,” Harbath says.

The deepfakes are particularly troubling because they’re as much a product of the social media age as they are of the AI age. (The Swift images spread like wildfire on X, which struggled to contain any such posts in part because its owner, Elon Musk, decided to gut the platform’s content moderation teams when he bought the company in 2022.)

Social media platforms have little legal incentive to quickly extinguish such content, in large part because Congress has failed to regulate social media. And social platforms benefit from Section 230 of the 1996 Communications Decency Act, which shields “providers of interactive computer services” from liability for user-created content.

The Biden robocalls, on the other hand, underscore the fact that it’s possible to commit such dastardly AI crimes without leaving a lot of bread crumbs behind. Bad actors—domestic or foreign—may be emboldened to circulate even more damaging fake content as we move deeper into election season.

Several deepfake bills have been introduced in Congress, but none have come anywhere near the president’s desk. Last summer, Republicans on the Federal Election Commission blocked a proposal to more explicitly prohibit the deployment of AI-generated depictions. Biden has already assembled a legal task force to quickly address new deepfakes, but AI works at the lightning speed of social networks, not at the slower plod of courts. (If there’s a sliver of hope, it’s that some states, most recently Georgia, are considering classifying deepfakes as a felony.)

Even if the AI tool used to create a deepfake can be detected, it’s questionable whether the people who made the AI tool can be held liable.

A central legal question may be whether Section 230’s protections extend to AI tool makers, says First Amendment lawyer Ari Cohn at the tech policy think tank TechFreedom. Are generative AI companies such as Stability AI and OpenAI shielded from lawsuits related to content users create with their image generators or chatbots? Section 230 aims to protect “providers of interactive computer services,” which could easily describe ChatGPT. Some argue that because generative AI tools create novel content, they’re not entitled to immunity under Section 230, while others claim that because the tool simply fulfills a content request, responsibility lies solely with the user.

It remains to be seen how the courts will decide that question, Cohn says. Even more interesting is whether the courts’ position will extend to makers of open-source generative AI tools. Deepfake makers prefer open-source tools because they can easily remove restrictions on what types of content can be produced, and remove watermarks or metadata that might make the content traceable to a tool or a creator.

AI in biology will be used for way more than drug discovery 

Though science will find meaningful uses for large language models, it’ll likely be other kinds of models working with very different data sets that do the heavy lifting in solving the world’s big problems.

While LLMs deal in words, scientific problems are often expressed in other terms—numerical vectors defining things like DNA sequences and protein behaviors. Ginkgo Bioworks head of AI Anna Marie Wagner says that humans invented language, so it’s taken a long time for AI to be able to do things with language that humans can’t already do. With new LLMs, we now have a tool that can read 100 documents in five minutes, and summarize their similarities and differences. 

“Human beings did not invent biology—we are students of it, so AI is already much better at it than humans, and has been for a very long time, at certain types of tasks, like taking in massive amounts of biological data and making sense of it,” Wagner says.

The biology world uses AI in bioinformatics as a way of managing the vast amounts of information scientists collect to understand the behaviors of the most basic building blocks of life—DNA, RNA, and proteins. But unlike the field of natural language, Wagner says, biology is still very early in the process of discovering, and codifying, all the possible ways that various sequences of DNA can manifest (via RNA, then proteins) in the human body, or in the body of a microbe, or in a stalk of corn. Understanding the logic behind each possible step in that process implies a mind-bogglingly large body of information. 
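
To make that concrete, here is a minimal sketch of the DNA-to-RNA-to-protein path Wagner describes, using Biopython as an illustrative tool (an assumption; the article doesn’t name Ginkgo’s software) to transcribe a toy DNA sequence and translate it into an amino acid chain.

    # A rough sketch of the path described above: DNA -> RNA -> protein.
    # Biopython is used here only as an illustration; it is not named in the article.
    from Bio.Seq import Seq

    dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")  # toy coding sequence
    mrna = dna.transcribe()      # DNA -> messenger RNA (T becomes U)
    protein = mrna.translate()   # RNA codons -> amino acid chain
    print(mrna)                  # AUGGCCAUUGUAAUGGGCCGCUGAAAGGGUGCCCGAUAG
    print(protein)               # MAIVMGR*KGAR*  (* marks stop codons)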

Ginkgo has been using AI for years to help design proteins that catalyze certain chemical reactions, develop new drugs, and design DNA sequences in synthetic biology. Wagner says people often associate biology with the pharma industry and biotech, and while that’s where the money is today, biology will be applied to a much wider set of challenges than just drug discovery in the future.

“Biology is the only substrate, the only scientific discipline, that is capable of solving the great challenges of the world—food security, climate change, human health—all of those are biological problems,” says Wagner. “There has already been so much value created [with AI], even with the tiny little surface-scratching work that we’re doing now.” 

Microsoft’s New Future of Work report is all about AI

Not surprisingly, Microsoft’s recently released New Future of Work report focuses on the use of AI in the workplace. The report, which draws on surveys of people both within Microsoft and outside the company, yields some eye-catching stats and themes. For example, people completed common writing tasks 37% faster on average when they used AI tools, and consultants produced work rated over 40% higher in quality on a simulated consulting project. Meanwhile, users solved simulated decision-making problems twice as fast with LLM-based search as with traditional search. However, on some tasks where the LLM made mistakes, BCG consultants with access to the tool were 19 percentage points more likely to produce incorrect solutions.

A few more findings from the Microsoft report: 

  • Researchers think that as AI tools are more widely used at work, the role of human workers will shift toward “critical integration” of AI output, requiring expertise and judgment. 
  • AI assistants might be used less as “assistants” and more as “provocateurs” that can promote critical thinking in knowledge work. AI provocateurs would challenge assumptions, encourage evaluation, and offer counterarguments.
  • Writing prompts for AI models remains hard. Prompt behavior can be brittle and nonintuitive: seemingly minor changes, including capitalization and spacing, can result in dramatically different LLM outputs, as the sketch after this list illustrates.
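
To make that last point concrete, here is a minimal sketch that sends near-identical prompts, differing only in capitalization and spacing, to a model and prints the responses side by side. It assumes OpenAI’s Python client and an illustrative model name; the prompts are hypothetical and not drawn from the Microsoft report.

    # Minimal sketch: compare responses to prompts that differ only in surface form.
    # Assumes the OpenAI Python client with OPENAI_API_KEY set in the environment;
    # the model name and prompts below are illustrative, not from the report.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "Summarize the key risks of AI deepfakes in one sentence.",
        "summarize the key risks of ai deepfakes in one sentence.",   # lowercased
        "Summarize  the key risks of AI deepfakes  in one sentence.", # extra spaces
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,        # hold sampling constant so differences come from the prompt
        )
        print(repr(prompt))
        print(response.choices[0].message.content)
        print("-" * 40)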

You can read the full report here.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

