
The FCC just put AI robocaller creeps on notice during election year

The agency has been helping states investigate the calls; now it’s providing more tools for enforcement and conviction.

[Photos: Michael Burrell/Getty Images; Pixabay/Pexels]

By Mark Sullivan

The Federal Communications Commission (FCC) on Thursday made it illegal to place AI-generated robocalls to voters in the U.S. 

The issue came into sharp focus when New Hampshire voters received AI-generated calls in the voice of Joe Biden implying that if they cast a ballot in the state’s primary they couldn’t vote in the general election. 

The FCC commissioners voted unanimously to make explicit that the 1991 Telephone Consumer Protection Act, which already restricts calls that use artificial or prerecorded voices, also covers AI-generated calls. Starting now, the FCC can fine companies that place AI robocalls up to $23,000 per call, and can block service providers that carry the calls. Robocall recipients can also now sue for up to $1,500 in damages per call.

But according to the consumer rights advocacy group Public Citizen, the FCC’s move still doesn’t cast a wide enough net.

“The Telephone Consumer Protection Act applies only in limited measure to election-related calls,” says Public Citizen president Robert Weissman. “The Act’s prohibition on use of ‘an artificial or prerecorded voice’ generally does not apply to noncommercial calls and nonprofits.” 

Still, the FCC’s ruling would likely apply to Walter Monk, the Texas man who New Hampshire authorities believe was behind the Biden robocalls. Monk is the proprietor of Life Corporation, and authorities believe the calls were distributed by the Texas carrier Lingo Telecom. 

The FCC’s action may make robocallers even more careful about covering their tracks. Generating an AI robocall is relatively simple with available tools (New Hampshire authorities believe Monk may have used ElevenLabs’ voice-cloning tool), and techniques for masking the origin of a call are readily available. 

But law enforcement’s investigative powers are becoming more high-tech as well. The New Hampshire authorities used traceback technology to follow the robocalls back through the communications network to the originator of the calls.

The FCC has been helping state authorities with both federal resources and investigation tactics to hunt down robocallers. The federal-state partnership can help build an air-tight case against suspected violators when it’s time to prosecute. Now, the FCC has provided prosecutors more tools for making robocallers pay.

Still, experts say that all these efforts won’t be a silver bullet for AI robocalls and other misinformation this election season. In the end, the FCC’s action, and the attention it gets, may help more than the threatened penalties. It may simply remind voters that those election-year dinnertime calls may not be what they seem.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
