
2024 elections: Get ready for AI-powered politics

With the arrival of new generative AI tools, much of the information we see about candidates and causes next year could be machine-made, including misinformation.

[Images: Hill Street Studios/Getty Images; Rawpixel]

By Mark Sullivan | 5 minute read

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m focusing on the growing urgency among government officials to set up regulatory guardrails addressing the dangers posed by new AI tools, such as the easy creation of political misinformation via deepfake technology. Also, I’m looking at Snap’s controversial (and lucrative) new chatbot. 

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com.

As 2024 looms, FEC mulls political deepfakes ban

The Federal Election Commission (FEC) is meeting this week to discuss whether it should create new rules to protect voters from political deepfakes. The meeting comes in response to a May 16 petition from Public Citizen asking for a full ban. The FEC has been slow to pass new rules governing the ways in which campaigns can use AI-generated content. Republican leadership on the commission has taken the position that AI content can be regulated via existing rules.

Next year, an estimated one billion people will cast votes in elections around the world. False, hateful, and divisive content hosted on social networks has already done great harm to the open public discourse that healthy elections depend on; but in the past, that content was created by humans. Next year, social networks will give wide distribution to new AI-created content. Voters could encounter heavily tested and hyper-targeted ads or memes, audio and video deepfakes of candidates, or social media bot accounts that can comment, discuss, and propagandize with human-level skill. New generative AI tools can churn out this content on the cheap, which could open the door to political operators who were previously priced out of mass-influence operations.

And we’re not ready. Most voters don’t yet have a full understanding of what generative AI can actually do. There is no law requiring that political organizations label their AI-generated text, images, or video as such. And the social media companies that distribute such content are shrinking, not growing, the elections teams that would normally protect voters from fake or misleading media. The startups that build the AI tools have only just begun establishing guidelines for political uses of their products, and their ability and motivation to enforce those rules during election season are as yet untested. A federal government entity, such as the FEC, may be needed to establish and enforce guidelines.

Why you should care about Europe’s AI Act

President Biden was in San Francisco on Tuesday, talking with AI and safety experts about how best to regulate AI without stifling the technology’s enormous potential.

AI regulation in the U.S. is likely to be influenced by European law, and the European Union is now getting close to passing its “AI Act,” a broad slate of rules covering the development and application of AI models. Last week, the European Parliament approved the working draft of the bill. European Parliament President Roberta Metsola called the AI Act “legislation that will no doubt be setting the global standard for years to come.”

The act contains language that will directly impact how companies like OpenAI, Google, and Stability AI do business in the EU. A study from the Stanford Institute for Human-Centered Artificial Intelligence shows that most of the major AI companies would have to make substantial changes in order to comply.

While the AI Act still faces political wrangling that could soften its language, it’s likely to require AI developers to be far more transparent than they are now about the inner workings of large models, as well as about the data used to train the models. It may include a ban on the common practice of scraping the web for copyrighted data for use in training. It will likely include a framework for regulating AI-powered recommendation engines, which it classes as “high risk.” And it may include a total ban on using “real-time biometrics” (face scanning, for example) in public places to support “predictive policing.”


The European lawmakers hope to have an agreed-upon set of draft regulations ready by year’s end. If all goes well, the EU could have a new AI law on the books by 2026. If U.S. lawmakers could pass an AI bill of their own on a roughly similar timeline, it would be a major win for the Biden administration.

Snap’s much-maligned “My AI” friend has a secret purpose—selling ads

Snap’s “My AI” feature, which lets users customize their own AI friend, hasn’t earned great reviews since its launch in February, but giving users an AI companion may have been only part of the goal. Snap says more than 150 million users have sent over 10 billion messages to their My AI friends. What the chatbot learns from those messages is used to further customize its output for each user, but Snap says it’s also testing ways of using that data to target ads based on a user’s specific traits and needs.

Generative AI chatbots may not help companies sell more ads, exactly, but they could help companies sell better ads: ones that are far more personalized and appealing to the user. Most of today’s high-profile AI startups are focused on selling models and tools to enterprises looking to reinvent their business practices. But consumer AI is coming soon, and personalized AI assistants will be a main focus. Snap is one of many companies trying to figure out how to monetize them.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

