
2020 ELECTION

We can have social media as we know it, or we can have democracy

I’m a digital propaganda expert, and I’m horrified by what I’m seeing ahead of the 2020 election.

[Photos: Amir Hanna/Unsplash; Aaron Burden/Unsplash]

BY Samuel Woolley

In early September, President Trump retweeted a video allegedly showing a “black lives matter/antifa” activist pushing a woman into a subway car. The video was nearly a year old, and the man in question was mentally ill and had no connection to either group.

As a researcher studying social media, propaganda, and politics in 2016, I thought I’d seen it all. At the time, while working at the University of Oxford, I was in the thick of analyzing Twitter bot campaigns pushing #Proleave messaging during Brexit. As a research fellow at Google’s think tank Jigsaw that same year, I bore witness to multinational disinformation campaigns aimed at the U.S. election.

That was nothing compared to what I am seeing in 2020. The cascade of incidents surrounding this year’s U.S. presidential contest and a multitude of other contentious political events around the globe is staggering. From doctored videos and “smart” robocalls to spoofed texts and, yes, bots, an overwhelming amount of disinformation is circulating on the internet.

Meanwhile, political polarization and partisanship inflamed by these technologies continue to rise. As I sift through social media data relating to the ongoing U.S. election, I’m constantly confronted with new forms of white supremacist, anti-Black, anti-Semitic, and anti-LGBTQ content across massive social media sites like YouTube, Twitter, and Facebook. This rhetoric also shows up on alternative platforms and private chat applications such as Parler, Telegram, and WhatsApp.

I thought we’d have made progress in addressing the problems of propaganda and disinformation on social media by now, and on the face of things we have. Major tech firms have banned political advertisements, flagged misleading posts by politicians, and tweaked their algorithms in an attempt to stop recommending conspiracy-laden content. But in the grand scheme of things, these actions have done little to quell the sheer volume of both low-tech and algorithmically generated propaganda online.

In the run-up to the 2020 U.S. election, there has been a great deal of fervor among academics and journalists about the potential threat of high-tech deepfake videos, which use AI to create realistic representations of people saying or doing things they never said or did. So far, we haven’t had a major political disaster involving the use of deepfakes in this year’s contest. We have, though, seen a profusion of low-budget, and perhaps equally damaging, “cheap fake” videos.

Old technological tricks are clearly succeeding in sowing false news and information. In the spring, President Trump and White House social media director Dan Scavino retweeted a fake video of Joe Biden allegedly endorsing Trump for reelection. The simply edited video racked up millions of views. A month later, the president retweeted a very low-quality deepfake of Biden “twitching his eyebrows and lolling his tongue.”

This isn’t limited to social media. In the latest example of low-tech propaganda, people in Detroit received robocalls spreading lies about mail-in voting. The calls used “racially-charged stereotypes and false information to deter voting by mail” and were specifically created to target citizens in the key swing state of Michigan. Back in March, similar automated calls spread disinformation about fake coronavirus tests to vulnerable Americans.

In my book The Reality Game, I discuss a near future in which AI-enabled technologies will be used to manipulate public opinion. In 2016, a lot of people wrote about how artificial intelligence had brought about the “info-pocalypse,” but it really hadn’t, not yet. Most of the tools, including simple bots, were clunky and relatively easy to detect. But I’m sad to say that alongside low-tech propaganda campaigns, we are now also experiencing a new wave of much more sophisticated technological manipulation.

We have known for years that this was happening. My research team has catalogued computational propaganda campaigns going back as early as 2010, and the 2016 election outcome was regarded by many as the “cyber Pearl Harbor” that would awaken a national response. But despite a few high-profile Congressional hearings, real regulation of social media companies has remained inexcusably sparse, not nearly sufficient to slow the spread of disinformation.

It comes down to this: We can have social media as we know it, or we can have democracy. If we don’t force major changes to the way big tech platforms function, the choice has already been made.


Samuel Woolley, PhD, is the author of the book The Reality Game: How the Next Wave of Technology Will Break the Truth. He is an assistant professor in the School of Journalism and program director of the propaganda research team at the Center for Media Engagement, both at the University of Texas at Austin.
