
#FireMcMaster, Not Damore: Twitter Bots Are Thriving, And They’re More Lethal Than Ever

Political Twitter bots, many with ties to Russia, didn’t go away after the election, and experts say they’re getting more sophisticated.

[Photo Illustration: Joel Arbaje for Fast Company, Source Photo: Flickr user Photographic Archive, Alexander Turnbull Library]

James Damore, the Google engineer recently fired for his controversial memo on workplace diversity and women in tech, has quickly become a cause célèbre for the political right. He’s already appeared on at least two conservative YouTube broadcasts, and his memo has received praise from right-wing publications like Breitbart. But according to researchers at the Alliance for Securing Democracy, Damore’s story has also gone viral, thanks partly to a network of Twitter trolls and bots with ties to the Russian regime.


“Russia frequently amplifies content related to the far-right in both the U.S. and in Europe, but it does so opportunistically,” the researchers write. “It’s easier to amplify something people are already talking about than to create a trend out of thin air.”

Twitter bots—bits of automated code that post to the service—have become a disturbingly big force in worldwide political rhetoric, promoting viral partisan memes and fabricated stories that can dominate the news cycle. While some bots on the service are innocuous, like automated tools that share genuine news stories on a particular topic or send out emergency weather alerts, many are designed to manipulate people into believing they’re actual humans sharing interesting news.
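For a sense of how simple that automation can be, here is a minimal sketch of the innocuous kind of bot described above, written in Python with the Tweepy library; the credentials and the weather_alerts() helper are hypothetical placeholders, not real values or a real feed.

```python
# A minimal sketch of an innocuous Twitter bot: it reads alerts from
# some feed and posts each one as a tweet. The credentials and the
# weather_alerts() helper are hypothetical placeholders.
import time
import tweepy

def weather_alerts():
    """Hypothetical helper standing in for a real alert feed."""
    yield "Flash flood warning for Orleans Parish until 9 p.m."

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

for alert in weather_alerts():
    api.update_status(alert)  # post the alert as a tweet
    time.sleep(60)            # pace posts to respect rate limits
```

The manipulative variety works the same way mechanically; the difference lies in what it posts and in how hard it tries to pass as a person.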

“Humans are vulnerable to this manipulation, retweeting bots who post false news,” write a group of Indiana University researchers who studied Twitter bots around the 2016 election. “Successful sources of false and biased claims are heavily supported by social bots.”

Some political bots are created by individual Twitter users looking to promote their own views, drive traffic to their own websites, or simply gain attention and influence. They’ve made appearances in U.S. politics at least since the 2010 Senate campaign in Massachusetts, when Republican Scott Brown defeated Democrat Martha Coakley.

“At the time, the bots were very, very simple, and they were used to attack the Democratic candidate,” says Emilio Ferrara, research assistant professor at the University of Southern California, who has studied social media bots.

Since then, they’ve become more sophisticated, partly in an effort to seem more human. Instead of just posting a handful of partisan messages every hour, for example, they might intersperse commentary on different subjects.


Experts say Russia in particular has embraced the practice as part of a rapid-fire, all-media propaganda campaign that a 2016 RAND Corp. report dubbed “the firehose of falsehood.” In just the past week, Russian bots have reportedly promoted Damore’s story, gone after former attorney general Loretta Lynch, and taken up a far-right call for President Trump to fire National Security Adviser H.R. McMaster. Before that, Russian bots were part of a well-documented disinformation campaign during the 2016 U.S. presidential election, were harnessed to spread right-wing messages during this year’s election in France, and were used to influence political discussions at home in Russia.

Other countries have used Twitter bots for political purposes as well: A report by Samuel Woolley, director of research at the Oxford Internet Institute’s Computational Propaganda Project, points to bots deployed by pro-government forces in China, Mexico, and Turkey, among other countries. In some cases, bots have been used to stifle conversation around controversial subjects or social movements, posting endless streams of spam tagged with particular hashtags until actual humans can’t get a tweet in edgewise.

“We’ve observed examples of that, for example in Mexico, where large amounts of bots are created to prevent people from coordinating social movements on Twitter,” says Filippo Menczer, professor of informatics and computer science at Indiana University.

And when they do post about political topics, the links, images, and ideas they share can spread quickly through networks of like-minded people.

“If you have some very polarized agenda ordinarily that a set of bots is pushing, those are more likely to become viral in the respective echo chamber where the bots operate,” says Ferrara.

Propaganda bots might be bad for democracy, but bot spam traffic of any kind is also bad for business at Twitter and other social networks. Bots don’t click on ads, Menczer observes. And they can cast doubt on social networks’ audience metrics and alienate real users by drowning out posts they actually want to see.


“If people think that the platform is full of bots, then they will leave, and the platforms won’t make money from ads,” he says.

A research paper published earlier this year by Menczer, Ferrara, and other researchers estimates that between 9% and 15% of active Twitter accounts are actually bots. Twitter has cautioned that “research conducted by third parties about the impact of bots on Twitter is often inaccurate and methodologically flawed.” In its quarterly report last week, the company estimated that “false or spam accounts” made up less than 5% of monthly active users as of the end of last year, but acknowledged that the issue presents a risk to its business.

“Our actions to combat spam require the diversion of significant time and focus of our engineering team from improving our products and services,” the company told investors in the same report. “If spam increases on Twitter, this could hurt our reputation for delivering relevant content or reduce user growth and user engagement and result in continuing operational cost to us.”

The company declined to make anyone available for an interview with Fast Company, but a spokesperson pointed to a June blog post highlighting efforts to stop spam and misinformation.

“When we do detect duplicative or suspicious activity, we suspend accounts,” wrote Colin Crowell, VP of public policy, government, and philanthropy, in the post. “We also frequently take action against applications that abuse the public API to automate activity on Twitter, stopping potentially manipulative bots at the source.”

But as bots become more complex and humanlike in their behavior, researchers say it naturally becomes harder and more expensive to detect them. Automated tools can use machine learning to identify bots fairly accurately, but they’re not without false positives, says Menczer. And if services automatically take down suspicious accounts too aggressively—using bots to take down bots—they risk being accused of censorship when they accidentally ban humans for sharing controversial political content.
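To make the detection side concrete, here is a minimal sketch of the feature-based machine-learning approach such tools rely on, using scikit-learn; the features, training data, and labels are illustrative assumptions, not drawn from any real study.

```python
# A minimal sketch of machine-learning bot detection: train a
# classifier on per-account features, then score unseen accounts.
# All numbers here are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [tweets per day, follower/friend ratio, share of retweets, account age in days]
X_train = np.array([
    [450.0, 0.02, 0.98,   30],   # bot-like: high volume, mostly retweets
    [  6.0, 1.10, 0.20, 2900],   # human-like
    [380.0, 0.05, 0.95,   12],
    [  3.0, 0.90, 0.10, 1500],
])
y_train = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: predict_proba returns [P(human), P(bot)].
account = np.array([[220.0, 0.04, 0.90, 45]])
print(clf.predict_proba(account)[0][1])  # estimated probability the account is a bot
```

Where the probability cutoff is set determines the trade-off Menczer describes: a lower bar catches more bots but suspends more humans by mistake.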


Ultimately, social media bots may prove similar to email spam, he says. While huge numbers of automated junk emails still travel the internet every day, modern filtering methods mean most never make it to anyone’s inbox, and many users are now sophisticated enough to disregard the spam they do come across. Continuing research might help weed out unwanted bot posts on social networks, now that the issue has been firmly identified as a problem.
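The email analogy maps onto code fairly directly. Here is a minimal sketch of the text-classification approach behind modern spam filters, again using scikit-learn; the example messages are invented.

```python
# A minimal sketch of a text-based spam filter: a bag-of-words model
# feeding a naive Bayes classifier. The messages are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a FREE iPhone click now!!!",
    "Meeting moved to 3pm, see agenda attached",
    "You have been selected for a cash prize",
    "Can you review my draft before Friday?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Claim your free prize now"]))  # likely flags this as spam
```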

“I don’t think the problem will ever be completely eliminated,” Menczer says. “But it will possibly make it better, so that people are not exposed to a huge amount of misinformation the way that we are today.”

About the author

Steven Melendez is an independent journalist living in New Orleans.
