How Twitter Bots Fool You Into Thinking They Are Real People

When a group of researchers made a small army of Twitter bots, many of the fake accounts amassed relatively large followings and high Klout scores. Here’s how they’re fooling us.

Twitter hates its estimated 20 million automated users. The company has made acquisitions, filed lawsuits, and tweaked its product in the name of combatting spam. But bots still squeak by its filters, and it’s not just the filters they’re tricking. It’s us.

Several news outlets, including this one, reported that a computer program had apparently convinced several human judges that it was a 13-year-old boy, which would have made it the first machine to pass the Turing Test. It turns out the test was probably a fake. But people are already being fooled by simple chatbots all the time on Twitter. To pass as a real account and gain influence, the best bots interact with other Twitter users.

To see how bots pull off this deception, researchers from the Federal University of Minas Gerais in Brazil and the Indian Institute of Engineering Science and Technology released a study last month in which they programmed 120 bots with simple strategies for acquiring followers. After a month, only 31% of the fake accounts had been suspended by Twitter, and together the bots had received a total of 4,999 follows from 1,952 distinct users.

More than 20% of the bots acquired at least 100 followers within a month, which is more followers than about half of all Twitter users have. When the study’s authors submitted the paper to a social media conference, they noted that some of the bots had higher Klout scores than the members of the review committee.

Here’s what made the most popular bots successful:

High activity level: The bots were programmed to perform two consecutive activities: posting a new tweet or retweeting one, and then following a random number of target users. Half of them ran this routine once every hour (except between midnight and 9 a.m., when they “slept” to look more human). The other half ran it once every two hours (minus the same sleeping hours). Unsurprisingly, the more active bots were more popular (as measured by followers, Klout score, and message-based interactions with other users) than the less active ones.
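For illustration, here is a minimal sketch of what that two-step routine might look like. The paper does not publish its code, so post_or_retweet and follow_targets are hypothetical placeholders standing in for real Twitter API calls, and the follow-count range is an assumption:

```python
import random
import time
from datetime import datetime

# Hypothetical stand-ins for real Twitter API calls; these placeholders
# only mark where such calls would go.
def post_or_retweet():
    print("post a new tweet or retweet one matching the target terms")

def follow_targets(count):
    print(f"follow {count} target users")

def run_bot(interval_hours=1):
    """Repeat the study's two-step routine on a fixed schedule."""
    while True:
        now = datetime.now()
        # "Sleep" between midnight and 9 a.m. so the account looks more human.
        if now.hour >= 9:
            post_or_retweet()
            follow_targets(random.randint(1, 5))  # follow count range is assumed
        time.sleep(interval_hours * 3600)

# The more active half of the bots would run with interval_hours=1,
# the less active half with interval_hours=2.
```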

Making stuff up: Half the bots were exclusively retweeters: they found content that matched a set of target terms and recycled it. The other half also retweeted content but, in addition, generated their own tweets using an algorithm that produced semi-believable sentiments like “Night y’all ???!” and “I don’t have an error in it :).” You would think that automated text would easily out the bots as non-humans, but the bots that wrote their own copy actually acquired marginally higher levels of popularity. “This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style,” the authors suggest, “so that even simple statistical models can produce tweets with quality similar to those posted by humans.”
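The paper describes these generators only as simple statistical models; a word-level bigram Markov chain is one common choice, sketched below with a toy corpus rather than the researchers’ actual training data or model:

```python
import random
from collections import defaultdict

# Toy corpus standing in for the tweets a bot has seen; the study's real
# training data is not described in detail.
corpus = [
    "night y'all have a good one",
    "i don't have an error in it",
    "good night everyone see you tomorrow",
    "i don't think it works yet",
]

# Build a word-level bigram model: each word maps to the words seen after it.
model = defaultdict(list)
for tweet in corpus:
    words = tweet.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)

def generate_tweet(max_words=10):
    """Walk the bigram chain from a random start word."""
    word = random.choice(list(model.keys()))
    out = [word]
    for _ in range(max_words - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate_tweet())
```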

Targeting users along topic lines: One-third of the bots followed users at random. Another third followed users who posted tweets related to software development. The last third followed users who were socially connected among themselves. The group that followed people who were interested in software, but who didn’t necessarily interact with each other, became the most popular.
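As a rough sketch, those three targeting strategies could be expressed as simple selection functions over a set of candidate users. The handles, tweets, friend lists, and keyword list below are invented for illustration; the study drew its targets from real Twitter accounts:

```python
import random

# Assumed toy data: user handles mapped to recent tweet text and friends.
users = {
    "alice": {"tweets": "shipping a new release of our software today", "friends": {"bob"}},
    "bob":   {"tweets": "debugging a build pipeline all afternoon", "friends": {"alice", "carol"}},
    "carol": {"tweets": "great pasta recipe tonight", "friends": {"bob"}},
    "dave":  {"tweets": "watching the match", "friends": set()},
}

SOFTWARE_TERMS = {"software", "debugging", "build", "release", "code"}

def random_targets(k=2):
    """Strategy 1: follow users picked at random."""
    return random.sample(list(users), k)

def topic_targets():
    """Strategy 2: follow users whose tweets mention software development."""
    return [handle for handle, data in users.items()
            if SOFTWARE_TERMS & set(data["tweets"].split())]

def connected_targets(seed="alice"):
    """Strategy 3: follow a socially connected group around a seed user."""
    return [seed] + sorted(users[seed]["friends"])

print(topic_targets())  # the topic-based strategy performed best in the study
```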

None of these bot strategies is particularly sophisticated. “This is something that an undergraduate could do,” says Fabricio Benevenuto, a professor of computer science at UFMG who worked on the paper.

Benevenuto says he thinks humans' acceptance of bots results more from a culture of returning favors on Twitter than from an evaluation of the robot accounts' humanity. "The bot may, for instance, mention someone," he says. "Then [the owner of that account] says, that's nice, let me mention him."

There may be a day when bots are all as sophisticated as the 13-year-old boy impersonator that supposedly passed the Turing Test. The bots hijacking the site today, however, seem to be capable of pulling it off with something less complicated: real people who aren't paying attention.

[Robot Hand: Holbox via Shutterstock]

