The Twitter account @NYT_first_said tweets words from The New York Times that have never appeared in the newspaper before. Even though the account discloses that it’s a bot, not the work of an attentive reader, its automated nature isn’t necessarily obvious when its posts show up retweeted in your timeline.
So that users no longer have to guess which accounts are automated bots and which are run by humans at keyboards, Twitter is testing the addition of a label that will appear alongside bot account names, indicating the account is automated and providing a link to the Twitter account of the person who runs it.
The new feature comes after the company’s research revealed that users are confused about the role of automated posting on the platform, says Twitter product management director B Astrella. It’s part of a broader push to help people understand special types of accounts, he says, and Astrella indicated that Twitter may also soon offer labeling for memorial accounts that are preserved after a person’s death.
The goal isn’t so much to ferret out malicious bots trying to mislead users, he says, but to point out how “good” bots work. Astrella says such good bots include automated accounts that post real-time earthquake data, share ASCII art, share information about what’s in the newspaper, or just remind people to stay hydrated.
While the system is currently being tested with a few hundred accounts, Astrella expects that it will ultimately be required of all bots on the platform, which, as of last year, are already supposed to disclose in their profiles that they’re bots and indicate who made them. The new system will standardize that label rather than making people parse it out of the account description field, and it will surface the bot designation wherever an account’s tweets appear in user timelines, not just when people click through to its profile.
Good bots and bad bots
There isn’t a hard-and-fast definition of what counts as a bot. Astrella says it’s generally an account that publishes more than half of its posts automatically, but that other accounts might require the label if they, say, periodically post bursts of automated material.
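Twitter hasn’t published a formal algorithm for this, but the rough rule of thumb Astrella describes could be sketched as follows. The specific threshold values and the burst heuristic here are illustrative assumptions, not anything the company has confirmed:

```python
def needs_bot_label(posts: list[bool], burst_window: int = 10, burst_min: int = 8) -> bool:
    """Illustrative sketch of the rough definition described above.

    `posts` is a chronological list where True marks an automated post.
    An account qualifies if more than half of its posts are automated,
    or if any run of `burst_window` consecutive posts contains at least
    `burst_min` automated ones (a periodic "burst"). The window and
    threshold values are assumed for illustration.
    """
    if not posts:
        return False
    # Rule 1: a majority of the account's posts are automated.
    if sum(posts) * 2 > len(posts):
        return True
    # Rule 2: the account periodically posts bursts of automated material.
    for i in range(len(posts) - burst_window + 1):
        if sum(posts[i:i + burst_window]) >= burst_min:
            return True
    return False
```

Under this sketch, an account that is mostly human-run would still be flagged if it occasionally fires off a dense run of automated tweets, matching the “bursts of automated material” case Astrella mentions.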
Not all bots are good. Automated accounts have been used in the past to spread misinformation and propaganda on the platform: A study last year, for instance, found that nearly half of the accounts then sharing information about the coronavirus pandemic were bots. Various research over time has suggested that between 5% and 15% of accounts on the site are fake, and Twitter has previously taken down swaths of bots and other fake accounts used by governments for political manipulation.
The company says it has rules to make sure automated accounts don’t try to manipulate conversations on the social network or annoy users. For instance, bots aren’t allowed to send bulk, unsolicited direct messages, and they’re not allowed to automatically like content on the platform. The company has invested in rooting out such “inauthentic behavior,” which includes users posting commercial spam and bots attempting to steer conversations by tweeting or artificially popularizing posts. That will continue, Astrella says.
Bots that currently masquerade as nonautomated accounts may simply ignore the new rule, perhaps even exploiting it: users who assume all bots are labeled could be easier to mislead. But Astrella says he doesn’t currently see a lot of bots masquerading as real users.
“In the places where I’ve seen folks not disclosing, it’s mostly just because they don’t know that they’re supposed to,” he says.
Astrella says the feedback he’s heard from developers has been generally positive, especially since it gives them a more official way to take credit for their bots.
“This gives their ownership of the bot a little bit higher status on the profile,” he says.