
19 Ways Twitter Can Make Itself Safer Right Now

There may be no miracle cures for what ails Twitter, but insiders and outsiders alike have constructive ideas for improving its “conversational health.”

[Illustration: Delcan & Company; Prop stylist: Kathryn Brilinsky]

By Austin Carr and Harry McCracken

For our new cover story on Twitter, which charts the company’s struggles to rid its service of trolls, bots, and other assorted troublemakers, we spoke with more than 60 sources, including 44 former Twitter employees–most of whom requested anonymity–as well as third-party experts. During this process, we heard endless tales of internal dysfunction at Twitter, and plenty of criticism of the company for past mistakes and missed opportunities. But these sources also shared smart suggestions on ways Twitter can do more to protect its platform from abuse and toxicity. Many would be relatively straightforward to implement, requiring neither AI breakthroughs nor massive hiring sprees.

1. Spell Out The Rules

Social platforms pour immense resources into dealing with the consequences once someone has violated their user standards, which in the case of Twitter are known as the Twitter Rules. Most users, of course, never bother to bone up on those mandates in the first place. Susan Benesch, whose Dangerous Speech Project is a member of Twitter’s Trust and Safety Council, says she’s implored executives to raise the visibility of these policies, which outline what you can’t say on the service. “They say, ‘Nobody reads the rules,’” she recounts. “And I say, ‘That’s right. And nobody reads the Constitution, but that doesn’t mean we shouldn’t have civics classes and try to get people to read it.’”

2. Give Trust And Safety (Even) More Resources

In a live Q&A on safety Twitter held last month, CEO Jack Dorsey said that general counsel Vijaya Gadde, whose reports include the trust and safety team, “feels she’s in a position to always have to ask for resources, and pull people along on certain changes we know we need to make, and that speaks to a lack of a collaboration, a lack of shared goals, a lack of shared prioritization.” The same complaint could have been made in nearly every year of Twitter’s existence: cries for more safety resources date back to 2010, yet they were never heeded. That needs to change.

3. Better Align The Safety Incentives

The company’s trust and safety team (which reports to legal and oversees policy) and its user services team (which reports to engineering and is responsible for operationalizing those policies through enforcement) are too often at odds. Many sources bemoaned the friction between these two groups; as we report in our feature, they constantly fought over how to enforce certain policies and what content to allow on the platform. Simply put, some sources argue that the trust and safety team moved too slowly, coming across as idealistic about free speech and naïve about the realities of policing the service, while the user services team moved too fast, focusing on Whac-a-Mole and quarterly goals rather than addressing underlying platform problems in a cohesive way.

4. Redefine Twitter’s Belief In Free Speech For This Era

Does the company still consider itself part of the “free speech wing of the free speech party”? Dorsey’s actions would suggest that his views on this long-held corporate value have evolved. Yet Twitter hasn’t clarified what that ethos means today. How has it changed since Dorsey’s return in 2015, or since the company first began codifying these ideals–“let the tweets flow”–around 2011?

5. Productize More Policies

One refrain from our reporting: Twitter has gotten better about developing policies to counter abuse on its platform, but it still lags behind services like YouTube in automating how these policies are enforced. A number of sources wished that Twitter would develop more internal tools to scale policy enforcement so it could rely less on users self-reporting abuse. In the long term, that would also let it rely less on human employees manually combing through this flagged content. One user services agent estimated that 75% of the work today was being done manually, while just 25% was being flagged by an algorithm or another automated system. This approach needs more balance between human and engineering solutions.

6. Label The Bots

There’s nothing inherently pernicious about using automation to push tweets to a Twitter account without human intervention; it’s how plenty of media outlets promote their content to readers, for example. But when bots pose as people–such as everyday Americans spouting sentiments crafted by professional trolls in other countries–they become a problem for Twitter and society. The company recently imposed new restrictions on automated tweeting designed to foil abuse, but it could go further by requiring that such tweets be clearly labeled as such. “Simple measures to apply some level of transparency to those sorts of activities would go a long way toward improving the user experience,” says Jonathon Morgan, founder of Data for Democracy. “Most people are duped into believing that the public is interested in some story or some idea when in fact that crowd has been manufactured.”

7. Redefine User Metrics

Twitter has frequently been accused of failing to deal with bots adequately because eliminating them would shrink its monthly active users, thereby hurting its ability to make money through advertising. Though the company already says that its flat user numbers are due in part to its crackdown on malicious accounts, it could change the conversation by declaring that it will henceforth subtract bots–which don’t pay attention to ads anyhow–from its stats. “It would be a statement to measure not users, but human users,” says Anna Westelius, director of security research at bot-fighting company Distil Networks.

8. Keep Track Of Ongoing, Targeted Abuse

Much of the abuse on Twitter is not one-off instances of bad behavior but a form of digital stalking which only grows more dangerous as incidents pile up. The company should assess such cases in their entirety, which requires documenting them on a continuing basis. “Let’s say your address was doxed on Twitter and you have someone that was obsessed with you and kept tweeting out your public address and phone number,” says Brianna Wu, one of the principal targets of the 2014 online harassment campaign known as Gamergate. “As best as I can tell, Twitter doesn’t really keep notes in your Twitter account file to follow up on that well.” (Overall, however, Wu does praise Twitter for steps it’s taken to make life harder for trolls.)

9. Create A Fast-Track Queue For Abuse Tickets

Right now, reports of abuse on Twitter enter a single queue of tickets, which the company’s user support agents manually go through to review flagged content. The company should recognize that some users and third-party organizations are more adept at submitting material that is actionable or helpful in refining policy. Twitter could determine which people or groups have a proven track record in safety reporting and allow their tickets to enter a second, accelerated queue, escalating them faster to support agents. YouTube has a variation of this system; it would be like a verification status for users specializing in platform safety, potentially creating Twitter’s equivalent of the neighborhood watch.
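
To make the idea concrete, here is a minimal sketch in Python of how a two-tier review queue might route tickets from reporters with a proven track record ahead of the general pool. The names (`AbuseReport`, `ReviewQueue`, `TRUSTED_REPORTERS`) and the example reporter IDs are hypothetical illustrations, not Twitter’s actual system or API.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

# Hypothetical set of reporter IDs with a proven safety-reporting track
# record (e.g., vetted researchers or advocacy organizations).
TRUSTED_REPORTERS = {"dangerous_speech_project", "imposter_buster"}

@dataclass
class AbuseReport:
    reporter_id: str
    reported_tweet_id: str
    reason: str

class ReviewQueue:
    """Two-tier ticket queue: trusted reporters' tickets are reviewed first."""

    def __init__(self) -> None:
        self.fast_track = deque()  # tickets from proven reporters
        self.standard = deque()    # everyone else

    def submit(self, report: AbuseReport) -> None:
        # Route the ticket based on the reporter's track record.
        if report.reporter_id in TRUSTED_REPORTERS:
            self.fast_track.append(report)
        else:
            self.standard.append(report)

    def next_ticket(self) -> Optional[AbuseReport]:
        # Support agents drain the fast-track queue before the standard one.
        if self.fast_track:
            return self.fast_track.popleft()
        if self.standard:
            return self.standard.popleft()
        return None

# Example: a trusted organization's report jumps ahead of an earlier one.
queue = ReviewQueue()
queue.submit(AbuseReport("random_user", "tweet/123", "spam"))
queue.submit(AbuseReport("imposter_buster", "tweet/456", "impersonation"))
print(queue.next_ticket().reporter_id)  # -> imposter_buster
```

In practice the "trusted" set would presumably be earned and revocable based on how often a reporter's past tickets led to enforcement, which is what keeps the fast track from becoming just another channel to flood.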

10. Seek Advice From Specific Communities

Twitter trolls are not a unified army but a fragmented jumble of bad actors targeting different people for a variety of reasons, sometimes in ways that are hard to pick up unless you know what to look for. Experts on specific cultures–including folks who have been harassed on Twitter themselves–could help the company figure out how best to direct its safety efforts. “Twitter seems to ignore/antagonize its user base, sometimes, on some problems,” says Tablet Magazine senior writer Yair Rosenberg, whose Imposter Buster bot took on anti-Semitic trolls until Twitter banned it. “But it could very easily create channels of communication with different groups of users and different parts of the social media sphere, who could give them tips.”

11. Let Users Help Define Accountability Metrics

Amnesty International’s Azmina Dhrodia wants the company to implement enhanced systems for its users not only to track reports of hateful conduct and abuse, but also to more clearly understand the company’s responses to incidents. She hopes part of this system will include an option for users to rate their satisfaction with the company’s response to abuse reports, as well as to share feedback on the decision. This could help users feel more included in the process and could also yield new metrics for measuring safety performance.

12. Defend Against Abuse Planned Off-Platform

A number of sources felt that Twitter was far too reactive and narrowly focused on protecting users within the confines of its own service. Yet many incidents of harassment get their start on other platforms, with trolls coordinating their targets and attack approach (hashtags, memes, etc.) on, say, public forums or Reddit before pulling the trigger on Twitter. The company ought to do more to investigate this type of abuse planning, if only to give its safety teams a better sense of what offensives might be just around the corner.


13. Engage With The Competition

Given that much of the misbehavior on social networks spans multiple platforms, it would also behoove Twitter to participate in an ongoing dialog with its rivals for consumers’ attention. That would allow all of these companies to share best practices in the war against trolls and bots–and given that it’s increasingly clear that Twitter is hardly alone in the difficulties it’s faced in dealing with this mess, everybody involved has strong incentive to be part of the conversation. A technological generation ago, major tech companies banded together to fight email spam, and it helped.

14. …But Make Transparency A Competitive Advantage

Dorsey has said that the company wants to collaborate–rather than compete–on addressing safety issues in social media. While this is the right philosophy when it comes to developing new defenses against digital pollution, it’s likely the wrong approach when it comes to transparency regarding both Twitter’s current progress on safety and its past lapses in this area. It’s a bad strategy to follow the lead of other tech companies when it comes to openness with the public–Senator Mark Warner even went so far as to tell us that Twitter was “drafting” off Facebook on this front, which is alarming considering that Facebook has set a low bar for transparency. Let Silicon Valley collaborate on safety, but transparency ought to be another field of competition.

15. Institutionalize Processes For Suspensions

As reported in our feature, Dorsey has been increasingly injecting himself into decisions about who and what to allow on Twitter’s platform. While this more aggressive policing is a step in the right direction, some sources feel his direct involvement could raise questions about how much his personal biases play into content decisions. Twitter has long struggled with this dilemma: Dick Costolo, Dorsey’s predecessor, ultimately made the call to permanently ban alt-right troll Chuck Johnson from the service, resulting in a lawsuit against the company earlier this year. The more Twitter can institutionalize a fair and transparent system for suspensions, the safer the company will be in the long term.

16. Commit To A No-Holds-Barred Transparency Report

Some sources involved in producing these annual reports have suggested that they have largely been inconsequential in years past. One source says they have been published “to pat themselves on the back. It was kind of useless information–like number of takedown requests from governments–which you couldn’t really dig down into for any wider context. It was like showing health stats for people with bad teeth or heart rates, but only showing that they’re all [consuming enough] Vitamin C. It just emphasized the wrong things.”

17. Share More Data With Third-Party Researchers

Amnesty International technology researcher Azmina Dhrodia believes there could be major upside to Twitter opening up its metrics–even if they aren’t always pretty–so experts and academics can assist in developing possible solutions, while also holding Twitter accountable. In particular, Dhrodia wishes the company would at least become more transparent around data on response times to abuse reports.

18. Don’t Send Abuse Down The Memory Hole

Twitter should delete tweets that engage in harassment and deception from users’ feeds. But instead of simply eradicating them, it should preserve them to share with researchers. It’s difficult, after all, to assess a problem and its solutions if you don’t have access to real-world examples.

19. Listen More To Twitter’s International Offices

Facebook and Google have traditionally looked abroad to figure out which new technology problems might eventually arrive in the U.S., so they can develop countermeasures before those problems reach our borders. But in many cases, Twitter has ignored or tolerated glaring warning signs in international markets, only to be forced to deal with similar problems stateside once it’s too late. Throughout our reporting, we heard a near-constant plea from those who have worked in overseas offices for the company’s Bay Area-based leadership to pay more attention to the red flags in their markets–and not just react to comparable problems once they affect high-profile American users and celebrities.
