
A Snapshot Of How Twitter Deals With Online Harassment

A nonprofit called WAM spent three weeks fielding harassment reports to Twitter. Here’s what it learned.

[Photo: Flickr user Adam Baker]

Last year, a two-person nonprofit advocacy organization called Women, Action, & the Media (WAM!) announced it would collaborate with Twitter to address online harassment. You may have heard about this: By WAM’s count, the announcement generated more than 204 stories in 21 countries.


It was widely misunderstood as a major partnership with Twitter; both sides agree it was actually more of an experiment. For three weeks in November, WAM became an “authorized reporter” for Twitter, which meant that the group could identify and report abusive content on behalf of other people. It accepted reports of harassment through an interface on its website, escalated to Twitter the reports it believed had merit, and used the opportunity to better understand online harassment reports and how the social network responds to them.

Throughout the experiment, WAM reviewers received 811 reports of harassment. They escalated 161 of those reports to Twitter, which responded by suspending 70 accounts, handing out 18 warnings, and deleting one account.


This week, WAM released its analysis of the experiment, which provides a rare snapshot of the abuse people report to Twitter and of how the social network responds to it.

Here’s what WAM learned:

The majority of people who reported harassment did so on behalf of someone else. About 57% of the reports that WAM received came either from bystanders who witnessed someone else being harassed and reported it, or from delegates, such as an attorney or family member, who reported harassment on behalf of the person being harassed. In February, Twitter changed its policies to allow bystander reports of impersonation and doxxing.

Most people who reported harassment had been harassed before. About 67% of them said they had notified Twitter about harassment at least once before.


Gamergate made up only a small percentage of reports of online harassment. Though the Gamergate controversy has been one of the most visible online harassment stories in the mainstream media over the past year or two, only about 12% of the 512 alleged harassing accounts reported to WAM could be linked to it.

Some people reporting harassment were also reported for harassing others. This happened in 27 cases. “In some cases, receivers of harassment may also be engaging in activity that might constitute harassment,” the report reasons. “In other cases, receivers of harassment may be subject to ‘false flagging’ campaigns that attempt to silence the harassment receiver through bad faith reports.”

Twitter took action in more than half the cases. Of the 161 reports WAM referred to Twitter, the company took action on 89 (55%) by deleting, suspending, or warning the reported accounts.

Twitter was unlikely to delete an offending account. Twitter deleted just one account in response to the 161 reports. It was far more likely to suspend the account (which it did 43% of the time) or to deliver a warning (11% of the time).


Twitter did not favor more established accounts. WAM found no relationship between the age of an account or its number of followers and Twitter’s actions.

The 811 reports of online harassment that WAM gathered make for a relatively small sample, and a self-selected one at that. But given the dearth of information coming from Twitter and other technology platforms about how they handle harassment, it’s better than nothing. At the least, the experiment allowed WAM to point out a few places where Twitter could improve its processes, including:


Finding a better way to handle doxxing. The practice of publishing personal information like phone numbers and addresses was the second-most reported form of harassment, after hate speech. Twitter took action in only 7 of the 20 reported doxxing cases (35%), compared with a 60% action rate for hate speech. The social network explicitly banned doxxing in March, but the problem may be that many harassers who post personal information on Twitter remove it before the company has time to act. The information has still been released, and the damage is still done, but the evidence is (sort of) gone.

Building better reporting tools. Twitter currently requires people who report abuse on its platform to provide URLs to harassing tweets. If a person is being harassed on Twitter but not through a tweet (say, by a pornographic profile photo or a username), there’s no way to report it. And because Twitter doesn’t accept screenshots as evidence, harassing content that is deleted before Twitter can review it effectively can’t be reported at all.

Twitter already knows it has work to do in dealing with online harassment. Twitter’s CEO, Dick Costolo, summed it up in a memo that leaked in February. “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” he said, promising, “We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.”

At that point, the company had already started making small changes to its online harassment policies. In December, for instance, it had announced a more streamlined way to flag abusive tweets and begun allowing bystanders to report abuse. Since then, it has announced more policy changes. Later in February, it said it had tripled the size of its abuse support team (Twitter would not say how big that team is). In March, the company officially banned revenge porn. It also improved its features for reporting threats to law enforcement and gave verified accounts a filter designed to catch abusive tweets.

“We’ve never said, okay, we’re done. Our policies are set. Everything is perfect,” Twitter’s head of trust and safety, Del Harvey, told me in April. “We’ve always been saying we have to keep improving and iterating on this stuff.”


About the author

Sarah Kessler is a senior writer at Fast Company, where she writes about the on-demand/gig/sharing "economies" and the future of work.
