Tech companies like Facebook, Google, and Twitter employ thousands of people to moderate violent and disturbing content on their websites. But neither legions of moderators nor sophisticated machine learning algorithms can curb the rampant harassment on their platforms. Abuse is a particular problem at Twitter, a subject Fast Company's new cover story explores in great detail.
Amy Zhang, a Ph.D. student at MIT’s Computer Science and Artificial Intelligence Laboratory, has studied online harassment in detail. She’s found that many victims take an old-fashioned approach to surviving the flood of violent, racist, and sexist comments that come pouring into their inboxes: they ask their friends for help.
That’s because their friends often understand the context behind the harassment. For instance, an algorithm or a human moderator might not flag a word that has recently emerged as an epithet against a marginalized group, but a friend, who is likely part of the victim’s community, would recognize it. Zhang interviewed one victim whose ex-partner would send her inflammatory messages timed to important business meetings, a pattern a friend would be far more attuned to than any algorithm. Zhang found that some people who are harassed give friends their account passwords and ask them to clear the worst messages from their inboxes; others ask a friend to read the positive comments on a video aloud and skip the negative ones; still others ask friends to report particular users to Twitter.
Zhang wondered: could she formalize that behavior, making the onslaught of harassment easier for victims to manage and simpler for their friends to help with?
The result is Squadbox, a tool that routes a harassment target’s incoming email through a squad of friend-moderators. The system is designed to work best with a group of moderators who tag-team the task. As emails come in, they’re distributed evenly among the moderators, who get notifications that something is waiting in their queue. Then, whenever they feel up to tackling the hurtful things someone is saying about their friend, they can go to Squadbox’s site and review the messages. Squadbox is also meant to be flexible, so that anyone can adapt the basic idea of friend-moderation into whatever makes handling harassment most manageable for them.
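To make that workflow concrete, here is a minimal sketch in Python of the even, round-robin distribution described above. Every name in it (Moderator, Squad, receive, notify) is a hypothetical illustration rather than Squadbox’s actual code, and the print-based notification stands in for whatever alerting the real system uses.

```python
from collections import deque
from dataclasses import dataclass, field
from itertools import cycle

# Hypothetical sketch of friend-moderation with even distribution;
# none of these names come from Squadbox's actual codebase.

@dataclass
class Moderator:
    name: str
    queue: deque = field(default_factory=deque)  # messages awaiting review

    def notify(self, message_id: str) -> None:
        # Stand-in for a real notification (e.g., an email or push alert).
        print(f"{self.name}: new message {message_id} in your queue")

class Squad:
    """Distributes incoming messages evenly across a group of friend-moderators."""

    def __init__(self, moderators: list[Moderator]) -> None:
        self.moderators = moderators
        self._next = cycle(moderators)  # round-robin over the squad

    def receive(self, message_id: str, body: str) -> None:
        moderator = next(self._next)       # pick the next moderator in rotation
        moderator.queue.append((message_id, body))
        moderator.notify(message_id)       # tell them something is waiting

# Usage: three friends tag-team one inbox.
squad = Squad([Moderator("ana"), Moderator("ben"), Moderator("chris")])
for i, text in enumerate(["hello", "abusive message", "meeting follow-up"]):
    squad.receive(f"msg-{i}", text)
```

The round-robin rotation is one simple way to realize the article’s “distributed evenly” description; a real system could just as well balance by current queue length or moderator availability.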
“So far we’ve demoed it to several people who have been harassed, including some really high-profile harassment targets,” Zhang says. “They’ve been overwhelmingly positive.”
For the time being, Zhang is focused on building out Squadbox itself. Though people get harassed across all kinds of services, right now Squadbox handles only email. She says the structure could extend to Facebook and Twitter, depending on their APIs; the next step is simply to build it.