“I have the perfect kitten,” Twitter’s Director of Trust and Safety Del Harvey says, holding her empty cupped hands out for effect. In this analogy, the kitten is a new product, and she is playing the role of its designers and engineers.
“It’s hypoallergenic,” she continues in a cooing voice, “it doesn’t need food or water, it’s never going to get old, it doesn’t even have to use the bathroom. This is the perfect kitten. It’s cuddly, it loves you soooo much. It will live forever.” I’m just about ready to adopt when her voice returns to its normal tone, “Then you turn the kitten over and you say, ‘did you notice that this kitten can be used to shoot bullets?’ And they’re like, ‘Well, yeah, but why would anyone shoot bullets with it? It’s a kitten.’ ”
“And the thing is that there is always someone who will use that kitten to shoot bullets.”
Harvey is a perfectly sane, seemingly un-feline-phobic person whose job, to continue the analogy, is to jam deadly kittens’ bullet barrels. When porn shows up as an editor’s choice in Vine, when Jihadists or harassing Olympic swimming fans decide to use Twitter for something illegal, or when law enforcement demands information about a user, it’s her team of about 35 people that deals with it.
For Harvey and her team, solutions aren’t always clear-cut, especially when it’s law-enforcement who comes knocking at Twitter’s doors for information.
“No one wants a pen that’s going to rat them out,” Alexander Macgillivray, Twitter’s chief lawyer, explained to the New York Times. “We all want pens that can be used to write anything, and that will stand up for who we are.” That means walking a fine line between keeping activity on the platform legal and spam-free on one side and protecting user content on the other.
Last year, governments, law enforcement agencies, and courts sent Twitter 26 requests to remove information believed to be illegal; the company withheld just one account and 44 tweets. Of 1,009 requests for information about users, it complied with about 57%. Twitter fought New York City courts before giving up tweets from an Occupy Wall Street protester and made public the Department of Justice’s request for information about Wikileaks supporters.
Legal policy isn’t the only gray area. Linking to porn without going on a spam spree? Probably okay. Putting porn in your avatar? Against Twitter’s policies. Posting it on Twitter’s new video app, Vine? It’s really too early to say. “They’ve screwed the legs on the kitten and there’s a tail,” Harvey jokes, “but the whole mid-section is still missing…there’s going to be a lot of stuff we’ll be doing.”
Even in the spam arena, where many decisions are made by algorithms, there are few simple rules. If you @-reply a million random users with a link to a Viagra site, and they all block you, that might be considered spam. If you tweet links to a Viagra site to people who are having a conversation about Viagra, and they say thank you, it might not. Sometimes behavior that looks spammy at first glance isn’t. An account called Your Mom Bot, for instance, @-replies Twitter users with a “your mom” variation on their latest comment, but it hasn’t been suspended because users don’t react to it like spam. Meanwhile, flagging mobs try to kick users they don’t like off the site by blocking them en masse.
Over the years, learning algorithms have helped sort these behaviors out.
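The reaction-based distinction described above can be sketched as a toy heuristic. Everything here is hypothetical and illustrative, not Twitter’s actual system: the signals (how recipients of @-replies react, blocking versus thanking), the function names, and the threshold are all assumptions for the sake of the sketch.

```python
# Hypothetical sketch of reaction-based spam scoring.
# Signals, names, and thresholds are illustrative, not Twitter's.

def spam_score(replies_sent: int, blocks_received: int, thanks_received: int) -> float:
    """Return a score in [0, 1]; higher means more spam-like behavior."""
    if replies_sent == 0:
        return 0.0
    block_rate = blocks_received / replies_sent    # strong negative reaction
    thanks_rate = thanks_received / replies_sent   # strong positive reaction
    return max(0.0, min(1.0, block_rate - thanks_rate))

def looks_like_spam(replies_sent: int, blocks_received: int,
                    thanks_received: int, threshold: float = 0.5) -> bool:
    return spam_score(replies_sent, blocks_received, thanks_received) >= threshold

# A mass @-reply campaign that gets widely blocked scores high:
print(looks_like_spam(1_000_000, 900_000, 10))   # True
# On-topic links that recipients thank you for score low:
print(looks_like_spam(50, 0, 30))                # False
```

The point of the sketch is the one Harvey makes: the same action (tweeting a Viagra link) is judged by how people on the receiving end respond, not by the content alone.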
It wasn’t always this complicated. When Harvey started in 2008, she was the trust and safety department’s only employee, and she spent much of her time manually investigating individual blocked accounts.
Her experience at the time included administering psychological evaluations to reality TV show contestants, lifeguarding at a state mental institution, and, most famously, working at the nonprofit that lured sexual predators off the Internet and on camera for NBC’s To Catch A Predator series. While she had written protocols and worked with law enforcement in the latter role, she hadn’t worked in general Internet security.
Taking a cue from Vanilla Ice (her background summary on LinkedIn begins, “If there was a problem, yo, I’ll solve it / check out the hook while my DJ revolves it”), Harvey says she grew with the job by doing whatever needed to be done. When investigating spam accounts one by one became unrealistic, she lured engineers from other corners of Twitter to build basic internal tools. She wrote protocols for dealing with multiple terms-of-service offenses, introduced trademark policies, and greeted Twitter’s first lawyer with a handful of procedures already in place. Eventually, she recruited a team of engineers to automate some anti-spam and safety issues, and they reported to her until this year.
Twitter is still building its trust and safety team. Recently it paired product managers with trust and safety representatives in hopes of building better bulletproof kittens. The company’s job listings now feature positions for engineers who can build a multi-step verification process. And sorting out that new kitten, Vine, which recently got slapped with a 17+ rating in the App Store, has just begun.
“I’ve never thought we’re done with Twitter,” Harvey says. “Every time you’re like, ‘OK, done with the policies, written the best ones ever, we’re never going to have to change or evolve. We’re the best,’ you are saying, ‘Dear world, send me a shit storm.’”
Correction: A previous version of this article referred to Del Harvey’s title as head of safety and security. She is the director of trust and safety.
[Photo Mash: Joel Arbaje]