Startup Report

Whisper's Master Of Content Moderation Is A Machine

How the anonymous social network used deep learning to teach "the Arbiter" to spot material that's mean, gross, and/or illegal.

[Photo: Flickr user Erik]

When people are granted anonymity on the Internet, does that make them more likely to behave in a way that's sincere, constructive, and uplifting, or trollish, disgusting, or even dangerous?

You could spend hours debating that question, but one thing is pretty clear: even if the nice anonymous people dramatically outnumber the sketchy ones, a few troublemakers can ruin it for everybody. (Hey, that's often precisely what they're trying to do.)

That means that any app or site that chooses to permit anonymous use must decide how it wants to deal with material that's hateful, abusive, or otherwise inappropriate. In the case of Whisper, the goal is not simply to delete such stuff once it's appeared but to inspect every item before it goes live, and prevent anything that's unacceptable from appearing in the first place.

The bigger the service grows—it went from 10 million to 20 million monthly active users between April and December of 2015—the more challenging that gets. But the company has a secret weapon: the Arbiter, a piece of software that uses the artificial intelligence techniques known as deep learning to moderate content in the same way a human would, only faster and at far greater scale.

On Whisper, "the golden rule is don't be mean, don't be gross, and don't use Whisper to break the law," says the company's chief data officer, Ulas Bardak, who spearheaded development of the Arbiter along with data scientist Nick Stucky-Mack. That's not a philosophy that you can boil down to a simple list of banned words. The Arbiter is smart enough to deal with an array of situations, and even knows when it's not sure if a particular item meets the service's guidelines.

Whisper CEO Michael Heyward [Photo: Brian Ach, Getty Images for TechCrunch]

Where Nobody Knows Your Name

Whisper falls into the same broad category of anonymous social networks as Yik Yak and the short-lived Secret. People can post brief messages without saying who they are—on Whisper, each item is overlaid on an image—and the app uses factors such as location to determine which users see which messages. There's also a private-chat feature.

Like other services enabling anonymous communications, Whisper says that stripping away real-world identity makes it a force for good. "When you think about what we're trying to do, we're trying to create a place where people can be authentic," explains CEO Michael Heyward, who cofounded the Los Angeles-based company in 2012. "Say you're a 19-year-old kid in Provo, Utah. Maybe you have a very religious family and want to say something like, 'I'm scared to come out of the closet because I'm afraid my parents won't love me anymore.' Or you're a 32-year-old mother who says, 'Wow, my kids never stop crying. I can't deal with being a parent. Is this what it's always like?'"

On Facebook, where every post is tied to a name and users tend to be followed by friends and family, Heyward says, such people might stay mum rather than confiding. On Whisper, they can express themselves without fear of repercussions, and get advice and support from fellow Whisperers.

A heartfelt Whisper

Of course, given the opportunity to express themselves without fear of repercussions, some people will bully others, discuss topics that Whisper doesn't want on its service (such as the pro-anorexia chatter and imagery known as thinspiration, or thinspo), engage in hoaxes, or otherwise go to dark places. Heyward readily acknowledges that: "Anonymity is like a hammer. If we didn't have hammers, we couldn't build temples and schools and all these great things. But you can also kill someone with a hammer."

By stomping out bad stuff, he adds, Whisper is also helping to encourage the posting of even more good stuff: "If you have a positive community, it becomes exponentially more positive. Monkey see, monkey do—that's what people do."

Whisper has sometimes come under criticism for not thoroughly policing its content: For instance, it was once possible not only to post about thinspiration but to search for it, potentially making the service a magnet for such material. But Heyward says the service has never wanted to be a free-for-all. "We had people moderating from day one, literally the first minute the service went live. It was a huge waste of money. People were sitting around, because there were no Whispers yet. Nobody was on the service except me and my mom."

People Plus Processing Power

Today, Whisper's moderation effort has grown to include a team of 100 based in the Philippines, who check content using over 100 pages of documentation about what's acceptable on the service and what isn't. That may sound like a lot of people, but it's tough for purely manual moderation to keep up with Whisper's users, who open the app a million times an hour and rack up 10 billion views a month.

The Arbiter's high-powered hardware

So since the fall of 2015, much of the heavy lifting of Whisper moderation has been performed by the Arbiter. "The dictionary definition for 'arbiter' matches what we're trying to do," says Bardak. "It's someone who has the power to decide things, like an umpire or a judge. Also, it's a pretty cool word."

The Arbiter runs on, essentially, an incredibly high-octane PC with 128GB of RAM and four Nvidia GeForce Titan X graphics cards, each of which packs a graphics processing unit with 3,072 computing cores and 12GB of its own RAM. More than the machine's CPU, it's those GPUs that give the Arbiter the mathematical muscle to run its neural networks on the fly, assessing incoming Whispers as they arrive.

"Basically, it's a kind of a beast designed for deep learning, and deep learning only," Bardak says. "Technically, you could play really good games on it, but that's not what we're going to do."

To train the Arbiter, the company crunches vast quantities of Whispers that have been moderated by its human team. Each one is a case study of sorts, from items that are obviously appropriate or inappropriate to ones that, for one reason or another, are edge cases. En masse, they teach the Arbiter about the decisions that moderators have made in the past.
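Whisper hasn't published the Arbiter's code, but the shape of that training step can be sketched. The toy PyTorch model below, with invented example Whispers and labels, stands in for the deep networks the company trains on millions of real moderator decisions; every name and number in it is an assumption for illustration.

```python
# A minimal sketch, not Whisper's code: past moderator verdicts become
# labeled examples for a binary "acceptable or not" text classifier.
import torch
import torch.nn as nn

# Each example pairs a Whisper's text with a human moderator's decision
# (1 = approved, 0 = deleted). These two are invented for illustration.
examples = [
    ("you all are wonderful, stay strong", 1),
    ("thinspo goals, skip dinner tonight", 0),
]

# Toy featurizer: a fixed-vocabulary bag of words. Real deep-learning
# systems learn word representations rather than hand-building them.
vocab = sorted({w for text, _ in examples for w in text.split()})

def featurize(text: str) -> torch.Tensor:
    words = set(text.split())
    return torch.tensor([1.0 if w in words else 0.0 for w in vocab])

X = torch.stack([featurize(text) for text, _ in examples])
y = torch.tensor([float(label) for _, label in examples])

# A small feed-forward network; the loss pushes its output to match the
# human verdicts, so the model learns to mimic past moderation calls.
model = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()
```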

As the Arbiter moderates new Whispers, it attempts to determine whether each one is acceptable or not and rates its own level of confidence about its analysis. "If the probability that it's okay is above the threshold, it gets approved," Bardak says. "If the probability that it's not okay is above the threshold, it gets deleted. If it's in the middle, the Arbiter passes the Whisper to the human team to pass the final judgment. It makes decisions in milliseconds, sometimes less."
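That three-way split is straightforward to express in code. The sketch below assumes hypothetical thresholds and a generic scoring function, since Whisper hasn't disclosed either; it only illustrates the approve/delete/escalate routing Bardak describes.

```python
from typing import Callable

APPROVE_AT = 0.95  # hypothetical cutoff: minimum P(okay) for auto-approval
DELETE_AT = 0.95   # hypothetical cutoff: minimum P(not okay) for deletion

def route(text: str, p_okay: Callable[[str], float]) -> str:
    """Route a new Whisper based on the model's confidence."""
    p = p_okay(text)  # model's estimated probability the Whisper is okay
    if p >= APPROVE_AT:
        return "approve"       # confidently fine: goes live immediately
    if (1.0 - p) >= DELETE_AT:
        return "delete"        # confidently unacceptable: never seen
    return "human_review"      # the middle band: a moderator decides
```

Setting both cutoffs high keeps the automated calls conservative; everything in the uncertain middle band lands with the Philippines-based team.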

Because the Arbiter can do its work so swiftly, it helps make Whisper's moderation process invisible in a way that would be difficult to replicate without automation. "We're not only focused on bad stuff getting removed, but that bad stuff doesn't get seen," Heyward says. "If a post we delete has a single view, we've made some sort of error."

Some Whispers of the sort that the service is not trying to moderate into oblivion

Fighting Words

It's no shocker that certain words and phrases are warning signs that a Whisper might be questionable. The Arbiter has its eye out for over a thousand of them: If an item includes "thinspo," "fag," or "white power," it's obvious fodder for deletion. In some cases, however, a human moderator may approve a Whisper for publication if someone uses such a term in a context that does not suggest endorsement.
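As a rough illustration only, that first pass might look like the check below; the two terms shown come from the article's own examples and stand in for the full list of a thousand-plus flagged words and phrases, and the routing labels are invented.

```python
# Illustrative keyword screen, not Whisper's actual list or logic. A hit
# makes a Whisper an automatic candidate for deletion, though a human can
# still approve a use of the term that doesn't endorse it.
WATCH_LIST = {"thinspo", "white power"}

def screen(text: str) -> str:
    lowered = text.lower()
    if any(term in lowered for term in WATCH_LIST):
        return "delete_or_human_review"  # context decides the final call
    return "continue_to_model"  # no keyword hit; the Arbiter's model judges
```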

This Whisper is a no-go

On its own, the Arbiter is capable of identifying acceptable and unacceptable Whispers based on evidence that goes beyond use of specific keywords. There's no blanket ban on mention of Hitler, for instance. You just can't express a positive stance on him—and the Arbiter is able to draw this distinction.

Thanks to big data, the software also knows about words and phrases which, though not obviously horrifying, are signs of trouble. For instance, after analyzing millions of Whispers and how users engaged with them, the company concluded that ones that use the word "horny" are typically low in quality and have issues beyond the simple use of a particular term. So it now nixes items that use the word, even though it's by no means trying to stomp out all discussion of sex.
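One way such a signal might emerge, sketched here with invented data: average an engagement-based quality measure over the Whispers containing each word, then flag recurring words whose posts score consistently low. Everything in this snippet, from the texts to the scores to the cutoff, is hypothetical.

```python
from collections import defaultdict

# (text, quality score) pairs; the scores are invented stand-ins for the
# engagement signals Whisper analyzed across millions of posts.
whispers = [
    ("feeling horny and bored", 0.10),
    ("missing my grandma so much today", 0.90),
    ("so horny lol message me", 0.05),
]

totals, counts = defaultdict(float), defaultdict(int)
for text, score in whispers:
    for word in set(text.split()):
        totals[word] += score
        counts[word] += 1

# Flag words that recur and whose Whispers score poorly on average.
trouble_words = {
    w for w in totals
    if counts[w] >= 2 and totals[w] / counts[w] < 0.2
}
print(trouble_words)  # {'horny'} on this toy data
```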

Then there are the words whose meanings are so slippery that they could signal almost anything. Describing a movie as a bomb is fine. Saying you plan to bring a bomb to school is not. Even without help from its human colleagues, the Arbiter is able to sort through such Whispers without generating massive quantities of false positives or false negatives. "The misclassification rate is actually lower than a human moderator," Bardak says.

Because the software's knowledge reflects both the real things that millions of Whisper users have said and how moderators handled them, its understanding of language can be remarkably subtle. As Bardak explains, "computers generally have trouble with detecting sarcasm, but in this case, we have enough examples."

Another Whisper which never got approved

And does anyone attempt to sneak undesirable content onto Whisper by deliberately avoiding specific words and phrases that are likely to get an item deleted? Sure. "People are very creative," Bardak says. "They will try different things. Fortunately, we have enough examples of people being creative that they're represented in the data set."

In its first few months of operation, the Arbiter has had a huge impact on how Whisper moderates itself. But even though there's plenty of opportunity to fine-tune it over time, Whisper has no plans to eliminate the human touch in moderation altogether. After all, the Arbiter is effective only because it bases its decisions on those of human moderators, which is why the company is continuing to shovel data from human-moderated Whispers into the software's knowledge bank.

"There's always going to be a hybrid approach," says Heyward. "The truth is, the way we use people today is very different from the way we used them a year ago or six months ago." With the Arbiter humming along and handling much of the grunt work, the humans can focus more on the material that isn't an easy call. And maybe Whisper will be able to pull off the not-so-easy feat of improving the quality of its content even as its community continues to grow.