
Post-Charlottesville, apps and media services are clamoring to zap hate speech. But none of them will tell us how.

Can We Trust Dating Apps And Music Services To Police Hate Speech?

[Source Photo: Flickr user Ted Eytan]

By John Paul Titlow

In the aftermath of last weekend’s neo-Nazi violence in Charlottesville, we’ve watched tech companies scramble to quash hate speech online. And it’s not just infrastructure-level providers like GoDaddy, Cloudflare, and Google. Media services and social apps are declaring their own war on offensive sentiments within their respective platforms as well.

Spotify, Deezer, Google, DistroKid, and CD Baby all released statements last week vowing to pull hateful music from their services. So did dating apps like OKCupid and Bumble, each of which announced an approach to dealing with hate speech on their platforms.

While few people can muster legitimate objections to pulling down content laced with violently racist and otherwise hateful rhetoric, some are sounding the alarm about the long-term implications of handing this kind of authority over to tech companies whose standards and methods for policing hate speech are not always fully disclosed to the public. Whether they’re using algorithms, human moderators, or some combination of the two, the inner workings of these systems are often shrouded in mystery. How is offensive speech defined? Who makes the call to pull content, and on what criteria do they base those decisions? Are they humans or machines? If human, what do those teams look like, and how are they trained? What guidelines are used in the process?

Fast Company asked several companies about their internal processes for policing hate speech and got a few variations of the same answer: We can’t tell you.

Unlike First Amendment issues in the public square or in the traditional media, hate speech and censorship online are not governed by legal precedents or a set of centralized rules. Rather, each platform and service provider sets its own policy and establishes its own system for dealing with hate speech. This, digital free speech advocates warn, could become a problem in the future.


Related: Could The Tech Purge Of Hate Speech Backfire And Harden The Views Of Extremists?


“Maybe there is room to develop more granular guidelines about types of content and where platforms should have a responsibility, if anywhere, to deal with content of these kinds,” says Jeremy Malcolm, a senior global policy analyst at the Electronic Frontier Foundation. “Right now, it’s all very ad hoc. It often depends on who’s at the desk that day. It’s not a conducive environment to free speech, at the end of the day.”

In a recent blog post co-authored by Malcolm, the EFF criticized moves by Cloudflare, Google, and GoDaddy to shut down white supremacists online—not out of deference for the viewpoints of sites like The Daily Stormer, but rather because of the “dangerous” precedent set by companies quashing unpopular speech at the infrastructure level of the internet. The same mechanics used to wipe neo-Nazis from the web, EFF reminds us, could just as easily be used to stifle nonviolent speech in the future.

Drilling down further from the internet’s infrastructure into social networks, music services, and dating apps, the potential for heavy-handedness or blowback is no less serious, according to the EFF.

We’ve already seen the sometimes blurry lines between hate speech and legitimate expression confuse the algorithms and human moderators tasked with keeping apps and networks free of hate. Facebook famously blocked the historical photo of a Vietnamese child running from a napalm attack and the video of Philando Castile being killed by a police officer. It has also mistakenly shuttered the accounts of LGBTQ users (for using words like “dyke” to describe themselves, for example) and women of color who shared screenshots of racist harassment. On YouTube, videos of the U.S. military destroying Nazi monuments during World War II were taken down for violating its hate speech policy. And in its quest to eliminate ISIS recruiting videos and other extremist propaganda, YouTube has given the ax to content with historical and legal value, like videos that document war crimes in the Middle East.


Related: What Facebook Considers Hate Speech Depends On Who Is Posting It 


With so many examples like these—and so little information about how these companies’ systems for dealing with hate speech actually work—can the public really trust tech companies to effectively police the sentiments found in art and written posts online?

Policing Music With Opaque Standards

Hate speech can take many forms, from antisemitic slogans shouted in the streets of Charlottesville to Islamophobic epithets tweeted by people with frog avatars to Hitler-worshipping, violence-inciting lyrics shouted by an underground hardcore band. But not every example is quite as straightforward as these, especially in a medium as nuanced and artistically open-ended as music.

Last week, Spotify pulled down music by white power and neo-Nazi bands identified in a post published on Digital Music News (with some help from the Southern Poverty Law Center) and pushed out its own playlist called Patriotic Passion in response to events in Charlottesville. Before long, Deezer, Google, and CD Baby followed suit and zapped white supremacist music from their catalogs. Bandcamp, the artist-uploaded DIY music storefront, told Fast Company that it saw a “small lift in reported accounts” that were dealt with in accordance with Bandcamp’s longstanding policy against hateful content, which the company says it has always enforced.

Bigger, subscription-based platforms like Spotify and Google’s music services seemed to engage in a game of white supremacist whack-a-mole—what appeared to be a sudden, knee-jerk response to the fallout from Charlottesville. While most of these companies have long had policies against hate speech in place, they weren’t aggressively enforcing them. In 2014, Apple removed from iTunes artists whose music spread “white power” messages, following pressure from the SPLC. At the time, Vice’s Noisey wondered why Spotify, Google, and Amazon weren’t doing the same.

But how do we know the content-quashing processes being used aren’t sweeping up non-hateful music? Are the guidelines applied broadly and fairly? How do these companies preempt accusations of double standards—or worse, avoid having a chilling effect on freedom of expression?

“Art is very different than other types of speech,” Malcolm says. “The courts will treat art differently. For example, pornography is treated with a bit more latitude when it’s in an artistic context. You can say the same for music. Music that expresses violent thoughts in lyrics doesn’t mean it should be treated like a blog post declaring you want to kill.”

When asked how it evaluates hate speech in music, a Bandcamp rep says that it relies on its community to flag hateful content for internal review and that it’s “usually pretty obvious” when songs violate its policies. For an online community as close-knit and progressive as Bandcamp’s, that approach may be enough.

Reps from Spotify and Deezer were equally vague in their descriptions of how content is evaluated. Spotify forbids “content that favors hatred or incites violence against race, religion, sexuality, or the like,” a company spokesperson says over email. Spotify’s internal review process, the rep explains, relies on lists of forbidden content maintained by bodies like Germany’s Federal Review Board for Media Harmful to Minors (or BPjM, an acronym of its German name). The BPjM’s index, a controversial catalog of media deemed harmful to young people in Germany, is not published, so there’s no way to know exactly what’s on it. Last year, the German industrial metal band Rammstein sued the country’s government after being included on the index. While its primary objective is to blacklist violent, racist, and other inappropriate media, the BPjM has been criticized for de facto censorship and the stifling of free speech. Spotify says it also uses data from the Southern Poverty Law Center to identify hate music, ultimately relying on human moderators to judge and take down songs.

Spotify explicitly bans any music that “is in clear violation of our internal guidelines, which includes content that incites hatred or violence.” We asked the company for details on these guidelines, as well as information about who determines which music is in violation of them. They declined to clarify.

How are these questionable tunes identified in the first place? In the case of last week’s white supremacist music takedown, Spotify was tipped off by a blog post on Digital Music News. But there does not appear to be an easy way for the general public to flag hateful music for review. When asked how users can flag objectionable music, a Spotify rep declined to comment.


So, other than an overarching prohibition on music that “incites hatred or violence” and that draws guidance at least in part from Germany’s BPjM media index, we know next to nothing about how Spotify finds, evaluates, and removes music that is purported to encourage hatred and violence.

The policy raises bigger questions about the parameters and limitations of Spotify’s music-zapping machinery. Perhaps most obviously, there’s a lack of detail about whatever line may exist between “inciting” hatred and violence and simply referencing those things.

“Some genres of music have inherently violent lyrics,” Malcolm says. “They tend to be thematically about that darker side of life. That doesn’t mean that they’re violent people.”

There are various sub-genres of heavy metal and hip-hop, for instance, laden with lyrics that most of us would agree are violent and even potentially hateful. Would some of the more graphic, potentially threatening verses from popular metal bands like Cannibal Corpse, Slipknot, and Slayer run afoul of Spotify’s restrictions and risk getting pulled? What about violent rap lyrics by Eminem or N.W.A.? Or verses by militant leftist rap duo Dead Prez that call for white politicians to be assassinated?

Each of these examples (not to mention countless others) may well fall under some exception to Spotify’s content guidelines, but at the moment its broadly worded policy and lack of public details offer no indication one way or the other.

At Deezer, a content team “reviews our catalog deeply and listens to the music to make sure there is no direct hateful speech within the flagged content,” David Atkinson, head of label relations at Deezer, told us via email. “We do not condone any type of discrimination or form of hate against individuals or groups because of their race, religion, gender, or sexuality, especially any material that is in any way connected to any white supremacist movement or belief system.”

Again, this policy comes closer to offering a clear roadmap (Deezer at least calls out white supremacist ideologies specifically), but details about its parameters and how they’re enforced are just as elusive as in the other examples. And as the EFF is keen to point out, these solutions may satisfy us amid the anti-Nazi fervor, but we have no way of knowing how they’ll be implemented in the future.

Swiping Left On Hate: Where Do Dating Apps Draw The Line?

Things can get even murkier when it comes to dating apps. Services like Tinder, OKCupid, and Bumble have plenty of experience dealing with harassment and hate speech (indeed, Bumble itself was born out of a desire to make dating apps less hostile to women). Most of these apps already have strict policies against violent or hateful language. So it didn’t come as much of a surprise when the gun-wielding white supremacist featured in Vice’s gripping mini-documentary about Charlottesville was banned for life from OKCupid last week. Earlier this year, Tinder banned a user for disparaging a woman with racist and misogynist epithets.

Bumble seemed to take things one step further when it announced a partnership with the Anti-Defamation League designed to “ban all forms of hate” on the dating app. Through a combination of human moderators and algorithms, Bumble says it will flag profiles (and presumably messages) containing known hate symbols and words associated with racism and hate.

The ADL’s database of hate symbols is publicly available, but the glossary of hateful words that Bumble says it will use to flag offensive content is not public. This is notable because while the ADL does extensive work combating bigotry and monitoring hate groups, the organization is also heavily involved in political advocacy in defense of Israeli policy.

For Malcolm, that presents a problem. “There are other groups that you could go to that don’t have that political agenda that would be a far better partner,” he says. “It seems reckless to hand that power over to an organization like that.” The ADL did not respond to a request for comment.

The ADL has been accused in the past of working to silence and delegitimize political opponents like the philosopher Noam Chomsky and the late historian Tony Judt, both harsh critics of Israel’s policies toward the Palestinians (and both Jewish). The group has also been accused of blurring the line between criticism of Israel and anti-Semitism, with critics of Israeli policy and advocates of Palestinian sovereignty (most recently Black Lives Matter activists and Pink Floyd’s Roger Waters) having to defend themselves against claims of engaging in hate speech.

When asked what terms are included in its ADL-inspired glossary of hate words, Bumble declined to specify, citing the iterative, ever-changing nature of this list. When asked specifically about whether any terms related to Palestinian rights or related activism were included on the list, Bumble declined to comment.

If the ADL’s position on the Middle East carries over into Bumble’s hate speech policy, could that result in free speech being quashed? The odds of this happening are unknown, since Bumble, like the rest of the companies we talked to, declined to go into specifics about how its policies are defined and enforced.

So What Should Be Done?

While the EFF has concerns about the swiftness and blunt nature of the past several days’ speech-policing, Malcolm admits there’s no easy answer.

Certainly, some kind of universal tech-industry guidelines of the sort Malcolm alluded to earlier could help, but even then there’s no guarantee that such principles would be adopted by everyone. In general, he says, it might be best for companies to “leave content decisions alone” until compelled to by a court. But such a hands-off approach likely wouldn’t sit well with many users of these same services, who are anxious about the volatile political climate and don’t want to feel threatened while browsing a playlist or dating pool.

As a society, we’ve shifted some of the responsibility for defining and policing unsavory speech from courts and media organizations that are beholden to the First Amendment to technology companies. And while most people seem comfortable erring on the side of stomping out hate speech and removing avowed racists from social platforms, organizations like the EFF task themselves with asking the bigger, sometimes more discomfiting questions like, what precedent are we setting?

“Maybe it’ll have an impact on Nazis using the internet, but what other impacts is it going to have?” Malcolm says. “Rarely do we find that censoring the internet is a good solution for any kind of problem.”



ABOUT THE AUTHOR

John Paul Titlow is a writer at Fast Company focused on music and technology, among other things.

