Facebook’s decision to make an exception to its hate speech policy by not removing Donald Trump’s Muslim ban video last week highlights social media companies’ differing and often inconsistent responses to offensive content.
Though it apparently violated Facebook’s own internal guidelines, Trump’s video was not removed because Trump is a political candidate, and his statements were “an important part of the conversation around who the next U.S. president will be,” a Facebook spokesperson told Fast Company. Yet Facebook users who shared Trump’s video saw their posts removed for violating the company’s community standards policy. At Twitter, it would be difficult to argue that Trump’s tweet about his proposal violated the company’s rules, which tend to be much more permissive.
Facebook, Twitter, and other private companies aren’t subject to laws that protect people from government interference in free speech, but they’ve borrowed some of the same theoretical frameworks. Facebook’s policy on hate speech resembles the U.K.’s free speech law, which bars speech “intended or likely to stir up racial hatred” (this is the basis for a petition that seeks to ban Trump from Britain). Facebook explicitly bans hate speech and removes content that violates this policy when users report it.
By contrast, Twitter has a policy more like U.S. free speech law, which doesn’t regulate hate speech but makes exceptions for threats and incitement. The platform has not explicitly banned hate speech, though it does ban threats or content that promotes violence on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability.
The policies that social media platforms use to moderate content have an enormous impact. Facebook has more than 20 times as many active users as the New York Times has monthly readers, and a recent Pew Research Center study found that millennials get their political news from social media more than any other source. Unlike the editorial choices of a newspaper or a TV network, the decisions that tech companies make about permissible content directly shape our conversations with each other. “Facebook has more power in determining who can speak and who can be heard around the globe,” journalism pundit Jay Rosen once remarked, “than any Supreme Court justice, any king, or any president.”
Social media content moderation policies, meanwhile, are driven largely by those companies’ own interests. Facebook’s attempt to create a friendly environment could be viewed as an effort to attract and retain as many users as possible. Twitter’s declaration that it is “the free speech wing of the free speech party” could be seen as an attempt to keep controversy, and the many tweets and retweets it generates, on its platform. It’s also possible that these social media giants could be tweaking their content moderation to advance a political agenda, a concern that led the Electronic Frontier Foundation to launch a project last month that tracks content takedowns by social media sites. “We need to hold Internet companies accountable for the ways in which they exercise power over people’s digital lives,” Jillian York, one of the site’s cofounders, said in a statement.
Like free speech laws, social media content policies are evolving. Facebook clarified its hate speech rules in March, and Twitter expanded its rules to ban promotion of violence in addition to direct threats in April.
When Facebook decided not to take down Trump’s post, it implicitly clarified part of its content policy, and its decision will have consequences across the social media landscape. As Internet policy expert Marvin Ammori put it last year in the Harvard Law Review: “Some of the most important First Amendment lawyering today is happening at top technology companies.”