This week something changed. Technology platforms, long known for trying to keep out of messy politics and ideology, began taking a stand against hate groups after a bloody white-nationalist rally in Charlottesville ended with a man driving his car into a crowd, killing a counter-protester.
In the past, many of these tech companies, mindful of free-speech laws and their bustling businesses, have tried to stake out a neutral position. Yet now, in the face of white supremacy, racist hate, and the online outrage sparked by Charlottesville and President Donald Trump’s soft response, the facade of neutrality began to come down—at least for now.
Here’s a running list of the technology platforms that have ended service to groups affiliated with the rallies in Charlottesville:
- Airbnb, acting prior to the rally, quietly barred people it believed were white supremacists from using its app to book rooms to attend the event.
- GoDaddy revoked the neo-Nazi website The Daily Stormer’s domain registration after the site published a story denigrating Heather Heyer, who was killed in the attack.
- Google also denied The Daily Stormer domain registration after the site tried to move there.
- Twitter suspended accounts linked to The Daily Stormer.
- LinkedIn also deleted pages associated with the white nationalist website.
- GoFundMe has shut down efforts to raise money for James Fields, the man accused of killing Heyer.
- WordPress stopped providing web and e-commerce service to Vanguard America, the group to which Fields pledged allegiance.
- PayPal announced that it would not do business with hate groups.
- Apple Pay stopped offering support for websites that sold white supremacist and hate-related merchandise.
- Uber permanently banned known white supremacists from using the app, and pledged to ban other members of hate groups from its platform.
- Chat app Discord shut down white supremacist channels.
- Facebook launched an unusual campaign specifically against the Daily Stormer article, and announced it would crack down on blatant white supremacist content and delete individual accounts spreading “threats of physical harm.”
- Squarespace said it was taking a number of hate group websites offline, including that of white nationalist Richard Spencer.
- Cloudflare stopped providing its services to The Daily Stormer (an “arbitrary” move, its CEO said, made because the people behind the website “are assholes”; he has admitted to feeling uneasy about the decision on free-speech grounds).
- MailChimp said it terminated some groups’ accounts after it changed its terms of service on Monday to exclude customers whose primary purpose was “inciting harm” or promoting “discriminatory, hateful, or harassing content.”
- Spotify said it would remove white supremacist artists from its app.
- SoundCloud has reaffirmed that, per its terms of service, the company will take down any hateful content.
- OKCupid said it banned the white supremacist Chris Cantwell, and asked users to “immediately” report other people on the dating network who are “involved with hate groups.”
- Bumble has announced that it’s working with the Anti-Defamation League’s Center on Technology and Society to identify symbols of hate on users’ profiles. The dating app also says it was targeted by neo-Nazis who were harassing the company over its pro-women’s empowerment stance.
These bans aren’t the end of the story, of course, and they mask a larger challenge for tech companies as they seek to more vigorously police potentially dangerous speech; a number of companies, including those above, still provide services to an untold number of extremist groups. Tech platforms will need to grapple with how to tweak and enforce their policies, and how to actually carry out the messy whack-a-mole business of human and automated moderation.
“We endeavor to be content-neutral, and so I worry about” future bans, Cloudflare CEO Matthew Prince tells Fast Company. “But hopefully now we can have a conversation without name-calling and think through what the right policy is. And I think we should be working with the entire tech industry, with policymakers, with legislators, with content creators, with content consumers, to think about, ‘OK, where do we want to put controls in place?’… I think it is the responsible thing for us to ask ourselves, does that mean that we change our policies?”