
Undercover video shows Facebook is loath to delete toxic content

Moderators of hateful and violent speech on Facebook were told to treat popular posts differently: “It’s all about making money at the end of the day.”

[Images: Rawpixel/Unsplash; Vlad Tchompalov/Unsplash]

By Mark Sullivan | 5 minute read

An undercover investigation has shed new light on the often hidden process by which Facebook removes hateful or violent content from its platform. In some cases, according to the report, it takes a lot for the company’s moderators to remove even the most toxic posts or pages–especially if they are popular.

The investigation, aired on Tuesday by Britain’s Channel 4, centers on a Dublin-based content moderation firm called CPL Resources, which Facebook has used as its main U.K. content moderation center since 2010. An investigative reporter got a job there, revealing how CPL’s content moderators decide whether to remove content that users report as hateful or harmful.

Perhaps the most telling are the revelations about how Facebook polices racist or hateful political speech on the platform. This, after all, is the stuff that was used at massive scale to influence both the 2016 presidential election and the U.K.’s Brexit vote.

Normally, if a given page posts five pieces of content that violate Facebook’s rules in a 90-day period, that page is removed, a policy described in documents recently seen by Motherboard. YouTube, by comparison, allows user pages only three strikes in 90 days before deletion.

However, if the Facebook page happens to be a big traffic generator, moderators use a different procedure. CPL is required to put these pages in a queue so that Facebook itself can decide whether or not to ban them. The investigation found that pages belonging to far-right groups with large numbers of followers were allowed to post higher-than-normal numbers of hateful posts, and were moderated in the same way as pages belonging to governments and news organizations.

One post contained a meme suggesting a girl whose “first crush is a little negro boy” should have her head held under water. Despite numerous complaints, the post was left on the site.

One CPL moderator told the undercover reporter that the far-right group Britain First’s pages were left up despite repeatedly featuring content that breached Facebook’s guidelines because “they have a lot of followers so they’re generating a lot of revenue for Facebook.” Facebook confirmed to the producers that it does have special procedures for popular and high-profile pages, including Britain First.

CPL trainers instructed moderators to ignore hate speech toward ethnic and religious immigrants, and to ignore racist content. “[I]f you start censoring too much then people lose interest in the platform . . . It’s all about making money at the end of the day,” one CPL moderator told the undercover reporter.

On Wednesday, Denis Naughten, Ireland’s Communications Minister, said he had requested a meeting with Facebook management over the “serious questions” raised by the exposé, and that company officials would meet with him on Thursday in New York, where he is attending a UN meeting.

“Clearly Facebook has failed to meet the standards the public rightly expects of it,” he said in a statement.

Facebook’s complex moderation system–peopled by thousands of employees and contractors behind closed doors in offices around the world–has become an increasing focus of European reporters amid growing scrutiny by regulators. A series in the Guardian last year exposed a trove of the company’s content policies, which some moderators criticized for their “inconsistency and peculiar nature.”


“The crack cocaine of their product”

One of Facebook’s earliest investors, Roger McNamee, told Channel 4 that Facebook’s business model relies on extreme content.

“From Facebook’s point of view this is, this is just essentially, you know, the crack cocaine of their product, right? It’s the really extreme, really dangerous form of content that attracts the most highly engaged people on the platform. Facebook understood that it was desirable to have people spend more time on site if you’re going to have an advertising-based business, you need them to see the ads so you want them to spend more time on the site. Facebook has learned that the people on the extremes are the really valuable ones because one person on either extreme can often provoke 50 or 100 other people and so they want as much extreme content as they can get.”

(McNamee was a mentor to CEO Mark Zuckerberg, and recruited Sheryl Sandberg to the company from Google to develop Facebook’s massive advertising business.)

This is what makes the Channel 4 exposé so remarkable. It suggests that Facebook not only hosted lots of racially and socially charged political content during events like Brexit and the 2016 presidential election, but that its leadership was aware of the shareability of that content, and of the ad impressions it generated.

A Facebook representative took issue with McNamee’s assertion.

“Shocking content does not make us more money, that’s just a misunderstanding of how the system works,” he told Channel 4. “People come to Facebook for a safe secure experience to share content with their family and friends.” The spokesperson offered no numbers to reinforce his claim.

The Silicon Valley giant responded to the investigation in a blog post on Tuesday, saying it was retraining its moderation trainers and fixing other “mistakes.” In an interview with Channel 4, Facebook vice president of global policy Richard Allen described efforts the company was taking and apologized for the “weaknesses” the broadcaster had identified in the platform’s moderation system.

Separately on Wednesday, Facebook addressed growing criticism about the role of its platform in inciting deadly mob violence in some countries, telling reporters that it would begin to remove misinformation from Facebook that leads to physical harm.

“We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline. We have a broader responsibility to not just reduce that type of content but remove it,” Tessa Lyons, a Facebook product manager, told the New York Times. The new policy does not apply to Instagram or WhatsApp, which has also been implicated in spreading dangerous rumors.

In an interview with Recode published on Wednesday, CEO Mark Zuckerberg offered a head-scratching explanation for why Facebook should permit certain content. “I just don’t think that it is the right thing to say we are going to take someone off the platform if they get things wrong, even multiple times,” he told Recode’s Kara Swisher.


In April an undercover video aired by Channel 4 helped expose the sometimes incendiary methods by which Cambridge Analytica tried to influence voters, including by using Facebook. The broadcaster’s new revelations help shed more light on the platform’s role in that equation, and reinforce the suspicion that when it comes to toxic but popular content, Facebook prefers to look the other way. Deleting social media content, be it fake news or hate speech or other terrible things, is a messy business. But from Facebook’s point of view, deleting content can look like bad business, too.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

