
Facebook has deleted 19,500 groups tied to ‘militarized social movements’

The company released its quarterly Community Standards Enforcement Report Thursday and touted its progress on taking down pages associated with violent groups between August 2020 and mid-January 2021.


By Mark Sullivan

During a media call Thursday morning to report its content moderation efforts, Facebook executives did not explicitly acknowledge their platform’s role in the planning of the January 6 Capitol attack, focusing instead on the company’s efforts since last summer to deny violent groups a presence on its services.

Facebook VP Monika Bickert said that between August 2020 and January 12, 2021, the company identified “militarized social movements,” including QAnon, and removed 19,500 groups created by those movements, along with about 3,400 pages and 7,500 Instagram accounts. Those numbers come from a blog post published on Facebook’s site January 12.

Bickert said Facebook removed the original “Stop the Steal” group back in November and then began removing groups that used that phrase and also encouraged violence.

When asked by Fast Company on Tuesday, Facebook declined to say how many groups and accounts that encouraged violence or planned Trump supporters’ convergence on Washington were removed between the election and the January 6 riot.

Bickert said Facebook’s content moderation teams monitored the events of January 6 in real time. “We were actively looking for content posted by people involved in the violence and we were making appropriate referrals to law enforcement,” she said.

In the wake of the January 6 attack on the Capitol, Facebook COO Sheryl Sandberg said that the event was “largely” not planned on Facebook but rather on other, less-moderated social networks. However, watchdog groups point out that Facebook Groups indeed were widely used for the planning of the “Stop the Steal” events in Washington that led to the riot.

Facebook’s Community Standards Enforcement Report covers all the content the company acted upon (removed, labeled, or limited the reach of) from October through December 2020 across 12 content types, ranging from nudity to bullying to hate.

The company, however, has no category in the report for incitements to violence. Facebook VP of Integrity Guy Rosen said the standards enforcement report is a “multi-year journey” and that category is on its way. “We want to expand to new content areas,” Rosen said, “and violence and incitement policy is certainly one that’s on our road map.”

Facebook reported late Tuesday that it had removed networks of accounts and groups engaged in “coordinated inauthentic behavior” in Palestine and Uganda, but the report contained no mention of Facebook accounts or groups used to motivate and organize the attack on the U.S. Capitol January 6. Facebook confirmed Wednesday that it found no evidence that the people and groups who promoted or planned the event used fakery or deception to do so.

Facebook has become increasingly reliant on artificial intelligence to detect, and in some cases delete, content that violates its community standards. CTO Mike Schroepfer reported that Facebook’s AI is now detecting 97% of posts that violate its policies before any users see them, up from 94% the previous quarter.

One of the reasons Facebook trumpets its content moderation successes is to demonstrate that it can manage the harmful content on its platform without being told how to do so by regulators.

Bickert said she worries that government regulation might force social networks such as Facebook to remove “everything that’s remotely close to the line” of harmful content, which would have a chilling effect on free speech. She added that there’s a risk that new laws might focus on content that’s less harmful but easier to regulate.

Bickert declined to say if her company supported a new high-profile Senate bill sponsored by Democrats Mark R. Warner, Mazie Hirono, and Amy Klobuchar known as the SAFE TECH Act, which would hold social media companies legally accountable for enabling cyberstalking, targeted harassment, and discrimination on their platforms.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
