
Facebook is quietly pressuring its independent fact-checkers to change their rulings

As Facebook struggles with waves of misinformation, the company’s political and business concerns are influencing its fact-checking policies.


When Mark Zuckerberg went to Washington for a rare, three-day charm tour last September, his schedule featured behind-closed-doors lunches, dinner with President Trump and Peter Thiel, and one-on-ones with lawmakers who, among other things, wanted to talk about a video on Facebook.


The two-minute-40-second clip, titled “Abortion is never medically necessary,” had racked up a few thousand shares since appearing weeks earlier, and had already stirred up a wave of outrage. The anger wasn’t over the video’s misleading title or its content, but over the fact that Facebook had slapped the video with a “false” label.

A post-2016 innovation, the labels aren’t placed by Facebook itself but by harried subcontractors—fact-checkers, journalists, and scientists—who are fed a never-ending stream of potential misinformation. They can flag extreme misinformation for total removal—think dangerous coronavirus hoaxes—but mostly they place “false” or “partly false” labels on content, which gray out posts with a warning message and a link to an article explaining the fact-checkers’ reasoning. Fact-checks bring internal penalties too, like limits on content distribution or on a page’s ability to microtarget ads. In some cases, Facebook says, repeat offenders can be deleted entirely.

The video was notable because it had been shared by Lila Rose, the founder of antiabortion group Live Action, who has upwards of five million Facebook followers. Rose, leery of restrictions on her page and handy with claims of Big Tech censorship, quickly launched a petition protesting what she alleged was bias by Facebook’s fact-checking partner, a nonprofit called Health Feedback. Soon, four Republican senators, including Josh Hawley of Missouri and Ted Cruz of Texas, wrote a letter to Zuckerberg condemning what they called a “pattern of censorship.” They demanded a correction, a removal of all restrictions on Lila Rose and her group, and a “meaningful” audit of Facebook.

Soon, the fact-check labels were gone. A Facebook spokesperson told BuzzFeed News at the time that the labels would be removed pending an investigation “to determine whether the fact checkers who rated this content did so by following procedures designed to ensure impartiality.”

A week later, Facebook’s CEO was on Capitol Hill, sitting in front of the letter’s lead author.


“Zuckerberg admitted there ‘clearly was bias’ in the @LiveAction @LilaGraceRose censorship,” Senator Hawley tweeted after the meeting. “Said bias is ‘an issue we’ve struggled with for a long time.'” (Facebook fact-checked Hawley’s account: The spokesperson says the CEO only “said it appeared there might be bias” in the handling of the fact-checks.)

But there was no bias, according to the investigation conducted by the International Fact-Checking Network (IFCN), the nonpartisan group that certifies Facebook’s fact-checkers. Two of the doctors should have disclosed affiliations with pro-choice advocacy groups, the IFCN said, but after reviewing the fact-check and 10 previous fact-checks, it found Health Feedback’s assessment to be accurate, unbiased, and based on sound science.

Two of the fact-checkers, Daniel Grossman and Robyn Schickler, defended their evaluation in a Washington Post op-ed, calling it a matter of life-or-death medicine, not opinion. “Everyone is entitled to his or her views of abortion,” they wrote. “But promulgating misinformation about when abortion is medically necessary is dangerous.”

Perplexingly, Facebook did not reinstate the false label.

Emmanuel Vincent, the director of Science Feedback, the parent of Health Feedback, tells Fast Company that, despite the investigation, Facebook concluded that the videos should be classified as “opinion/advocacy,” and thus not subject to fact-checking labels at all.

Facebook did not comment on its role in the label’s removal, except to say that it had “talked with” the IFCN. Since Zuckerberg’s visit to D.C., the original video alone has racked up an estimated three million views.


The removed labels are among more than half a dozen instances in which Facebook managers have interfered with fact-checks in ways that appear at odds with the program’s spirit of independence and nonpartisanship. At times, its employees have used a broad exemption for opinion content and previously undisclosed powers to make editorial decisions in ways that favored certain publishers. Content that has been deemed false by its fact-checkers has not always been labeled false on Facebook. In some cases, Facebook has reevaluated fact-check labels or penalties after fact-checkers had acted, often in the wake of political, financial, and PR pressures.

In one previously unreported label change, for example, Facebook pressured fact-checkers to downgrade a label on a video shared by influential conservative publisher PragerU from “false” to “partly false.”

Was the change warranted? “Let me put it this way,” says Scott Johnson, an editor at Climate Feedback, another Facebook partner and sister to Health Feedback. “Our reviewers gave it a -2 rating on our +2 to -2 scale and our summary describes it as ‘incorrect and misleading to viewers,’ so we had selected the ‘false’ label accordingly.”


In recent days, Facebook has clarified that while opinion is generally not eligible for fact-check labels, fact-checkers can still label op-eds and similar content if they contain misinformation. And, the company says, it does not consider content on controversial topics like abortion and climate change to automatically constitute opinion. But Facebook still decides what counts as opinion, and it can compel changes to fact-check labels or remove misinformation strikes from a page accordingly.

Facebook’s landmark fact-checking program “has many seemingly valuable aspects,” says Brendan Nyhan, a professor of government at Dartmouth College who has studied fact-checking. But Facebook’s management of the program is raising pressing questions about the world’s largest organized battle against misinformation.

“Facebook is making what are essentially policy decisions for a huge amount of public debate with little oversight or accountability,” he says. “We should know if fact-checkers can be overruled, and the extent to which opinion content is exempt from fact-checking.”


What we know about how fact-checking works at Facebook

How do you correct a fact? That’s a tough one, especially in a time of fact-free politics. Americans are rapidly losing faith in an objective media and increasingly getting their political news from social media, which already struggles to stem tides of lies. Facebook has touted its program of independent fact-checkers as central to its fight against the kind of garbage that flooded its platforms in 2016, a bulwark against hoaxes about elections, viruses, and vaccines. Facebook now says it pays over 70 independent third-party groups, nine of them in the U.S., to review a never-ending stream of potential misinformation.

“We do not believe that a private company like Facebook should be the arbiters of truth,” Facebook says in a blog post about the program, emphasizing standards like independence, fairness, and transparency. Any content is eligible for review, except politicians’ posts and opinion content, an exemption designed in part to avoid appearing to be an “arbiter of truth”—and more pointedly, to deter persistent allegations of anti-right-wing censorship. (Those allegations remain as loud as ever, notwithstanding a lack of evidence that Facebook perpetrates systematic bias against conservatives, and despite Facebook’s own data showing that right-wing content is consistently the platform’s most engaging.)

But in trying to avoid the appearance of bias and cater to certain advertisers, Facebook has also appeared to soften its rules. In recent months, Facebook’s interventions in fact-checks, particularly on climate change content—reported variously by BuzzFeed News, the Wall Street Journal, E&E News, Heated, Popular Information, and NBC News—have perplexed fact-checkers and incensed lawmakers.

“Facebook clearly doesn’t get it: Climate denial is the original fake news,” Senator Sheldon Whitehouse, a Democrat from Rhode Island, tells Fast Company. “The facts are that climate change is happening and human beings are driving it. Either Facebook recognizes that and makes good on its commitment to [combat] false information on its platform, or it tacitly endorses climate denial drivel.”

In a series of conversations with Fast Company and in a letter to a group of Democratic senators, Facebook emphasized an important exception to its exemption for opinion content. “[W]hen someone posts content based on false facts—even if it’s an op-ed or editorial—it is still eligible for fact-checking,” Kevin Martin, Facebook’s vice president for U.S. public policy, told Democratic senators including Elizabeth Warren and Sheldon Whitehouse in an August 7 letter shared with Fast Company.


But in its response to the senators, Facebook did not address other significant exceptions to its rules, caveats that the company has not previously disclosed.

While fact-checkers decide which ratings to place on content, Facebook says it makes a superseding decision: It “sets the guidelines for the scope of the program,” says a Facebook spokesperson—that is, it determines what content is eligible for fact-checking. In some cases, Facebook may intervene if it thinks that a piece of content was mistakenly rated, by asking fact-checkers to adjust their ratings, a spokesperson acknowledged to Fast Company.

This would seem to break with a policy that says fact-checkers, not Facebook, are responsible for determining the rating on a piece of content, and that publishers must appeal their ratings to the fact-checkers directly. “If a publisher wishes to dispute a fact-check rating, they can do so directly with the fact-checker,” Martin wrote in his letter to the senators.

Facebook’s power to determine what counts as “opinion” is still based on an expansive definition, thanks in part to a single clause in its policy. The company says that opinion content includes not only articles like op-eds, but posts “shared from a website or Page with the main purpose of expressing the opinions or agendas of public figures, think tanks, NGOs, and businesses.” That broad exemption would appear to cover people like Lila Rose, who is now designated as a “Public Figure” on Facebook.

Apart from the fact-check labels, a Facebook spokesperson also acknowledged the company may remove a page’s “misinformation strikes,” the internal penalties that come with false ratings, if the company determines that a repeat offender “does not warrant additional consequences.” Earning strikes can restrict a page’s distribution and its ability to monetize.

Facebook did not disclose to Fast Company how often it intervened in fact-checks or misinformation strikes, saying such interventions occurred only in “rare” instances. The company also declined to share a list of specific URLs that had recently been given fact-check labels, or content that had been labeled but later lost a label for any reason. That has left the discovery of altered fact-check labels up to journalists and the fact-checkers themselves.


The mystery of the melting climate labels

On May 8, Prager University, which is not in fact a university but instead a prolific right-wing nonprofit content creator, republished a video that claimed that “there is no evidence that CO2 emissions are the dominant factor” in climate change. Soon, fact-checkers with Climate Feedback rated the video false, saying it was “incorrect and misleading to viewers,” and a label popped up on PragerU’s page. PragerU appealed to the fact-checkers, but they upheld their verdict.

That’s when Facebook called.

“The Facebook team reached out to suggest we change the rating from ‘false’ to ‘partly false’ based on the content of our review,” says Johnson, the Climate Feedback editor. Climate Feedback complied, changing the label to “partly false,” which brings lesser penalties.

In some cases, the post now has no apparent label at all. After an update that Facebook announced last week, the company is using what it calls a “lighter-weight warning label” for “partly false” content in the U.S.: an unobtrusive box below the video, under “related articles,” that says “fact check,” with a link. Meanwhile, older versions of the video appear to evade labels completely: A handful of other PragerU posts containing the video carry no labeling at all, a review by Fast Company found. Labeled and unlabeled versions of the video have racked up millions of views since it was first published in April 2016.

Facebook did not explain why PragerU’s label was downgraded. But internal Facebook messages first obtained by BuzzFeed News and NBC News this month shed more light on the company’s behind-the-scenes treatment of PragerU.

Shortly after Climate Feedback labeled the May 8 video, a Facebook employee argued for a review by pointing to PragerU’s ad spend—the organization had “500 active ads on our platform”—another factor that can influence Facebook’s decisions about which labels to change. The fact-check was PragerU’s second “false” rating that month—in another video, it had asserted that the polar bear population was increasing. According to Facebook’s policies for “repeat offenders,” that could mean limits to PragerU’s distribution and advertising. Ultimately, Facebook removed the misinformation strikes from PragerU’s internal record.

[Screenshots: the evolution of a single post’s fact-checking label]

The change, according to data collected by a Facebook engineer and reported last week by BuzzFeed News, was disclosed neither to the public nor to the fact-checkers. Facebook’s decision to also push Climate Feedback to downgrade its rating has not been previously reported.

A Facebook spokesperson did not explain why it downgraded the PragerU label, but acknowledged the power the company exercises over its penalty system. Fact-checkers are responsible for rating content, but Facebook is “responsible for how we manage our internal systems for repeat offenders.” Facebook issues more serious penalties, “unless we determine that one or more of those ratings does not warrant additional consequences,” spokesperson Liz Bourgeois tells Fast Company.

The label was one of over two dozen fact-check penalties “escalated” for review by Facebook managers in recent months, often in ways that supported conservative publishers including Breitbart News and Diamond and Silk. According to the Facebook engineer’s data, the reasons for the interventions included that the content should have been classified as opinion, or because of “PR risk,” a page’s “ad spend,” or “partner sensitivity” over perceived bias.

That sensitivity has become an animating cause for PragerU. Since its founding in 2009 by conservative talk show host Dennis Prager, with funding from fracking billionaires Dan and Farris Wilks, the nonprofit has become a right-wing juggernaut in the Trump-era culture wars, a kind of Fox News for millennials. (Critics have called it a factory for climate falsehoods, xenophobia, and conspiracy theories.) Last summer, Prager was one of a number of right-wing figures and provocateurs at President Trump’s Social Media Summit, where social media censorship was a main focus.

For PragerU, alleged bias by Facebook is not only a source of grievance and a badge of honor but a lucrative fundraising cause. In May, after another video was marked false, PragerU raked in $16,000 in a Facebook fundraiser protesting alleged censorship. Since 2018, PragerU has raised at least $400,000 in similar fundraisers, according to BuzzFeed News, a fraction of its estimated $25 million in income this year.


And, despite its ongoing protest, much of that cash is likely to go to Facebook ads. Its page consistently spends more on Facebook advertising than major political campaigns and national advocacy groups, and ranks among the 10 biggest political spenders on the platform, the Los Angeles Times reported last year. PragerU says its videos have been watched more than two billion times.


Craig Strazzeri, PragerU’s chief marketing officer, says its page was not the recipient of favoritism, but the opposite. “As a result of the fact-check labels, Facebook has significantly suppressed our content and prohibited us from reaching large portions of our own audience who have opted in to follow PragerU content,” he says.

In the past two months, Strazzeri says PragerU has had four videos labeled false, and “several additional videos” completely removed from Facebook. Recent reports of favoritism came as Facebook was also threatening to “unpublish” its page completely, he says. “The notion that Facebook has insiders actively working to help our page is absurd.”

So far, however, the misinformation labels have not appeared to impact the page’s ability to run ads. This week, PragerU is running five Facebook ads with a total estimated reach of over five million users, each asking for signatures on a petition protesting alleged Big Tech censorship of conservatives.

“Mark Zuckerberg has recently stated how Facebook doesn’t want to be the arbiter of truth and that Facebook errs on the side of free speech, but talk is cheap,” Strazzeri says of the fact-check system. “The entire program was an attempt for them to not have to flag content themselves, but clearly the fact-checkers they chose have weaponized this process to target conservatives.”


‘It didn’t make any sense’

Last August, similar complaints derailed another fact-check label on another post about climate change. Around the time that Lila Rose shared her video about abortion, a pro-carbon advocacy group called the CO2 Coalition posted an op-ed to its Facebook page: a Washington Examiner article titled “The Great Failure of the Climate Models.”

As the post began to trend on Facebook, it was flagged to scientists working with Climate Feedback. The article, they wrote in their fact-check, is “highly misleading, including a number of false factual assertions, cherry-picking datasets that support their point, failing to account for uncertainties in those datasets, and failing to assess the performance of climate models in an objective and rigorous manner.” On August 31, they rated the story “false,” threatening restrictions on the CO2 Coalition’s ability to advertise.

Within days, the label was gone.

“The Facebook team informed us that it should not be subject to a fact-check and we should remove the rating,” says Johnson of Climate Feedback.

“It didn’t make any sense,” says Andrew Dessler, one of seven fact-checkers on the post and a professor of atmospheric sciences at Texas A&M University. Facebook’s policy says that pages must file appeals only with fact-checkers, not Facebook. And the post would appear to be eligible for a label under the Facebook rule that says that opinion content and op-eds can still be labeled false if they contain falsehoods.

Dessler is still confused about the circumstances of Facebook’s actions around climate content, but he has an opinion. “I think their decision to exempt pieces like this from fact-checking is a business decision designed to minimize anger from people who don’t want action on climate,” he says, “who are very powerful.”


Caleb Rossiter, executive director of the CO2 Coalition and one of the article’s authors, thinks that Facebook removed the label for another reason: The fact-checkers were biased. He told Facebook that, “as they had on the abortion stuff,” Science Feedback had “been censoring opinion—which is, as with climate models, all opinion.”

After Rossiter appealed to Climate Feedback and Facebook, a “conservative” employee at Facebook intervened to change the label, E&E News reported in June. Rossiter denies that the employee was conservative. “She said she would do it for anybody,” he says.

Facebook also likely understood that leaving the label on the CO2 Coalition’s page could invite more loud complaints of bias, or what Facebook refers to internally as “partner sensitivity.”

The Washington, D.C.-based group has about 5,000 followers on Facebook, but boasts a number of influential friends in its hometown: One of its founders, William Happer, served on Trump’s National Security Council, where he pursued an adversarial review of climate science; member Mandy Gunasekara was recently sworn in as chief of staff at the Environmental Protection Agency. Largely funded by foundations that oppose energy regulations, the group spends much of its time providing lawmakers with talking points to challenge climate science.

“They were aware of the fact that we work with Senator Cruz’s office and advise him on climate science,” Rossiter says. “I think Facebook had the sense that we were not going to lie down either, like the anti-abortion groups.”

A systemic problem

Facebook’s puzzling interventions are not aberrations but part of a systemic problem, says Melissa Ryan, a disinformation researcher who studies social media platforms. Facebook’s fight against falsehoods is already an uphill battle, and it is further weakened by a set of policies that caters to some of the influential advertisers spreading those falsehoods. “In addition to fact checking—which I think is dubious to the people who need it most, as far as its effectiveness—you’ve got this system that seems designed to make no sense.”


But platforms like Facebook still treat misinformation as a PR problem, rather than a systemic problem with their platforms and their business model, says Ryan. “So what happens is they ignore the issue, they ignore users or advocates until it becomes a big enough PR problem, and then they try to put a band-aid on it.”


Along with fact-checkers and lawmakers, some Facebook employees are demanding more transparency around the fact-checking system. After reporting by BuzzFeed News revealed preferential treatment for conservative pages—and that Facebook had fired the engineer who had gathered the supporting evidence—an internal message board swelled with suggestions on how to “fix Facebook.”


“Can we also use this as an opportunity to be more transparent about fact-checking in general?” one employee asked, according to messages obtained by BuzzFeed. “Not just internally but also to our users? How escalations and appeals work, who can do them, who is doing them, aggregated statistics about posts labeled . . .”

Last week, Facebook reiterated how its opinion rule works and announced a slightly revised rating system, including the lighter-weight labels. It also sent Fast Company a new “high level statement.” “There is no playbook for a program like ours and we’re constantly working to improve it based on feedback from our partners and what we see on our platform,” it said.

To Dessler, the scientist and fact-checker, the muddy, Byzantine policies—and Facebook’s response to recent revelations—underscore a broader problem. “We need to have a society-wide discussion about how corporations should deal with disinformation, and then the government should require corporations to adopt that policy.”


“The most important thing about the story, and something that doesn’t seem to bother a lot of people, is that we have outsourced decisions like this to corporations,” he says. “This is a truly terrible situation to be in.”

Contact this reporter via Twitter (@pasternack) or by email at pasternack at protonmail.com


About the author

Alex is a contributing editor at Fast Company, the founding editor and editor at large of Motherboard at Vice, and a freelance writer and producer with a focus on the intersections of science, technology, media, politics, and culture.
