The tech giant fighting anti-vaxxers isn’t Twitter or Facebook. It’s Pinterest

While the social media world scratches its beard about content moderation, Pinterest proves it doesn’t have to be all that hard.

[Source Image: JonFreer/Blendswap (pin mesh)]

We really do live in the era of fake news, but it’s not born from traditional news organizations. Social media networks are to blame, as they allow any person or bot instant, unprecedented access to millions of people. As a result, Russian propaganda campaigns may have influenced the 2016 election, and more parents are opting not to vaccinate their children than ever in modern history, leading to outbreaks of deadly diseases like the measles.


In this climate of rampant misinformation, companies like Twitter, Facebook, and Google have refused to ban fake news and misinformation. But one unlikely organization has taken a refreshingly simple approach to handling at least some of the most dangerous misinformation in the world: Pinterest, the social media company best known for letting users pin recipes and home decor ideas.

But Pinterest has also dealt with users who promote anti-vaccination agendas and phony medical cures. The company now blocks anti-vaccination searches across the board, and it even deletes anti-vaccination content it detects on the site. Banned material includes propaganda like a pin that declares, “Proud to be anti-vax, that means I’m not just smarter than you but also more likely to stay that way,” and a doctor’s testimonial: “after I came out of med school I vaccinated thousands of children and it’s one of the greatest regrets of my life.” The scale of the problem at Pinterest is impossible to know, but the company recently banned dozens of websites promoting anti-vaccination agendas and phony medical cures, then scrubbed their old pins from the platform.

Examples of content now banned by Pinterest. [Screenshots: Pinterest]

“People come to our platform to find inspiration,” says Pinterest’s Ifeoma Ozoma, “and there’s nothing inspiring about harmful content.”

Ozoma leads partnerships on the company’s public policy and social impact team. In 2013, Pinterest created its first formal policy against posts that promoted “self-harm,” and in 2017, it added anti-vaccination content and other damaging health misinformation to the list. Then, in 2018, Pinterest began actively removing search results for anti-vaccination content and certain false cures, like promises that grapeseeds cure cancer. Pinterest now focuses on eliminating this harmful content from both its search tool and the algorithmically generated home feed you see when you log in.

The technical challenge

To spot harmful content trends on the platform, Pinterest constantly collects user feedback–via surveys or less formally–and does its own internal sweeps with content moderators. It has also built automated tools to block URLs that frequently share banned content, but those tools have their limits. For instance, they may fail to recognize bad content that’s shared to Pinterest from Facebook, since Facebook appears to be a trustworthy URL. Pinterest also has technology to thumbprint image-based memes that have been flagged and banned, so they can’t make their way back onto Pinterest through re-uploads. In rare cases, Pinterest will even suspend accounts–though it always notifies users of the reason for the suspension and offers the option to appeal.
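The image “thumbprinting” described above is the general idea behind perceptual hashing: reduce an image to a short fingerprint that stays nearly identical even after recompression or resizing, then compare fingerprints by Hamming distance. Pinterest hasn’t published its actual system, so the sketch below is purely illustrative–a minimal average-hash over an already-downscaled grayscale image, with made-up pixel data.

```python
# Illustrative sketch of perceptual "thumbprinting" via an average hash.
# This is NOT Pinterest's actual system, which is not public.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (an already-downscaled image).
    Returns a bit string with 1 wherever a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# A banned meme's fingerprint vs. a slightly recompressed copy:
banned = [[10, 200], [220, 30]]
reupload = [[12, 198], [219, 33]]  # minor pixel noise from re-encoding
h1, h2 = average_hash(banned), average_hash(reupload)

# A small Hamming distance means it's likely the same image, so block it.
is_match = hamming(h1, h2) <= 1
print(is_match)  # True: the re-upload is flagged as a known banned image
```

Because the hash tracks only the coarse light/dark pattern of the image, small edits that anti-vaxxers might make to evade an exact-file-match filter (re-saving, resizing, slight cropping) usually leave the fingerprint unchanged.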


The ethical challenge

But just how does Pinterest decide which of its 175 billion pins should be blocked or deleted, and which pins share information that is acceptable? And how did Pinterest build a value system for policing content from an ethical perspective?

“I don’t want to be flippant about it. [Misinformation] is a hard issue both when it comes to tooling and implementation, but the science is settled on something like vaccinations,” says Ozoma. “So for that, that isn’t hard for us. There’s WHO. The CDC. The AAP.” Pinterest consults the reports of leading, global health authorities when deciding its policies and the related language it uses in the enforcement of those policies. All of them recognize that vaccination is essential to public health.

But anti-vaccination propaganda is only part of a larger public health problem exacerbated by the spread of misinformation. Ozoma insists that Pinterest will readily block any health claims that can cause demonstrable harm.

[Screenshot: Pinterest]

In this sense, Pinterest’s misinformation policy is refreshingly straightforward: “We remove harmful advice, content that targets individuals or protected groups and content created as part of disinformation campaigns. Right now we’re focused on misinformation that can cause real-world harm.”

If there is a widespread backlash to Pinterest’s policies, you can’t find it easily online. Perhaps that’s because, by design, an anti-vaxxer who has their content blocked by Pinterest cannot rally the troops with more anti-vaxxer content on Pinterest.


The limitations to Pinterest’s approach

Just because Pinterest blocks some harmful content doesn’t mean Pinterest actively blocks all quack science on its platform. “There’s a wide spectrum, everything from essential oils will make you happy to essential oils will cure your cancer so don’t do your chemo,” says Ozoma. Pinterest blocks the anti-chemo content but will allow the more Goop-like pop health claims so long as no one gets hurt, the company says.

This is an imperfect solution, of course. Any pseudoscience claim about health could do real harm. Ozoma seems to recognize this fact when she tells me, “There’s other stuff further down [the priority list] that could pull our attention later.” For now, Pinterest is performing content triage, addressing the most flagrant and critical cases of harmful health information first. 

What other social media platforms do

Pinterest’s policies–of embracing science over propaganda–shouldn’t be radical. But compared to the misinformation policies of Pinterest’s peers, they are. I reached out to Facebook, Twitter, Snapchat, and YouTube to hear their latest, official policies, and learn how they impact what’s being shared online.

Facebook allows posts to be flagged for misinformation and has 39 fact-checking partners around the world to vet their links. Stories deemed false are ranked lower by the algorithm than factual ones. But Facebook only deletes content that’s in violation of its Community Standards–which aggressively address bullying and fraud but slap fake news on the wrist by “empowering people to decide what to read, trust, and share . . . ” Anti-vaccination and similar content is not in violation of Facebook’s Community Standards. (Facebook hasn’t addressed health-related misinformation on Instagram yet but says it is looking into the topic.) “We’ve taken steps to reduce the distribution of health-related misinformation on Facebook, but we know we have more to do. We’re currently working on additional changes that we’ll be announcing soon,” writes a Facebook spokesperson.

When I search “anti-vax” on YouTube, it brings up a nightmare of a first result. It’s titled, “Pro-Vaccine vs. Anti-Vaccine: Should Your Kids Get Vaccinated?” with a description that says, “We brought people together who both support and oppose vaccination to see if they can find middle ground.” It’s not quite propaganda, but it’s certainly not journalism. The very premise of the video implies that science just needs to find a middle ground, as if listening to the oil industry might fix global warming.


“Misinformation is a difficult challenge and any misinformation on medical topics is especially concerning,” writes a YouTube spokesperson. “We’ve taken a number of steps to address this including surfacing more authoritative content across our site for people searching for vaccination-related topics, beginning to reduce recommendations of certain anti-vaccination videos and showing information panels with more sources where they can fact-check information for themselves. Like many algorithmic changes, these efforts will be gradual and will get more and more accurate over time.” YouTube claims to be looking into providing more context–like linking trusted resources–alongside videos on vaccines. It has also announced that it will reduce the recommendations of harmful content that could misinform users, such as bogus miracle cures. Only some anti-vaccination content would be affected by this tweak to the suggestion algorithm.

Snapchat doesn’t audit private messaging on its service. But if you post a public snap during a live event, and the company chooses to put your snap into a Stories group video, a dedicated team of journalists will fact-check anything you say in that video. This process would flag anti-vaccination content. Snapchat’s third-party publishing partners on the platform must comply with guidelines that prohibit fake news. And its Creators (aka influencers) on the platform are prohibited from posting “malicious deception and deliberately spreading false information that causes harm, such as denying the existence of tragic events,” which would also encompass anti-vaccination content.

Twitter declined to comment for this article. The company has no policy for managing anti-vaccination or other misinformation of any sort. Search the hashtag #antivaxx on Twitter, and you’ll get reams of propaganda.

At Pinterest, misinformation is a companywide problem

As for Pinterest, the company admits that it’s still in the early days of policing misinformation. For now, it’s treating this content as a companywide problem and has tasked multiple product groups across the organization–not just public policy–with dealing with it. The biggest challenges for Pinterest now are to make sure it’s really stopping misinformation, and to leverage automation to prevent problems before they start. Pinterest is currently developing new AI technologies, no doubt building on its photo recognition prowess, to block dangerous content before it can go viral. Pinterest would also prefer that its blocked searches offered the user something informative: right now, a search for “anti-vaccination” just returns the message, “Sorry, we couldn’t find any Pins for this search,” and the company is working with an undisclosed third-party group to develop better messaging. Additionally, Pinterest partners with media analysis firm Storyful to detect and manage harmful misinformation, to better understand the problem, and to spot misinformation trends with ever-evolving terminology that Pinterest may have missed.

Looking forward to 2020, one can’t help but wonder if Pinterest might take such an aggressive approach to policing misinformation in general. Technically speaking, Pinterest’s current misinformation standards do give it the authority to block content like pins challenging, say, Obama’s status as a U.S. citizen. However, this isn’t a priority for the company. “Right now we’re focused on misinformation that can cause real-world harm,” clarifies Jamie Favazza from Pinterest’s communications team.


Still, Pinterest seems to be the first social media company willing to take responsibility for the content on its pages–beyond industry mainstays like banning violent or pornographic images, perhaps–recognizing that the truth of content is as integral to a user’s experience as the interface that serves it.

About the author

Mark Wilson is a senior writer at Fast Company who has written about design, technology, and culture for almost 15 years. His work has appeared at Gizmodo, Kotaku, PopMech, PopSci, Esquire, American Photo, and Lucky Peach.