Google is placing ads next to COVID-19 misinformation on conspiracy sites

Ads for organizations such as One Medical and UNICEF are showing up on sites that publish fake information about COVID-19 and vaccines, helping those sites monetize their content—while making money for Google.

[Screenshots: via Tech Transparency Project]

Google’s programmatic ad tools AdSense and DoubleClick are placing ads on websites that publish health misinformation, according to reports from two independent research groups.

By allowing these sites to monetize, Google is helping to spread health misinformation while profiting at the same time, argues Daniel E. Stevens, executive director of the Campaign for Accountability, a nonprofit that publishes critical research about tech giants through its Tech Transparency Project. “Despite its public commitments, Google is not going to turn off the firehose of advertising dollars that flow to snake oil salesmen promoting misinformation about the coronavirus,” Stevens said in a statement.

Independent researchers at both the Tech Transparency Project and the Global Disinformation Index have published reports detailing when and where Google placed ads next to health misinformation on third-party websites. In its report, the Tech Transparency Project identified 97 websites that habitually publish false information and use Google ads to generate income. The Global Disinformation Index, which publishes monthly reports on ads placed on conspiracy theory sites, found in March that 1,400 sites spreading COVID-19 misinformation in Europe collectively earned $76 million from ad tools, the majority of it coming from Google. Both organizations say that Google is profiting from health conspiracies while publicly committing to fight COVID-19 misinformation.

Google representative Christa Muldoon says, “We have strict publisher policies that govern the content ads can run on. We specifically prohibit publishers from misrepresenting themselves or their products and have also taken an aggressive approach to COVID-19 content that causes direct user harm or spreads medical misinformation. When a page or site violates our policies, we take immediate action and remove its ability to monetize.” In this case, the company says, the websites or articles where it placed ads did not violate its policy. 

[Screenshot: via Tech Transparency Project]
In May and June, advertisements for the primary care provider One Medical appeared on a conspiracy theory website called Waking Times next to the headline “The Coronavirus Vaccine As Source of Dangerous Invasion.” The article doesn’t directly claim that vaccines are harmful; instead, it weaves inaccurate sources together with trustworthy ones, painting vaccines in a negative light and sowing doubt about a future COVID-19 vaccine.

One Medical has since blacklisted the site. “One Medical works hard to be a continued source of reliable, clinically vetted information on important healthcare topics, including COVID-19, and we take the fight against misinformation seriously,” the company tells Fast Company. In addition to One Medical, Google placed advertisements for AAA, AARP, Coronavirus.gov, Geico, Lending Tree, Subaru, UNICEF, and the United States Forest Service next to health misinformation.

[Screenshot: via Tech Transparency Project]
Historically, Google has been averse to policing misinformation on its own platforms and within its ad network. Since the outbreak of the COVID-19 pandemic, however, the company has taken a more aggressive stance against content that could harm viewers. In April, Google committed $6.5 million to fact-checkers and organizations actively combating COVID-19 misinformation. It has also taken steps across its platforms to surface factual information at the top of pages, suppress dubious claims, and pull down information that actively harms human health. In applying these new standards, though, Google has drawn fine distinctions about which content violates its policies, and it’s not always clear what it considers information that poses a risk to human health.

Some argue that anti-vaccination content meets that criterion. A few studies from the late aughts show that even though anti-vaccine videos may not be plentiful, they are often popular and can affect decision-making. In a 2018 study on how anti-vaccination propaganda can affect one’s decision to vaccinate, researchers write, “online access to false anti-vaccination information just cannot be understated in examining the rise and spread of the anti-vaccination movement.” In its conclusion, the paper notes that the rise of the anti-vaccination movement poses a “dire threat” to public health and herd immunity.

Last year, the World Health Organization named vaccine hesitancy, or “the reluctance or refusal to vaccinate despite the availability of vaccines,” one of its top 10 threats to global health, citing spikes in measles cases in the U.S. and Europe. Anti-vaccination content could also spread ambivalence about future vaccines: a recent Pew Research Center survey reports that though a majority of people would get a COVID-19 vaccine if it were available today, a surprising 27% of Americans would not.

[Screenshot: via Tech Transparency Project]
Google seemingly understands this threat. The company doesn’t allow anti-vaccine content on YouTube to be monetized with ads, and it suppresses anti-vaccination content both in its search results and on YouTube. But when it comes to placing ads off its own platforms, it is less judicious.

Google may ban certain content from YouTube, for example, but allow websites carrying the same content to monetize it through its ad network. In May, Google took down conspiracy theorist David Icke’s YouTube channel for making unproven claims about COVID-19, yet the company continues to place ads on his website. The Tech Transparency Project found a Google advertisement for the cybersecurity company Palo Alto Networks on Icke’s website next to a video pushing false claims about the virus. The organization also found Google ads next to video interviews with Icke on Activist Post, another conspiracy theory website.

For brands that don’t want to be associated with health misinformation or conspiracy theories, Google’s choice to keep such sites in its network creates a recurring burden: keeping up with fake news sites is hard work. A representative for One Medical says the process for monitoring where its ads appear is largely manual. The company reviews sites where its ads have been placed, then blacklists sites that publish health misinformation so that its ads don’t continue to appear there. One Medical ads are targeted to individuals rather than to specific sites, the company says, which means some ads may appear on sites visited by the people One Medical is trying to reach.

“We regularly monitor for emerging sites and sources of misinformation and blacklist these websites from our advertising to prevent the spread and support of misinformation. This is part of our ongoing effort to flag and take quick action to minimize instances where our brand is displayed alongside inaccurate, malicious, offensive, or illegal content,” a spokesperson for One Medical says. Both AARP and UNICEF say they have revised their ad-buying policies as a result of these reports.

About the author

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.
