On Monday night, Breitbart News posted a video of a press conference from a group of physicians calling themselves America's Frontline Doctors, in which several doctors repeated inaccurate claims about COVID-19, its treatments, and its effects. The video racked up more than 20 million views on Facebook alone before being taken down Tuesday. The rapid spread of the video and its false claims raises a pressing question: How much does this kind of misinformation affect people's decisions to stay home, wear a mask, and ultimately, to get vaccinated once a COVID-19 vaccine is approved?
In the video, a line of 10 doctors in white coats stands behind a microphone at an outdoor event hosted by the Tea Party Patriots, an organization devoted to advancing Tea Party conservative agendas and pushing America to reopen its economy and schools. They are America's Frontline Doctors, a group that includes physicians with a history of making medically dubious claims. Among them is Texas pediatrician Dr. Stella Immanuel, a minister whose fervent anti-LGBTQIA stances and Christian ideologies often bleed into her views on medicine, according to watchdog Media Matters. It is her impassioned speech championing hydroxychloroquine as a cure for COVID-19 that has drawn the most attention. Repeated studies have shown that hydroxychloroquine neither reduces the length of illness in COVID-19 patients nor prevents death.
The video is in some ways unremarkable among a lineup of other videos selling similar messaging. What is singular about it is the sheer number of views it racked up in a short amount of time, thanks in part to its spread among conservative, anti-vaccination, and government conspiracy groups on Facebook (Facebook has not responded to a request for comment). President Trump shared the video before Twitter deleted it, as did Donald Trump Jr., whose Twitter account was frozen temporarily as a result. The video has quickly blown past even the viral video Plandemic, which was seen more than 8 million times over the course of a week across Facebook, YouTube, Instagram, and Twitter, according to The New York Times.
But while we know videos like Plandemic or the America’s Frontline Doctors press conference are reaching huge numbers of people, the bigger question remains: Is this content impacting the people it touches—and how?
Quantifying the impact of misinformation, disinformation, and malinformation—a term referring to accurate information that is misrepresented—is historically difficult to do. But recently, several researchers have attempted to quantify how a person's media diet affects their response to COVID-19. Three studies, identified by Washington Post reporter Christopher Ingraham, attempted to discern the beliefs and behaviors of people who consume conservative news. One study showed that among 1,008 respondents, those who turned to Fox News, Rush Limbaugh, Breitbart News, One America News, or the Drudge Report were more likely to believe several false narratives, including that the virus was developed in a lab, that some at the CDC have "malign motives," and that vitamin C can prevent COVID-19 infection. Another study showed that Fox News viewers were less likely to follow stay-at-home orders. A third compared the beliefs of viewers of Hannity and Tucker Carlson Tonight and looked for correlations between those views and increases in COVID-19 cases. In the early days of the pandemic, Fox host Tucker Carlson took COVID-19 seriously, while host Sean Hannity doubted the lethality of the virus. The study indicates that areas with high Hannity viewership were associated with higher numbers of COVID-19 cases and deaths during the early days of the outbreak.
These studies, some of which rely on self-reported surveys, are not perfect. “Every measuring technique has its limitations,” says Kate Starbird, associate professor of human-centered design and engineering at the University of Washington. “You have experimental methods that try and control for things, but don’t have external validity. Surveys are talking to people in the real world, but we don’t know how their self-reporting matches up with their reality. There’s no perfect technique.”
However, Starbird says that when you layer different techniques together, you can start to get a picture of whether or not a person’s media consumption habits are impacting how they navigate the world.
In the case of the study on viewers of Hannity and Tucker Carlson Tonight, researchers conducted surveys to garner a broad understanding of the views for each audience and then looked at geographic areas and their COVID-19 case data to make a connection between beliefs and health outcomes. But, as the researchers acknowledge, there are other factors that may have influenced both the number of recorded COVID-19 cases in the region and also the views and behaviors of its citizens. These studies did not, for example, look at whether these residents were also taking in alternate sources of information like Breitbart News.
Juxtaposing a community's views with its behaviors is the best way to understand impact, but it's hard to study. The Hannity and Tucker Carlson Tonight study does this indirectly, using COVID-19 case rates to suggest that Hannity viewers may have been more lax about social distancing and mask wearing—thus leading to higher case numbers in their region. A better example of connecting media consumption patterns to health choices is a case study in Samoa. Last year, the Pacific island was besieged by a measles outbreak. In December, there were 4,000 cases among a population of 200,000; about 70 people died, according to Reuters. That month, Michael Gerson, a columnist for The Washington Post, argued that misinformation about vaccines was the culprit for low vaccination rates on the island. "In Samoa, where Facebook is a main source of information, the vaccine coverage of children fell from 58% in 2017 to 31% in 2018. Local authorities have no doubt that social media played a role," he wrote. But even this case study has its limits. Samoa also had a vaccine incident a year prior, in which two children died after being given improperly prepared vaccines. That event may have made the population more susceptible to anti-vaccine propaganda. Alternatively, the deaths of two children may simply have deterred parents from bringing their own children in to be vaccinated. It is impossible to untangle the two.
It’s increasingly clear that misinformation, disinformation, and malinformation are having an impact, but we just don’t know how much. Whether people volunteer to get a COVID-19 vaccine may be the ultimate measure for understanding how well the COVID-19-related disinformation campaigns are working. Until then, the best way to understand how disinformation about the coronavirus is impacting public health may lie in the hands of the tech giants, according to Kolina Koltai, a postdoctoral fellow at the University of Washington who studies anti-vaccination misinformation.
Koltai focuses much of her work on following posts in public and private anti-vaccine groups on Facebook. While she's able to get a fair understanding of attitudes inside those groups and of the way pieces of misinformation move from Facebook to Twitter to TikTok, she thinks social media platforms should give researchers more access to anonymized data about the kinds of media people engage with and post. If Facebook, Google, and Twitter opened up their data, she argues, researchers like her could more easily connect what people read and post on social media to how they behave in the real world.
Koltai also thinks that platforms should consult more directly with researchers on how to mitigate misinformation. Facebook has done this in the past. One such example is through Harvard’s Social Science One grants, which gave researchers access to anonymized data in an effort to better understand the effects of social media on democracy in the wake of the 2016 election.
But Starbird says that data set is actually quite difficult to work with, because the company adds a layer of noise to it to ensure anonymity. Facebook could, of course, analyze its own data without these protections in place, as it did for its 2014 mood manipulation study, but that introduces ethical questions about user consent. "Surely Facebook could measure a lot of things—do we want them to be experimenting on us to measure those things?" asks Starbird.
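The "layer of noise" Starbird describes is the core idea behind differential privacy, the technique used to anonymize the Social Science One data set. A minimal sketch of the idea in Python, using the Laplace mechanism for a counting query (the function name, epsilon value, and example count here are illustrative, not Facebook's actual pipeline):

```python
import random


def privatize_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise added (sensitivity-1 query).

    Smaller epsilon means more noise and a stronger privacy guarantee,
    at the cost of accuracy for anyone analyzing the released data.
    """
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise


# Hypothetical example: a noisy count of users who shared a given URL.
noisy = privatize_count(12345, epsilon=0.5)
```

Individual noisy values can be far from the truth, which is exactly the property that frustrates researchers: aggregate trends survive, but fine-grained analysis becomes much harder.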
Though it is difficult to get hard data on how impactful these campaigns ultimately are, Starbird thinks it's important for researchers to try to quantify them. Without empirical evidence, she says, platforms like Facebook are free to be lax in how they respond to misinformation.
“We don’t have quantitative—here’s how much, and here’s how many people it’s changing, and here’s how far it’s changing their beliefs—we don’t have that,” says Starbird. “And because we don’t, people who want to use these techniques and platforms that are allowing these things to manifest on their platform are able to have this plausible deniability of, ‘Oh well, we don’t know if that has an impact.’ And so I think it’s really important we start to try to answer those questions.”