
As COVID-19 misinformation and politics collide, social networks face a choice

Misinformation targeting citizens who vote based on their pandemic response may force Facebook, Twitter, and YouTube to police political content.


By Ruth Reader

Researchers watching the constant swirl of COVID-19 misinformation and disinformation say that it’s about to become very political. That may pose a problem for platforms such as Facebook, Google, and Twitter.

Since the coronavirus outbreak, all three networks have worked to promote appropriate sources of health information and pull down content that could harm users. However, they have traditionally shied away from removing false information that is politically charged. As health misinformation becomes increasingly politicized, they may be forced to take a stance.

“Coronavirus and perceptions of its toll on our country will become the central 2020 issue,” says Emerson Brooking, a fellow at the Atlantic Council’s Digital Forensic Research Lab.

Since COVID-19 appeared, it has been followed by a persistent churn of misinformation, a phenomenon called an “infodemic” by World Health Organization Director General Tedros Adhanom Ghebreyesus. Even as researchers have raced to decode the illness, there has been a lack of concrete information about it. That void has led some people to publish videos and blogs highlighting their own fake COVID-19 cures and other false narratives, which often include anti-vaccine talking points. Researchers at Carnegie Mellon found that among 200 million coronavirus-related tweets, nearly half came from accounts that showed bot-like behavior. Within those tweets were 100 different false narratives.

Adding to the confusion about COVID-19 is President Donald Trump, who has consistently downplayed the impact of the coronavirus and praised the federal government’s response. He’s also made so many false or misleading statements about the coronavirus and potential treatments for the disease that a Knight Foundation and Gallup survey found that 54% of respondents identified President Trump as one of the top two sources of COVID-19 misinformation. The other was social media.

COVID-19 comes to the ballot box

Come Election Day, Americans will have an opportunity to take a stance on COVID-19 and how it’s been handled both by their states and the federal government. “Because the science is still out on what COVID-19 is and how damaging the disease is to the body—those things are going to be decided at the ballot box,” says Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard University.

In the U.S., state governments have largely carried the burden of responsibility for finding and allocating medical resources and determining mitigation policies. This has led to a patchwork response to COVID-19, with governors taking varying approaches to the outbreak. Alabama governor Kay Ivey, for example, didn’t call for residents to shelter in place until April 4 and allowed businesses to begin reopening on April 30 with some social distancing rules. New York, by contrast, established its stay-at-home order on March 22. It has begun reopening businesses in phases but has extended its stay-at-home order until June 13. Donovan says the success of these diverging tactics—New York’s more cautious rules versus Alabama’s aggressive push to reopen—will be judged along political party lines.

States are already positioning their efforts accordingly. Recently, Georgia’s Department of Health released a highly misleading map that made it look as if coronavirus cases were declining across the state—which they are not. The state is not alone. Virginia started incorporating antibody tests into its tally of daily COVID-19 tests performed, when the count is meant to reflect viral tests. Viral tests and antibody tests indicate very different aspects of a person’s health. Viral tests determine whether someone is currently infected with COVID-19; antibody tests show whether a person has developed an immune response to the virus.

Clark Mercer, chief of staff to Virginia governor Ralph Northam, has said that the results were combined in order to raise the state’s ranking for the number of tests performed per capita. As The Atlantic’s Alexis Madrigal and Robinson Meyer note, Mercer explained at a press conference that “If another state is including serological tests, and they’re ranked above Virginia, and we are not, and we’re getting criticized for that—hey, you can’t win either way. Now we are including them, and our ranking will be better, and we’re being criticized.”

The battle against COVID-19 misinformation

Even before the COVID-19 outbreak, Twitter, Facebook, and Google were aggressively trying to weed out health misinformation on their platforms. During the pandemic, these platforms have worked to increase the visibility of content from the World Health Organization and the Centers for Disease Control and Prevention. They have also removed content that makes assertions which could pose a threat to human health.

However, that method is not always effective at stopping the spread of disinformation. “Their systems are not designed to serve authoritative information, their systems are designed to serve popular information, and popular information isn’t often authoritative,” says Donovan. Pinning a red-bannered warning to misinformation or linking to valid information may help put trustworthy information in front of users, she says, but it doesn’t fundamentally change how the platforms surface flawed information.


While the platforms have made an effort to combat health misinformation and disinformation, Facebook, Google, and Twitter have historically shied away from making judgments about political content or content that doesn’t explicitly tell the viewer to do something that will cause them harm. In the past, all three have maintained they are merely technology platforms and not arbiters of truth—at times to a ridiculous extent.

Conspiracy theorists have exploited this position to great effect, creating misinformation that narrowly fits within a platform’s rules. For example, a video that suggests vaccines are ineffective or poisonous is considered “borderline content” by YouTube and is therefore demoted in its rankings—but not taken down. That doesn’t always stop its spread. Content makers can use other platforms to direct viewers to their videos. And in the event that a video is taken down—for example, in the case of the misinformation-riddled viral documentary Plandemic—multiple copies can be reuploaded to the same platform under different titles, making it difficult for platforms to find the new versions and take them down.

These factors make the politicization of COVID-19 particularly tricky for tech companies that have been careful about what kind of content they’re willing to arbitrate and how they do it. Even as these companies have waded into content moderation, they have yet to address the inherent design flaws in their platforms that allow disinformation and misinformation to reach vast audiences. Instead, the platforms have applied patches to the problem, for instance, pinning authoritative information to the bottom of a video or flagging information as false.

“There are [always] going to be people who go online every day and say vaccines don’t work,” Donovan says. This kind of content goes up with such frequency and in such massive quantities that it may be impossible to pull down every piece of fake information on the internet. To counter misinformation in a way that works with the design of platforms such as Facebook, YouTube, and Twitter, one would need to fight fire with fire.

“Do we need to have an enormous pro-vaccine movement waste tons of resources on that just because social media has decided to preference the voices and positions of people who will go online and advocate for dangerous or mistaken points of view?” she asks. “What would be the value in that? Nevertheless, that’s one of the only solutions on the table.”


Health gets politicized

All of this raises an existential question for the platforms: Can they continue applying piecemeal action to fight misinformation, or do they have to change some of the key ways their platforms are designed to function? “There’s an awareness that neutrality is no longer tenable or a defensible stance in our current environment,” argues David Jay, head of mobilization at the Center for Humane Technology.

Jay says that platforms are having fierce internal debates about how much they should police content. As the election cycle regains national attention and COVID-19 emerges as a wider theme in that conversation, the platforms might still be skittish about confronting misinformation that is more political in nature. However, he believes that platforms could use this moment as an opportunity to lay out their values and surface content based on those priorities, much as they already do for health information.

The platforms are already signaling how they might react to health misinformation that gets political. Last October, Twitter banned political ads. It has also recently taken down several tweets suggesting hydroxychloroquine is an effective drug against the coronavirus, including one from Rudy Giuliani, the former New York mayor and personal lawyer to President Trump. Hydroxychloroquine has long been used to treat malaria; however, it has serious side effects—especially for people who suffer from heart problems—and there is no evidence that it is effective against the coronavirus.

Then on Tuesday night, Twitter made an unprecedented move to fact-check President Trump. The platform applied a label to one of his tweets, which claimed that mail-in ballots would lead to voter fraud. The little blue note at the bottom of the tweet encouraged readers to “get the facts” on mail-in ballots. The warning label was applied as an extension of Twitter’s policy against misleading COVID-19 information, because the mail-in ballots Trump was referring to were being expanded in response to stay-at-home restrictions related to the virus. In an interview explaining the decision, Twitter VP of Communications Brandon Borrman told OneZero’s Will Oremus, “The company needed to do what’s right.”


Other platforms may be less proactive. After the 2016 election and a bruising congressional inquiry into Facebook’s ability to control its social network, the site assembled a team of fact-checkers to flag false information (though not to take it down). Political ads, however, are exempt from fact-checking. Facebook CEO Mark Zuckerberg sees policing political information as fundamentally different from policing health content. In a conversation with journalists in March, he explained his perspective on this issue: “There are broadly trusted authorities . . . [that] can arbitrate which claims are conspiracy theories or hoaxes and what’s trustworthy and what’s not, which makes this a very different dynamic than trying to be referee of political speech.” He reiterated that sentiment this week when he told Fox News that he didn’t think tech platforms should be “arbiters of truth,” an apparent dig at Twitter’s decision to flag President Trump’s tweet.

Despite Zuckerberg’s seeming commitment to neutrality, this is complicated territory for Facebook. The company does at times play fact-checker, though inconsistently. In March, it initially allowed President Trump to run campaign ads that looked like ads for the U.S. census. It later reversed that decision and pulled the ads for violating its policy on misrepresenting the once-a-decade survey.

Facebook has also struggled to police health misinformation. In May, it pulled down content referencing the Plandemic conspiracy video. But a month earlier, two California doctors uploaded videos to Facebook and YouTube in which they said the risk of contracting COVID-19 and the death toll from the disease were overstated. The videos suggested stay-at-home orders were unnecessary. YouTube took the videos offline. Facebook did not, a choice that is more in keeping with its professed allegiance to impartiality.

Zuckerberg’s neutrality may ultimately prevail. On Thursday, President Trump signed an executive order that encourages regulators to curtail Section 230 of the Communications Decency Act, the legal provision that allows social networks to take down content on their sites without having to worry about lawsuits. Some researchers also feel that Facebook is wise to avoid making judgments on petty political spats. Paul Barrett, a professor at NYU who studies political disinformation, writes in an op-ed for Politico that the platform could insert itself into political content in ways that have the potential to do great damage. “The platforms cannot and should not try to referee every trivial fib that politicians tell about each other,” he writes. “They should prioritize the consequential issues and statements of the day, much as Facebook’s fact-checkers already try to do.” Misinformation surrounding COVID-19, however political, could be but one such subject.

Shorenstein’s Donovan agrees with David Jay at the Center for Humane Technology that the platforms will ultimately have to make a choice. “We’ve entered into uncharted waters here,” she says. “For platform companies especially there will be a moment of reckoning where they’re going to have to throw the anchor down and figure out which position they’re going to stand in.”



ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.
