
This time, it could be worse: 2020 election misinformation will be homegrown, and we’re not ready for it

Misinformation researcher Renée DiResta says that our social networks, if not our government, have gotten smarter about preventing digital interference in our elections. But so have the hackers.

[Photo: courtesy of Stanford University]

We may be living in the golden age of misinformation. Our favorite social media platforms are awash in it. Our current president and his administration are masters at it. And we’re all nervous about its potential to disrupt this year’s presidential election in ways that are even more damaging than in 2016, when the democratic process was subverted by bad actors and trolls, both foreign and domestic. Yet the tech giants that run the biggest platforms for political misinformation have been slow to understand and react to the problem—and their willingness to do so at the expense of profit has been called into serious question.


Renée DiResta, who works at the Stanford Internet Observatory, has studied digital misinformation around the world since long before the 2016 election. She has some of the clearest insights on the tactics of the malign forces that remain committed to influencing the outcomes of our elections and, perhaps scarier yet, sowing doubt in the credibility of the process itself. As the election approaches, we have lots of questions about what to expect, and how well prepared we really are to protect against interference.

This interview has been edited for clarity and cohesion.

Fast Company: In a recent interview, when asked about Facebook’s Trending News feature, you said: “Fake news in 2016 was not about Russia. Fake news was actually about a lot of homegrown Americans writing a lot of propaganda and bullshit and getting it trending on Facebook.” Do you think social media companies have been spending too much of their time trying to protect against foreign misinformation?

Renée DiResta: So what I meant by that was that fake news, which is now a term that means “anything I don’t like on the internet,” did originally have a very specific meaning. It meant stories that were false. They were demonstrably false. And the two canonical examples are “Pope endorses Donald Trump” and “Megyn Kelly fired by Fox News,” which were both demonstrably false stories that trended on Facebook. At the time Facebook had its Trending News feature off to the right [of the feed]. And in response to an allegation that the editors of that section were biased against conservatives, and a resulting controversy, they removed human editorial control from the feature and turned it over entirely to the algorithms. Previously, the editors had been removing things that were false or appeared to be gamed, exercising editorial oversight. When the feature became purely algorithmic, the effect was that any significant group of people who clicked the share button could get any type of content trending.

Many of those headlines were written by domestic hyper-partisan Americans. Sometimes they were economically motivated, and sometimes, as we learned, the content also came from economically motivated foreign spammers like the Macedonian fake news sites. But trending fake news was not part of the Russia operation. That’s just how the social ecosystem is engineered. It’s just a systems problem. We didn’t actually know much about the Russian activity until a solid six months later.

FC: Now that we have a fairly good understanding of how Russia used Facebook to influence the 2016 election, how well prepared are we to prevent misinformation campaigns tied to the 2020 election, and what new tactics do you think we might see?


“The problem is that the adversaries are also getting better and better.”

Renée DiResta

RD: I think that we’ve actually gotten better and better at uncovering influence operations by state actors. The problem is the adversaries are also getting better and better. [They’re] not as sloppy as they used to be. It’s actually getting harder and harder to find and attribute these operations. One specific evolution that we at the Stanford Internet Observatory found and attributed is activity that we’ve seen Russia-affiliated actors conducting in eight different African countries. In this particular case, an oligarch had hired locals to run Facebook Pages and write state-aligned propaganda articles.

This kind of activity becomes really difficult to detect because there are no longer fake accounts that are producing the content. The actors are real people who are real members of the community. And the policies the platforms have come up with to detect, address, and take down disinformation rely on these notions of authenticity. The Internet Research Agency’s fake Texas secessionist content from 2015 and 2016 didn’t come down because it was objectionable secessionist content. It came down because it was Russians pretending to be Texas secessionists. So if real Texas secessionists had produced that content, it would have stayed up, under free expression and a right to a political opinion.

FC: So what might that new approach look like in the U.S. election in 2020?

RD: There’s a risk that bad actors realize that they can simply hire people, or fund these operations, and they work with entities that just happen to be ideologically aligned. The people might not even know where the money’s coming from. The people who accepted money from the Internet Research Agency in 2016 didn’t seem to know that they were accepting money from Russian trolls. They thought they were taking it from fellow activists. So a lot of what we’re looking at now is how can we detect this stuff based on the behavior of these accounts. And so, independent of the content of the account, we’re looking more at whether there are certain indications that clusters of accounts are coordinating to inauthentically spread content.

FC: Do you buy the argument that microtargeting of political ads only feeds you ideas that you agree with, keeps you in your filter bubble, and prevents conversation? As the dominant gatekeepers of news and information, do social media platforms have a responsibility to provide me with a diversity of opinions via ads?

RD: It’s an interesting question. Of course the parallels would be to TV and radio, where everybody in the vicinity with that channel on sees the same content. But then there are also others—talk radio stations that are very partisan, where you’re not going to hear the other side’s point of view because the other side is not going to waste its money advertising to an audience that’s never going to pay attention to it. Our elections have gotten to the point where they’re primarily turnout games, unfortunately, rather than persuasion games. There seems to be an increasingly small handful of people who are really persuadable or independent.


But the problem in the political ad debate is not just that people who are likely Trump supporters aren’t seeing Elizabeth Warren or Pete Buttigieg ads. It’s that the ads that are targeted at them are generally extremely aligned with their preexisting beliefs—and may also be used to spread misinformation to those most likely to believe it. That’s a concern for social platforms. So the challenging question there is whether it’s possible to increase the number of people who do see ads from both sides by simply doing something like limiting certain targeting criteria, which might include raising the minimum audience size for political ads, or by saying that instead of targeting according to particular criteria or behavioral criteria, you’re limited to [targeting by] zip code. And so you can still reach people in swing zip codes if that’s your prerogative as a candidate. But the other side would be doing that as well, and then those people will potentially see both of the ads. So there are ways to change targeting to raise the audience size and scope to try to ensure that a little bit more information makes its way into people’s field of view.

“As these policies have emerged, they’ve raised more questions than they’ve answered.”

Renée DiResta

FC: In general, do you think the major social media platforms are forming effective policies around political advertising?

RD: So, one of the things that’s thorny about this issue is the question of what is a political ad. Facebook’s policy is that they’re going to verify your identity to run political ads, but “political ads” includes a huge spectrum of issue ads. They also say they’re not going to fact-check content from politicians, though it’s unclear what the scope of “politicians” will be. Twitter, meanwhile, just said they’re not going to accept political ads, but didn’t give any real guidance on the issue ad side of things. And so I think that as these policies have emerged, they’ve raised more questions than they’ve answered. Facebook has to decide who is a politician. Twitter has to decide what is a political issue. And we wind up in this situation where I think there’s going to be quite a bit of confusion, and it’s ultimately going to be a real challenge for anybody doing political or advocacy work to figure out where the boundaries are as these new policies are put in place.

FC: I know you’ve commented on how Twitter can have this mob mentality that makes people afraid of saying their real opinions or even sharing certain things. Do you think the new tool that lets you regulate who can comment on your tweets will help?

RD: I’m optimistic that it will minimize certain kinds of abuse in people’s feeds. I also have my share of trolls, and if they want to peek around and screenshot what I’ve said, and go and mock it in their own feeds, there’s very little I can do about that. And so I think you are going to continue to see that kind of behavior play out and probably increase as a sort of “harassment once removed.” I think, though, that this change is going to be a net improvement over the current state of affairs.

I think there are a couple of things that it’s going to potentially change that are of interest. One in particular is tweets from political candidates. Are we going to see high-profile, controversial political figures simply turn off replies for their spiciest takes, so activists or respondents or their opponents can’t push back and respond in line? Then people will no longer be able to see the full scope of a debate. They’ll only see the snippet of the conversation that the broadcaster wants them to see. So it turns Twitter into a little bit more of a broadcast tool, as opposed to a conversational tool.


FC: There have already been attempts to hack into systems used by the campaigns of some of the Democratic primary contenders. What’s your opinion of their overall readiness, and do you think they’re going to be even bigger targets as we get closer to having a nominee?

RD: I think they are targets. There’s a lot of focus on social platforms and social misinformation and what the [Russian] Internet Research Agency did in 2016, and it’s great that there’s so much awareness of the social influence campaign. But that was just one of several different types of attacks. The GRU hack of the DNC, and the systematic release of that information, tweaked in certain ways and promoted to journalists, meant that when Russia wanted to control the media cycle, it could release another tranche of emails. What we see from these hack-and-leak operations is that they really drive a news cycle, particularly if the information is real. And mass media coverage has the potential for much larger reach than a social influence campaign.

People tend to see coverage of hacked emails or documents as journalists exposing the inner workings of power. These Russian hacks are not always caveated as what they are–hostile actions by a foreign power. In the case of 2016, how often did you read stories about the Clinton emails or the Podesta emails that talked about the fact that they were stolen by Russia? This was an adversary that didn’t like one of our candidates, driving our media narrative. That contextualization in these hack-and-leak situations is still very often absent. And so I think that it’s very, very likely that any number of adversaries will try to hack a 2020 campaign. And the question really becomes how the media treats the newsworthy content it’s presented with. I think that’s a very interesting challenge for media going into 2020.

FC: Yes. So the source of the information is as big a part of the story as the news itself, right?

RD: If a candidate’s computer or email or whatever is hacked, a significant percentage of people will believe that the leaks–particularly if there’s anything vaguely sensational in them–are a newsworthy story and the public has a right to know. So then that of course puts the candidate who is the victim of the hack on the defensive for potentially several news cycles, as they try to justify each individual email or document that may be taken out of context.

Another thing we’ve seen with Russia is a willingness to fabricate documents and incorporate them into their leaks. Not necessarily in the Podesta and DNC leaks, but in some of the sports-related hacks, such as those of the IOC (International Olympic Committee) and WADA (World Anti-Doping Agency), those organizations came out and said that some of the material the hackers released was not in line with the actual medical records [of athletes]. So effectively they can fabricate material and release it, which raises the question of whether journalistic enterprises are going to simply cover all of it. Not all of them will have the resources to validate the documents in the rush to break the story.
