We are a country divided, a place where you are with either the House majority or the Senate majority, where you are either red or blue. We see this everywhere in our visual culture, from election maps, which break down presidential support by state color, to Twitter, where any popular, politically charged comment will result in catty memes from each side.
But newly published research from McGill and Lund University—conducted during the 2016 presidential election cycle—found that with the proper visual trickery, our partisan opinions can quickly skew more centrist. Specifically, Trump haters warmed slightly to his point of view. And those who seemingly despised Clinton admitted maybe she wasn’t so bad after all.
Over the course of two studies, nearly 800 people were asked to rate whether they leaned toward Clinton or Trump on a variety of leadership traits, such as diplomacy and vision. The poll was simple: for each topic, Clinton was on the left, and Trump was on the right. Subjects were told to mark an X near the candidate they thought showed greater aptitude for the trait; the closer the X to the candidate, the greater the subjects deemed his or her aptitude. If the candidates seemed equal in a trait such as charisma or passion, subjects could put the X right in the middle.
The twist was that, after people jotted down their answers, the researchers performed a sleight of hand: they swapped the real poll results for more moderate versions. (In the first study, the polling was done in person, so researchers literally filled out fake answers in a style that matched each subject's handwriting as closely as possible. In the second, an online portal performed the same swap automatically via software.)
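The article doesn't describe the software's actual logic, but the automated version of the swap can be imagined as a simple transformation: take each mark's position on the scale and pull it partway toward the neutral midpoint. This is a hypothetical sketch, not the researchers' code; the scale range, the `pull` fraction, and the function name are all assumptions for illustration.

```python
def moderate(rating, midpoint=50.0, pull=0.5):
    """Nudge a slider rating toward the neutral midpoint.

    rating:   where the subject's X landed, on a 0-100 scale
              (0 = strongly Clinton, 100 = strongly Trump)
    pull:     fraction of the distance to the midpoint to remove
              (an assumed value; the study doesn't report one)
    """
    return rating + (midpoint - rating) * pull

# A strongly pro-Trump mark at 90 is softened to 70.
print(moderate(90))   # 70.0
# A mark already at the midpoint is left untouched.
print(moderate(50))   # 50.0
```

The key property is that the swap is symmetric: partisan marks on either end are pulled equally toward the center, which matches the researchers' finding that the shift was nonpartisan.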
So what happened when people were faced with this infographic lie about their own results? Most people didn't recognize it as a lie at all: 94% of respondents accepted the fudged responses. Not one person in the face-to-face trial spotted the subterfuge, though some people in the computer version suspected it. Researcher Jay Olson suggests this may be because people expect computers to make errors more readily than they expect other people to.
That's the less important finding, though. The more striking one is that as many as 94% of respondents actually softened their views when examining their false results. In a well-documented phenomenon known as choice blindness, respondents rewrote their own convictions when faced with a believable lie about their opinions, justifying these fictional, moderate results through rationalization.
“We’d have a guy wearing a red [MAGA] hat, temporary tattoo, a Trump shirt, and a Trump flag,” says Olson. “[After seeing false results], he’d say, ‘Trump has some downsides! And I see Clinton has some upsides as well.’ [We’d hear] these more reasoned views.”
The trend toward more centrist justifications was entirely nonpartisan. Researchers saw similar attitude shifts among both liberals and conservatives when subjects were looking at these phony visuals. With the right stimulus, everyone trended toward the middle.
So what does it all mean? “The expression of our attitudes is a lot more malleable than we think it is,” says Olson, who believes the findings have practical implications. He believes the way we design political data visualizations or phrase political polls can have a measurable impact on opinion, at least in that moment.
“I think sometimes we have a folksy psychological idea that if you ask someone their attitude or political preference, they’re expressing a raw preference they have in their brain somewhere,” says Olson. “But we find a lot of this depends on the context itself, possibly the design of the whole thing.”
When I suggest to Olson that a platform such as Twitter—which seems to promote a partisan divide through the tweets it algorithmically promotes, along with the terse, back-and-forth nature of the service's character limit—could be affecting our attitudes, he agrees. "If Twitter is going for engagement, and if people are more engaged when they're expressing these political views, you can see how this context would change their expression of these views—and probably the views themselves," he says. In other words, when you see that promoted tweet bashing Trump in your feed, it might be shaping your attitude as much as it reflects it.
Olson isn’t left despondent by his team’s findings. “One of the hopeful messages we get from our study is, it’s possible people are less polarized than they think everyone else is,” says Olson. “If you give them some space, [and] you give them false feedback, they’re more open-minded.”
Never mind the irony that in the age of misinformation, all it takes is the right lie to make us all a little more willing to bury the hatchet somewhere in the middle.