2012: The Anti-Data Election

Our political leaders manipulate data to win over voters and breed confusion in the process. But it doesn’t have to be that way. We can protect ourselves from being manipulated by bad data.


President Obama decisively won re-election on Tuesday, and almost immediately the questions started rolling in. Those on the right couldn’t comprehend how a president could be re-elected with such unprecedentedly bad economic numbers: the unemployment rate when he took office was 7.8%, and today we’re still sitting at that same miserable 7.8%. Those on the left were equally incredulous that so many Americans couldn’t see the progress right in front of them. President Obama has been steering a slow recovery, they say, taking us from 10% unemployment in his first term to 7.8% today.


But wait: both sets of statistics can’t be true. We can’t be simultaneously stagnant and improving. Yet both are correct; it just depends on which start date you choose. And that is precisely how both parties manipulated statistics during this election to set their preferred narratives, sowing confusion in the minds of millions of voters.
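The start-date trick is nothing more than arithmetic: the same unemployment series tells opposite stories depending on the baseline. A minimal sketch using the rates cited above (the specific month labels are illustrative):

```python
# The same unemployment series reads as "stagnant" or "improving"
# depending on the baseline you choose. Rates are those cited above;
# the month labels are illustrative, not an official data pull.
rates = {
    "inauguration (Jan 2009)": 7.8,    # when Obama took office
    "recession peak (late 2009)": 10.0, # worst point of the downturn
}
current = 7.8  # the figure both campaigns argued over in fall 2012

for baseline, rate in rates.items():
    change = current - rate
    print(f"vs. {baseline}: {change:+.1f} points")

# Measured from inauguration: +0.0 points -- "stagnant."
# Measured from the peak: -2.2 points -- "improving."
```

Both computations are correct; only the chosen baseline differs, which is exactly the manipulation described above.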

Statistical manipulation is hardly a new trend. Every election cycle is jam-packed with contradictory data and inconsistent indicators. Mark Twain reserved special ire for the type of lie that is statistics. But in the struggle to define the narrative of the last four years–triumph or tragedy–stats have taken a particularly hard hit in the past few months.

There are two classic sins of data manipulation that were at work in this campaign: The first is that the Obama and Romney campaigns used different metrics, scales, and data points to discuss their respective track records and plans for the future, making it nearly impossible to compare apples to apples. The second is that both camps used metrics that lack context and relevance.

The debate over the two candidates’ energy policies (which involved both men trying to brag about how much oil they wanted to drill) highlights the first sin nicely: President Obama defended his record on domestic oil production by saying, “We’ve built enough [oil] pipeline to wrap around the entire Earth once.” Governor Romney responded that oil production is down 14% on federal land this year, and that gas production was down 9%, due to the President’s cuts on licenses and permits for drilling on federal lands and waters. But what does it all mean? Why are we comparing miles of pipeline circumnavigating the Earth to oil production on federal lands? If the goal is to figure out whether we’re becoming more energy independent, why aren’t we comparing the ratio of domestic oil production to international imports between 2008 and 2012?

Context is also critical to understanding the truth behind the data. For instance, Mitt Romney took to claiming that the actual unemployment rate is 11%–the rate we would see if labor-force participation hadn’t declined over the last four years, a decline he attributes to the failure of Obama’s policies. That claim earned him two Pinocchios from the Washington Post for missing context: demographic trends like the retirement of baby boomers are estimated to account for over half of the decline in the labor-force participation rate, according to the Federal Reserve Bank of Chicago.

The bottom line: Poor data and misleading metrics are bad for everyone except the politicians who rely on them to set their narratives and confuse voters about what is “true.” And while this election is now over, we as voters must begin demanding more from our elected leaders if we want a more honest discussion and a fairer use of facts in the future.


Here’s what we can do to stop being manipulated by bad data:

1: Ask: “so what?”

We’ve got to ask the “so what?” question more often. When presented with facts and figures, asking “so what?” forces our leaders to use data that reflect important changes. For example, what’s the “so what?” of unemployment data? We’re trying to drive toward the outcome of financially self-sufficient, healthy, and happy American families. And yet the unemployment rate doesn’t capture critical problems such as growing part-time-only work and the replacement of good jobs (like ones that provide health care) with worse ones. So what might get at the outcome we want? While no perfect indicator yet exists, the Gallup Organization’s Well-Being Index is a step in the right direction. We should be demanding data that speak to the problems we want to solve.

2: Demand consistency

How can we judge progress when each candidate uses different indicators of success? We must ask our leaders to agree upon a common set of measures and data points that reflect the public’s common interest. Then, we’ve got to size up both past results and future promises against the same metrics. And we must also judge competing parties and candidates against a common scorecard. For example, both parties will unequivocally agree that energy independence is a priority. Although they may disagree on the policies and tactics to achieve this goal, they should both be able to envision an energy-independent America and agree upon the few measures that track our progress toward it (hint: it’s neither miles of pipeline nor rate of production on federal lands).

3: Seek context

Candidates need to put metrics in context and tell the whole story. At the same time, we voters must seek more information. It’s not enough to know the size of the deficit under Obama or to look at the number of jobs Romney outsourced in the private sector. These data points require a deeper look at the data, and we won’t get it until we ask for it.

The good news is that election night took a lot of the hot air out of poll manipulation. Leading up to the election, every pundit and party was frantically trying to spin polling data–regardless of its integrity–in their direction.

And then there was stats guru Nate Silver. Silver’s complex model, which ignores campaign spin and ideas about “momentum” in favor of state-by-state arithmetic, accurately predicted 49 of the 50 state outcomes in the 2008 Presidential election and the sweeping trend leading up to the 2010 Republican takeover of the House. Silver took heat for predicting a decisive victory for Obama this year–giving him a 90% chance of winning on election eve. Among the many pundits who dismissed Silver, MSNBC host Joe Scarborough called him a “joke,” leading Silver to announce a $2,000 charitable bet with Scarborough over the election’s outcome.


Not only did Silver’s model predict the overall win; this time it nailed the outcomes in all 50 states, including the nail-biter in Florida, where he gave Obama a slight edge. Somewhere, Mark Twain is smiling.

This article was written with research support from Mission Measurement Associate Sarah Garner.