This week back in 2011, during a heated GOP race for the presidency, the leading candidate in the polls was Newt Gingrich. The winner of the Iowa caucus in 2012, which kicks off the primary season, was Rick Santorum (remember him?). Four years before that, the leading national candidate was Rudy Giuliani and the Iowa caucus was clinched by Mike Huckabee.
We all complain about polls: there are too many of them, and they dominate election coverage. But one of the best reasons to dismiss them is that they are notoriously unreliable at predicting actual results, especially this early in the campaign.
Right now, Donald Trump dominates national polls of likely GOP voters, and Ted Cruz leads in polls of likely GOP voters in Iowa. It is obvious to most polling experts that such polls have no serious predictive value, at least not until a few days before the actual voting, yet the media and an overabundance of political pundits announce the results of numerous such polls on an almost hourly basis, as if they had some substantive value.
What the Polling Experts Say
We talked to polling experts Al Tuchfarber and Cliff Zukin to explain why such polls are so unreliable and to set the record straight on pre-presidential election survey methodologies and results. Tuchfarber is professor emeritus of political science at the University of Cincinnati and was the founder and director of the UC Institute for Policy Research from 1971 to 2004. Zukin is professor of political science and public policy at the Eagleton Institute of Politics and the Edward J. Bloustein School of Planning and Policy, Rutgers University, and a past president and current member of the Executive Council of the American Association for Public Opinion Research (AAPOR).
“These pre-election polls are demonstrably inaccurate right now,” Tuchfarber says. “When you get to the last few days (pre-Iowa caucus and New Hampshire primary voting) they are close but not perfect.”
“It is not a great system from the get-go, and the role of polls in setting expectations is one of those things that happens but it is not very healthy,” Zukin adds.
Zukin recently had a paper published on the AAPOR site, titled “A Primer on Pre-election Polls: Or Why Different Election Polls Sometimes have Different Results,” and Tuchfarber recently authored a guest column for Sabato’s Crystal Ball—a popular site out of the University of Virginia’s Center for Politics that keeps close tabs on presidential elections—titled “60 Days Until Iowa: Are the Polls Predicting the Winners?”
Tuchfarber wrote that we should definitely not trust the predictive power of either the national polls or state polls at this point in the process, adding that “we can figure that perhaps anywhere from one-half to two-thirds of the interviewed respondents in these polls won’t actually vote in a primary or caucus.”
Zukin’s primer, synthesized briefly below, is a laundry list that tells us in no uncertain terms why pre-presidential election polling is more of an art than a science:
Sampling error percentages (the reported margin of error of plus or minus X percentage points) are misleading because the total error is typically larger than the stated margin suggests, “and is one of the major reasons why polls may differ, even when conducted around the same time.”
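To see what the reported number does and does not cover, here is the standard textbook margin-of-error calculation for a simple random sample. It is a sketch for illustration only; as Zukin notes, it captures sampling error alone and ignores non-response, coverage gaps, and likely-voter screening.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    Uses the worst-case proportion p = 0.5. This covers sampling
    error only; other sources of survey error are not included.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # -> 3.1 (percentage points)
```

Note also that the margin applies to each candidate's share separately; the gap between two candidates carries roughly double that uncertainty, which is one reason two same-week polls can legitimately disagree.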
The most common modern survey modes are telephone (landline and cell) and Internet/online, and all of them have drawbacks for polling accuracy. Telephone surveys frequently use random digit dialing (RDD) to ensure distribution by geography, or registration-based sampling (RBS) to draw from public lists of registered voters. Both methods miss a substantial portion of the electorate.
Regarding cell phones, and smartphones in particular, more people than ever use them (a 2015 Pew Research Center study found that 64% of American adults own smartphones), but federal law prohibits the use of autodialers to call cell phones; they must be hand-dialed. This makes cell phone surveying more expensive than landline surveying, which in turn leads some survey organizations to skimp on the number of cell phone interviews.
Another survey mode uses recorded voice prompts instead of live interviewers. Known as interactive voice response (IVR), or robo-polls, these are also barred from auto-dialing cell phones, making it cost-prohibitive for them to represent voters accurately. Additionally, there is no way of knowing who actually answers a robo-poll.
Internet/online polls also have major issues, namely that pollsters have not yet figured out how to obtain a representative sample of Internet users, so most online polls rest on non-probability samples.
Timing and field procedures also come into play, along with question ordering and wording. “Polls don’t predict; they describe the situation at the moment,” Zukin writes. Polls taken over different field periods, say, one day versus seven days, yield different results. A news event can break at any point during a field period, shifting responses mid-survey and making results harder to compare. In addition, the ways in which questions are asked, as well as the order they are asked in, affect poll results. For example, “a line of questioning on the willingness to vote for a woman as president could lead to an overstatement of intentions to vote for Clinton and Fiorina in subsequent questions.”
Weighting and determining probable voters also need to be considered. Weighting adjusts a sample so that its demographics match census benchmarks, while likely-voter models try to identify who will actually turn out; neither is an exact science. “Even the best polls cannot interview a perfect sample, due to non-response and non-coverage, among a variety of reasons,” Zukin explains. There is also a general problem inherent to these polls: people over-report their intention to vote. “When respondents’ self-report of intentions in pre-election polls have been compared to actual turnout, we have historically found a large over-report of voting intentions.” Tuchfarber succinctly adds that getting a good sample of people who will actually vote is “the pollster’s nightmare.”
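The mechanics of weighting can be sketched in a few lines. The numbers below are entirely hypothetical: each respondent group gets a weight equal to its census share divided by its share of the actual sample, so under-represented groups count for more. The example also shows why weighting changes a poll's topline result.

```python
# Hypothetical post-stratification by age group (all figures invented).
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # population targets
sample_share = {"18-34": 0.18, "35-54": 0.32, "55+": 0.50}  # who answered the phone

# Weight = target share / achieved share; young respondents count extra.
weights = {g: census_share[g] / sample_share[g] for g in census_share}

# Hypothetical candidate support within each age group.
support = {"18-34": 0.55, "35-54": 0.48, "55+": 0.40}

raw = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(round(raw, 3), round(weighted, 3))  # -> 0.453 0.473
```

The two percentage points of difference between the raw and weighted toplines come entirely from the correction, which is why the quality of a pollster's weighting assumptions, and of its likely-voter model, matters as much as the raw interviews.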
The only way for pollsters to possibly overcome that nightmare is to have what Tuchfarber refers to as “tight screening,” which entails interviewing a high number of possible voters to get to the people who will actually vote. However, this is an expensive process overall. “Each interview costs you an arm and a leg,” he says. “It costs so much to do these well, and they have fewer [financial] resources to do it,” Zukin says. “For most organizations it does not make good economic sense to put in all the money that they need to in order to do it right. That is why there are more of these Internet opt-in polls with non-probability samples out there because they cost almost nothing to do.”
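Tuchfarber's cost argument can be made concrete with back-of-the-envelope arithmetic (all figures below are hypothetical, for illustration only): if, as he estimates, one-half to two-thirds of respondents will not actually vote, then reaching a fixed number of genuine likely voters requires screening two to three times as many people.

```python
# Hypothetical cost of "tight screening" a poll of likely voters.
target_likely_voters = 800
likely_voter_rate = 1 / 3        # assume only a third of respondents will vote
cost_per_interview = 15.00       # hypothetical cost per completed interview

interviews_needed = target_likely_voters / likely_voter_rate
total_cost = interviews_needed * cost_per_interview
print(int(interviews_needed), total_cost)  # -> 2400 36000.0
```

Tripling the interview count triples the field cost, which is exactly why, as Zukin says, cheap opt-in Internet polls with non-probability samples proliferate instead.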
We can see how this is playing out in the October announcements from two highly respected presidential election pollsters, Gallup and Pew Research, that both are substantially cutting back on polling for the current presidential race.
Still, there is a group of pollsters whose methodologies use tighter screening than most; notably, Tuchfarber cites the Monmouth University Polling Institute and Selzer & Company, the public opinion firm behind the recent Des Moines Register/Bloomberg Politics Iowa Poll. Zukin also mentioned Monmouth, as well as the Quinnipiac University polls. For more on this, and for plenty of additional viewpoints on pollsters in general, see Nate Silver’s FiveThirtyEight pollster ratings, last updated in September 2014, which analyze the historical accuracy and methodologies of polling organizations.
Finally, we need to keep in mind that the results of the Iowa caucus vote, as well as the New Hampshire primary vote on February 9, have a very strong cascading influence on the next contests: the South Carolina Republican primary on February 20, the Nevada caucus on February 23, and the South Carolina Democratic primary on February 27. Super Tuesday this year, when the largest number of states hold primary elections, is March 1. The problem is that voters in both the first caucus and the first primary are not representative of mainstream America: roughly 60% of Republican Iowa caucus voters identify as evangelical Christians, and New Hampshire primary voters, both Republican and Democratic, are overwhelmingly non-Hispanic white (an estimated 94%).