
Telling good polls from bad

Virtually all polls fall somewhere in the middle of the continuum that runs between completely trustworthy and utterly flawed.
Source: National Journal

Earlier this week, the managing editor of a prominent newspaper e-mailed me a tough question: "Given that you take great pains to explain the differences between pollsters... why do you announce and link to the results of all pollsters without regard to their methods and transparency?"

While his question pertained to my Web site, Pollster.com, it touches on a broader question that journalists, political professionals and ordinary news junkies grapple with: What is a good poll? How do you tell a good poll from a bad one? That question can be surprisingly difficult to answer, even for a professional pollster.

Unfortunately, when it comes to pre-election surveys that aim to measure the preferences of "likely voters," pollsters rely as much on art as science. No two pollsters agree on the best means of identifying and "screening" likely voters. We disagree about whether we should rely on procedures that define the likely electorate based entirely on the responses to survey questions or whether the pollster should make a priori judgments about the geographic or demographic composition of the likely electorate.
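To make that first approach concrete, here is a minimal sketch of a likely-voter screen built entirely from survey responses. The questions, point values and cutoff are hypothetical, invented for illustration rather than drawn from any pollster's actual model.

    # Hypothetical likely-voter screen built only from survey answers.
    # The questions, point values and cutoff are illustrative; they are
    # not any real pollster's model.

    def likely_voter_score(respondent: dict) -> int:
        """Sum points for answers that suggest the respondent will vote."""
        score = 0
        if respondent.get("registered"):
            score += 1
        if respondent.get("voted_last_election"):
            score += 1
        if respondent.get("interest") == "very interested":
            score += 1
        if respondent.get("intends_to_vote") == "definitely":
            score += 1
        return score

    def is_likely_voter(respondent: dict, cutoff: int = 3) -> bool:
        """Classify as a likely voter if the score clears the cutoff."""
        return likely_voter_score(respondent) >= cutoff

    # This respondent scores 3 of 4 points and passes the screen.
    example = {
        "registered": True,
        "voted_last_election": True,
        "interest": "somewhat interested",
        "intends_to_vote": "definitely",
    }
    print(is_likely_voter(example))  # True

A pollster taking the second approach would instead adjust the sample so that its demographic or geographic mix matches a judgment about what the electorate will look like, regardless of how individual respondents answer such questions.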

We have heated debates about the best way to draw our samples: Should we begin with a sample of adults who have working phones (achieved by randomly generating the final digits of known telephone number exchanges), or should we sample from registered-voter lists, which allow for a more accurate identification of actual voters but miss those whose phone numbers are unlisted?
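As a rough illustration of the first option, random-digit dialing starts from known area-code-and-exchange combinations and fills in the remaining digits at random, which is how it reaches unlisted numbers as readily as listed ones. The prefixes in this sketch are invented.

    import random

    # Illustrative random-digit-dialing sketch: start from known
    # area-code/exchange prefixes and randomize the final four digits.
    # The prefixes below are invented for the example.
    KNOWN_EXCHANGES = ["202-555", "703-555", "301-555"]

    def rdd_sample(n, seed=0):
        """Generate n phone numbers by appending random final digits."""
        rng = random.Random(seed)
        numbers = []
        for _ in range(n):
            prefix = rng.choice(KNOWN_EXCHANGES)
            suffix = rng.randint(0, 9999)
            numbers.append("%s-%04d" % (prefix, suffix))
        return numbers

    print(rdd_sample(5))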

Pollsters disagree about the merits of automated surveys. Some argue that substituting a recorded voice for a live interviewer compromises random selection, especially when it comes to picking a respondent within a sampled household. Proponents of automated surveys claim, with empirical support, that the automated method yields a more accurate selection of "likely voters" and a more accurate reading of vote preferences by better simulating a secret ballot. Unfortunately, these various disagreements limit our ability to provide a neat checklist of dos and don'ts that would easily differentiate "good" polls from the "bad."

What about measurements of accuracy that compare poll results with election outcomes? Unfortunately, these measurements can also be frustrating for those hoping to make decisions about which polls to trust. As my Pollster.com colleague and University of Wisconsin professor Charles Franklin likes to say, this task is more complicated than compiling "a list of good polls that are always close to right and bad polls that are always closer to wrong." Whatever our judgments about the underlying methodology, polls that use "good" methods can be wrong (as compared to election results) and those that use "bad" methods are often right.

The process of measuring accuracy is itself a source of some debate among pollsters. The firm SurveyUSA, a well-known provider of automated polls to local television news broadcasts, has long provided pollster "report cards" that compare the result of the final poll by each organization to the election result. That approach mixes "final" polls conducted in the last 72 hours before voting with some conducted a week to 10 days before an election. Is that a fair approach?

I put that question to Stanford University professor Jon Krosnick. "Most election pollsters," Krosnick says, "believe that surveys done long before Election Day are not intended to predict election outcomes, but they would agree that final pre-election polls should predict election outcomes well." As such, he would measure accuracy only for polls conducted within three days of an election, and only for pollsters with at least a handful of such surveys, to avoid accuracy scores that are especially good or bad by "chance alone."
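To see how such a scoring rule might work in practice, here is a hedged sketch: it keeps only polls completed within three days of the election and measures how far each poll's candidate margin fell from the official result. The pollster names, dates and percentages are all hypothetical.

    from datetime import date

    # Hypothetical accuracy check: keep only polls that ended within
    # three days of the election, then compare each poll's margin
    # (candidate A minus candidate B) to the official margin.
    # Every name, date and percentage below is invented.

    ELECTION_DAY = date(2006, 11, 7)
    OFFICIAL_MARGIN = 4.0  # candidate A won by four points (hypothetical)

    polls = [
        {"pollster": "Firm X", "end_date": date(2006, 11, 5), "a": 51, "b": 46},
        {"pollster": "Firm Y", "end_date": date(2006, 10, 29), "a": 48, "b": 47},
        {"pollster": "Firm Z", "end_date": date(2006, 11, 6), "a": 49, "b": 47},
    ]

    def margin_error(poll):
        """Absolute gap between the poll's margin and the official margin."""
        return abs((poll["a"] - poll["b"]) - OFFICIAL_MARGIN)

    final_polls = [p for p in polls if (ELECTION_DAY - p["end_date"]).days <= 3]

    for poll in final_polls:
        print(poll["pollster"], margin_error(poll))
    # Firm X misses by 1 point, Firm Z by 2; Firm Y is excluded as too early.

Averaging such errors over at least a handful of final polls per firm, as Krosnick suggests, keeps a single lucky or unlucky survey from dominating the grade.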

Of course, restricting accuracy measurements this way will leave out many media pollsters that typically conduct longer, in-depth studies as the basis for analytical stories over the final week of the campaign.

Back to the question posed at the top of this column: One reason Pollster.com does not provide more specific guidance about individual polls is that virtually all polls fall somewhere in the middle of the continuum that runs between completely trustworthy and utterly flawed. Some may be strong on disclosure but weak in other ways. Some excel on accuracy scorecards but provide little beyond horse-race numbers. Not all "likely voter" models are right for all applications. And all of these judgments -- and others like them -- are subjective. Other pollsters may reach different conclusions.

Our ultimate goal is to provide a variety of tools and empirical measures of accuracy, disclosure, perceived reputation and so on that let poll consumers reach their own conclusions. As the editor's question implies, however, we still have a lot of work to do.