Deciding who's a likely voter

This week, I want to look at concerns that the "likely voter" models used by pollsters might miss a flood of new and younger voters that some speculate may turn out this year.
Source: National Journal

Last week we looked hard at the potential impact of "cell-phone-only" voters missing from many poll samples and found some evidence that their absence might suppress Barack Obama's margins by a point or two. This week, I want to look at concerns that the "likely voter" models used by pollsters might miss a flood of new and younger voters that some speculate may turn out this year.

Political polling is often a meld of science and art, but nowhere more than in the selection of "likely voters." The basic issue is simple enough: Most pollsters begin by calling a random sample of adults (with telephone service). Some will begin with a sample drawn from a list of registered voters. But all face the same challenge: Not all adults, and not even all registered voters, actually turn out and vote.

Four years ago, according to the statistics compiled by professor Michael McDonald of George Mason University, 60.1 percent of all eligible adults in the United States voted. That vote amounted to roughly three-quarters (74 percent) of those registered to vote just before the 2004 elections.
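Those two figures also imply a third: the share of eligible adults who were registered. A quick arithmetic check (my back-of-the-envelope calculation, not a figure from McDonald's data):

```python
# Arithmetic check on the 2004 turnout figures cited above:
# 60.1% of eligible adults voted, and that equaled roughly 74% of
# registered voters -- implying about 81% of eligible adults were registered.
turnout_of_eligible = 0.601
turnout_of_registered = 0.74

registered_share_of_eligible = turnout_of_eligible / turnout_of_registered
print(round(registered_share_of_eligible, 3))  # -> 0.812
```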

Since George Gallup and his colleagues first started trying to identify likely voters over 50 years ago, pollsters have noted that doing so could substantially improve their forecasts. The science of Gallup's approach involved dispatching his interviewers to voter registrar offices after the election to see which respondents had actually voted. They could then check which survey measures best predicted actual turnout.

The polling pioneers found that voters often overstated their intent when asked directly if they were likely to vote. So Gallup developed an index based on other measures that correlated with voting, such as a self-reported history of past voting, high interest in the campaign and awareness of the location of their polling place. They learned that they could improve the accuracy of their forecast by selecting only the voters who scored highest on that index.
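The cutoff-index idea is simple enough to sketch in a few lines of code. To be clear, the questions, point values, and cutoff below are my own illustrative inventions, not Gallup's actual scale:

```python
# A hypothetical likely-voter index in the spirit of the Gallup approach:
# score each respondent on turnout-correlated measures, then keep only
# the highest scorers. Items, weights, and cutoff are illustrative.

def likely_voter_score(respondent):
    """Sum one point per affirmative answer on turnout-related items."""
    items = ("voted_in_past", "high_interest",
             "knows_polling_place", "says_will_vote")
    return sum(1 for item in items if respondent.get(item))

def select_likely_voters(sample, cutoff=3):
    """Keep only respondents at or above the cutoff score."""
    return [r for r in sample if likely_voter_score(r) >= cutoff]

sample = [
    {"voted_in_past": True, "high_interest": True,
     "knows_polling_place": True, "says_will_vote": True},
    {"voted_in_past": False, "high_interest": True,
     "knows_polling_place": False, "says_will_vote": True},
]
print(len(select_likely_voters(sample)))  # -> 1
```

The point of the index is that the first respondent qualifies on the strength of several correlated measures, while the second is screened out despite saying he intends to vote.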

Today we see as many approaches to selecting or modeling likely voters as there are pollsters. Gallup and others, mostly national news media pollsters, continue to use the traditional Gallup model or a variant (though Gallup has added a new twist this year -- more below).

Most, however, follow a simpler procedure, using screen questions that ask voters if they are registered and likely to vote (or, once early voting gets under way, whether they have voted already). Some will try to use answer scales on these questions to narrow their "likely voter" universe. Others will include additional questions -- on past voting or interest in the election -- to screen out those who are not likely voters.
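That simpler screening procedure might look something like this. Again, the field names and the scale cutoff are my illustrative assumptions, not any particular pollster's instrument:

```python
# A simple screen in the spirit described above: keep respondents who say
# they are registered and who either have already voted or rate their
# chance of voting high on a 1-to-10 scale.

def passes_screen(respondent, scale_cutoff=8):
    """Registered, plus already voted or a high self-rated chance of voting."""
    if not respondent.get("registered"):
        return False
    return (respondent.get("already_voted")
            or respondent.get("chance_of_voting", 0) >= scale_cutoff)

respondents = [
    {"registered": True, "already_voted": True},
    {"registered": True, "chance_of_voting": 9},
    {"registered": True, "chance_of_voting": 5},   # screened out
    {"registered": False, "chance_of_voting": 10}, # screened out
]
likely = [r for r in respondents if passes_screen(r)]
print(len(likely))  # -> 2
```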

Another important difference dividing pollsters is the degree to which they let their measures define the demographics (and sometimes the partisanship) of likely voters. Some take a "hands off" approach that involves weighting the sample of adults to match U.S. Census demographic estimates and then letting the demographics and party leanings of the likely voter population fall where they may.
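The "hands off" weighting step amounts to simple post-stratification: scale each respondent up or down so the sample's demographic mix matches the Census targets. Here is a minimal sketch, with invented cell shares for illustration:

```python
# Post-stratification sketch of the "hands off" approach: weight the adult
# sample so its demographic mix matches Census-style targets, then apply
# the likely-voter screen with no further adjustment. Shares are invented.
from collections import Counter

def poststratify(sample, targets, key):
    """Return one weight per respondent: target share / sample share."""
    counts = Counter(r[key] for r in sample)
    n = len(sample)
    return [targets[r[key]] / (counts[r[key]] / n) for r in sample]

sample = [{"age": "18-29"}, {"age": "30+"}, {"age": "30+"}, {"age": "30+"}]
targets = {"18-29": 0.20, "30+": 0.80}  # hypothetical Census shares

weights = poststratify(sample, targets, "age")
print([round(w, 2) for w in weights])  # -> [0.8, 1.07, 1.07, 1.07]
```

Note that the weights sum back to the sample size; the under-30 respondent is weighted down here only because this tiny made-up sample happens to overrepresent that group.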

Others choose to manipulate the demographics of likely voters more directly, weighting their samples to match demographic "targets" gleaned from various survey measurements in past elections. Sometimes their "likely voter" process includes weighting by party identification -- a practice that some pollsters embrace and others condemn.

Which brings us to this year's election and the theory that pollsters may be missing new voters. Setting aside the cell phone issues I covered last week, we have good reason to believe that most pollsters are using methodologies sufficient to capture any new wave of voters that might materialize.

First, virtually all of the public polls still begin with a random digit dial (RDD) sample of all adults and then ask voters if they are registered to vote, intend to vote and so on. By starting this way, every new registrant has a chance of being included in the survey.

Second, although disclosure of these details could be better, I can identify relatively few pollsters who continue to put great emphasis on reports of past voting in their likely voter models. Jay Leve, CEO of SurveyUSA, said that his company stopped asking screen questions about past voting six years ago. Others, including Quinnipiac University, Research2000, Diageo/Hotline and Selzer & Company, told me they're not screening based on past vote at all this year.

Several organizations, including the Pew Research Center, Newsweek/Princeton Survey Research Associates International and CBS News, said that their models essentially discount past voting reports for younger voters or put less emphasis on past voting generally. Scott Keeter, director of survey research at Pew, said that they have "added more questions that measure interest and engagement."

And finally, the Gallup Organization is now reporting two different likely voter models, their traditional model plus an "expanded" model based only on current vote intention. Their two models show how important the differences can be: Over the last two weeks, Gallup shows Obama leading McCain by an average of 4 percentage points on their traditional model, but by more than 7 points on their expanded model.

There are exceptions. Generally speaking, the pollsters who either continue to use past voting as part of their likely voter process or weight by party identification to the levels seen four years ago tend to produce results for the presidential race that are closer than average. But most pollsters are defining the likely electorate in ways that should capture any increase in turnout.