
An msnbc.com guide to presidential polls

Ever wonder why polling results vary from survey to survey? Check out this polling primer, which explains how sampling and methodology can affect survey outcomes.
Source: msnbc.com staff and news service reports

A poll is a small sample drawn from a larger population, used to estimate something about that population. For instance, what percentage of people report that they will cast their ballots for a particular candidate in an election?

A sample reflects the larger number from which it is drawn.

Let’s say you have a perfectly mixed barrel of 1,000 tennis balls, of which 700 are white and 300 are orange. You take your sample by scooping up just 50 of those balls.

If your barrel were perfectly mixed, you wouldn’t need to count all 1,000 tennis balls — your sample would tell you that 30 percent of the balls were orange.
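To see how this works in practice, here is a small, purely illustrative simulation using the hypothetical barrel above. Repeated scoops of 50 balls produce estimates that cluster around the true 30 percent, though any single scoop can miss by a few points:

```python
import random

# Hypothetical barrel from the example above: 700 white balls and 300 orange balls.
barrel = ["white"] * 700 + ["orange"] * 300

# One "poll": scoop up 50 balls and estimate the orange share.
sample = random.sample(barrel, 50)
print(f"One scoop: {sample.count('orange') / 50:.0%} orange")

# Ten repeated scoops show how individual estimates scatter around the true 30 percent.
estimates = [random.sample(barrel, 50).count("orange") / 50 for _ in range(10)]
print("Ten scoops:", [f"{e:.0%}" for e in estimates])
```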

People, of course, are far harder to sample.

A tennis ball won’t spontaneously change its color from white to orange, but a would-be voter might suddenly change his or her intention about whether or not to vote, or change his or her preference for a particular candidate.

Sampling
Poll results vary based upon the people being surveyed.

Some pollsters use random-digit telephone dialing of a population in order to ensure that their sample is not skewed to one group.

Others buy lists of phone numbers from vendors.

In some cases, pollsters try to find the people most likely to actually vote. Using a “likely voter screen,” pollsters ask questions such as:
“How likely are you to vote in next Tuesday’s election?”
“Where is your voting location?”
“What precinct are you in?”
“What time will you vote?”

If a respondent doesn’t know where the polling place is, or what time the local polls close, he or she might be tossed out of the sample.
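As a rough illustration, a likely voter screen amounts to filtering respondents based on their answers to questions like those above. The respondent records and the screening rule below are invented for the example; actual screens vary from pollster to pollster:

```python
# Invented respondents and an invented screening rule, for illustration only.
respondents = [
    {"id": 1, "likely_to_vote": "very likely", "knows_polling_place": True},
    {"id": 2, "likely_to_vote": "very likely", "knows_polling_place": False},
    {"id": 3, "likely_to_vote": "not likely",  "knows_polling_place": True},
]

# Keep only respondents who say they will vote and can answer basic logistics questions.
likely_voters = [
    r for r in respondents
    if r["likely_to_vote"] == "very likely" and r["knows_polling_place"]
]
print([r["id"] for r in likely_voters])  # -> [1]
```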

In some cases, pollsters narrow their survey to registered voters, or those who have voted, say, in the last three out of four elections.

But David Paleologos, director of the Political Research Center at Suffolk University in Boston, said this technique can be used only in states where voters are required to be registered by a certain deadline before an election.

In states such as New Hampshire, where one can register on the day of the election, screening out Election Day registrants — people who haven’t voted before — would skew the sample.

Polls rely on those being interviewed to truthfully say whether or not they are registered to vote. A person who isn't registered to vote will taint the sample.

But according to Republican pollster Alex Lundry, there may be social pressure to claim to be a registered voter when in fact one isn’t. The likely voter screening questions attempt to account for this and sift out the non-registered.

“If ever there was an election in which black respondents felt a social desirability bias to over-report their registration, this would be it,” Lundry wrote this week on Pollster.com.

“A reasonable person would conclude that an unregistered African-American, called to participate in a survey, would feel some sort of pressure (either known or unknown) to say that he or she is indeed registered, and continue with the survey.”

It’s also important to note that sampling is different from poll to poll. For instance, Gallup’s sample of 1,000 people is different from NBC’s sample of 1,000. The pollsters are interviewing different people. And that can account for some variation.

But statistics gurus would still contend that if the sampling technique is kosher, then each of those national samples should accurately reflect the entire electorate at a given moment in time.
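The usual way to quantify that claim is the margin of error. A rough sketch, assuming a simple random sample and the standard 95 percent confidence level, shows why a poll of about 1,000 people is typically reported with a margin of roughly plus or minus 3 points:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of 1,000 respondents carries a margin of about +/- 3.1 points,
# so two well-conducted polls of the same race can legitimately differ by a few points.
print(f"{margin_of_error(1000):.1%}")  # ~3.1%
```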

Cell phones
There has also been controversy over the exclusion of cell phone-only voters in polling.

Recent statistics indicate that 14.5 percent of adults live in households with just cell phones — no landlines.

Some pollsters worry that landline-only polls are missing a huge, often younger, demographic, creating a “coverage bias.”

Until recently, pollsters had been reluctant to interview voters by cell phone, due in large part to legal and logistical hurdles.

Federal regulations require that pollsters hand-dial each number. They also must ask a series of questions to determine if those respondents could have been reached on a landline.

This process costs more, but several large survey organizations have recently taken it on. Groups that have included cell phone users in their national samples or conducted side-by-side tests include the Pew Research Center, Gallup, CBS News/New York Times, Time/SRBI, NBC News/Wall Street Journal, ABC News/Washington Post, and the Associated Press/GfK.

Still, these are relatively small tests, and the margins of error for these samples are likely larger than any possible effect.

Following an NBC/Wall Street Journal poll on Sept. 10, First Read included this in its analysis: "The poll included some cell phone surveys (we found no significant difference in cell phone respondents as we have from landline respondents)."

Methodology

Polls can be automated or administered by an actual, live person. Some argue that recorded-voice calls compromise random selection: no member of the household is randomly chosen to answer the questions; that task goes to whoever picks up the phone.

On the other side, automated polling proponents argue that the method allows for more honest answers, since they’re being given to an unresponsive, silent machine.

There’s also a distinction between national polls and smaller, state polls.

In practice, national polls use longer data collection periods and call back phone numbers if no one answers the first time. National pollsters are also more likely to select random respondents to answer the questions rather than simply asking whoever answered the phone.

But, as Election Day draws near, some of these national poll procedures are usually relaxed to accommodate the need for faster data turnaround.


Although national polls have a pretty good track record as harbingers of the eventual presidential election outcome, there’s a case to be made that good state polling in competitive states is more revealing.

After all, it is partly Barack Obama’s and John McCain’s own polling (for which they pay dearly) that convinced them to make visits in recent days to states where they think it’s still worth competing: Pennsylvania, Indiana, New Hampshire, Ohio, Nevada, Virginia, Florida, Missouri, and North Carolina.

You won’t find McCain campaigning in California or Obama in Texas — and yet Californians and Texans are included in national samples.

The Bradley-Wilder Effect
Named for the historic gubernatorial campaigns of Tom Bradley in California in 1982 and Doug Wilder in Virginia in 1989, the theory speaks to the possible inaccuracy of polling about African-American candidates.

The idea goes that some survey respondents may not be truthful about their feelings on race during polls. Afraid that honesty will make them sound bigoted, they claim to support the minority candidate when they actually do not.

According to a recent paper from Harvard post-doctoral fellow Daniel Hopkins, this apparent polling bias has largely disappeared over the past 10 years.

But journalist and former Democratic campaign aide William Bradley, who worked on the Tom Bradley campaign, wrote last week on The Huffington Post website that "the problem lay not with significant numbers of voters lying to pollsters, but with the nature of the (Bradley) campaign itself.”

Tom Bradley, the Democratic mayor of Los Angeles, ran a somewhat complacent campaign and lost to Republican George Deukmejian.

As the polls closed on election night, the Field Poll, based on exit poll interviews, projected that Bradley would easily beat Deukmejian. But he lost.

But William Bradley notes, “The Field Poll made two big projections based on its exit polling that fateful November night. Bradley as the next governor of California. And (Jerry) Brown as the next U.S. senator. But Brown lost, too.”

Recent examples
Polling can often be an accurate snapshot of Election Day outcomes. Other times, it can fail — miserably.

The most striking example in this election cycle came in New Hampshire, in the days leading up to that state’s Democratic primary.

Surveys had Obama in the clear lead. But in the end, Hillary Clinton won the state, 39 to 36 percent. Some possible reasons for the upset include last-minute Clinton traction, a loss in Iowa that turned her into the underdog, and the contrarian nature of some voters.

Another polling dispute was raised just a few weeks ago, when an NBC News/Wall Street Journal poll showed Obama with a slight lead over McCain, 48 to 46 percent.

Another large national poll was released on the same day, showing Obama with a much larger lead. The McCain campaign argued that Democratic respondents in that poll outnumbered Republicans by 16 points.

By comparison, this NBC News/Wall Street Journal poll had Democrats leading in party identification by just seven points.
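To see why the party-ID mix matters, here is a hypothetical back-of-the-envelope calculation. The support levels within each party and the sample compositions are invented for illustration and are not taken from either poll:

```python
# Invented support levels for Obama within each party-ID group (illustration only).
obama_support = {"Dem": 0.88, "Rep": 0.08, "Ind": 0.50}

def topline(party_mix):
    """Overall Obama share implied by a sample's party-ID composition."""
    return sum(share * obama_support[party] for party, share in party_mix.items())

# A sample where Democrats outnumber Republicans by 16 points vs. one where they lead by 7.
d_plus_16 = {"Dem": 0.43, "Rep": 0.27, "Ind": 0.30}
d_plus_7 = {"Dem": 0.38, "Rep": 0.31, "Ind": 0.31}

print(f"D+16 sample: {topline(d_plus_16):.0%}")  # about 55 percent
print(f"D+7 sample:  {topline(d_plus_7):.0%}")   # about 51 percent
```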

More important than any single sample is a consistent trend among several samples.

Look, for instance, at the Pollster.com roundup of all the publicly available polling data on the presidential race in Iowa, a state which George W. Bush narrowly lost in 2000 and narrowly won in 2004.

The trend line — based on 35 samples — shows that Obama has been ahead of McCain in that state since early this year. At no point has McCain led Obama in Iowa.
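One simple way to think about a trend line is as an average over the most recent handful of polls, which smooths out the bounce in any single sample. The margins below are invented for illustration; Pollster.com’s actual trend estimate uses a more sophisticated smoothing method:

```python
# Invented Obama-minus-McCain margins from a series of hypothetical Iowa polls.
margins = [4, 7, 2, 6, 9, 3, 8, 5, 11, 6]

def rolling_average(values, window=3):
    """Average each poll with the two before it to smooth single-poll noise."""
    return [round(sum(values[i - window + 1:i + 1]) / window, 1)
            for i in range(window - 1, len(values))]

print(rolling_average(margins))  # individual polls bounce around, but the average stays positive
```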

Msnbc.com's Tom Curry and National Journal's Mark Blumenthal contributed to this report.