Exit poll potpourri

Exit polls — and their shortcomings — once again grabbed the spotlight after last week's Pennsylvania primary, but they're still the most reliable way to find and analyze information about a particular group of voters.
Source: National Journal

Exit polls and their shortcomings were once again in the spotlight following the Pennsylvania primary, raising some familiar and some not-so-familiar questions. Let's consider three of them.

Just what did the "exit polls" show? At 5:35 p.m. on the day of Pennsylvania's primary, shortly after the exit polling operation released data from its "quarantine room" to network analysts and producers, the National Review's Jim Geraghty reported that "the exits" showed Barack Obama leading Hillary Rodham Clinton by 5 points (52 percent to 47 percent). The next day on MSNBC's "Morning Joe" (according to several viewers), correspondent Andrea Mitchell confirmed that early exit polls showed Obama winning by a 5-point margin.

But other sources reported late-day exit poll estimates showing Clinton slightly ahead. At about 5:30 p.m., Matt Drudge reported the "EXIT POLL DRAMA" of a 4-point Clinton lead (52 percent to 48 percent). That margin matches the one Bard College's Mark Lindeman extrapolated from official tabulations posted on network Web sites just after 8:00 p.m.

Later that night, CNN's Candy Crowley confirmed to viewers that early exit polls had shown a 52 percent to 48 percent Clinton margin. A few days later, adding a bit more precision, columnist Robert Novak reported that late-afternoon exit polls had shown Clinton with "a lead of 3.6 points."

Which was right? Both!

For each election, the pooled network exit poll and vote counting operation creates roughly 10 vote estimates for each race, updating them gradually as more data become available. Most of these statistics are never broadcast or published. They exist to guide the analysts at the network decision desks who call winners and losers on election night.

Among the various numbers displayed on the analysts' computer screens is a set of pure, exit-poll-only estimates that vary slightly depending on their geographic weighting. According to my sources, these estimates showed Obama leading by margins of 3 to 5 percentage points as the polls closed.

The exit pollsters also compute a "composite" estimate that splits the difference between their best pure exit poll tally and an average of pre-election telephone polls. The composite estimate was the one that showed Clinton ahead by 4 points. This blend of interviews conducted at polling places and pre-election telephone surveys is intended as a hedge that will reduce errors when the exit poll interviews are off. The composite did just that in Pennsylvania but, unfortunately, not enough to prevent network anchors from mischaracterizing the race as "very competitive" and "too close to call" as the polls closed.
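To make the "splitting the difference" arithmetic concrete, here is a minimal sketch in Python. The networks' actual composite model and its weights are not public, so a simple 50/50 blend is assumed, and the margins below are invented for illustration rather than the actual Pennsylvania inputs.

```python
# Hypothetical illustration of a "composite" estimate. The networks'
# actual model and weights are proprietary; a simple average that
# "splits the difference" is assumed here, and the inputs are invented.

def composite_estimate(exit_poll_margin, pre_election_margins, exit_weight=0.5):
    """Blend the exit poll margin with the average pre-election poll margin.

    Margins are candidate A minus candidate B, in percentage points.
    """
    pre_election_avg = sum(pre_election_margins) / len(pre_election_margins)
    return exit_weight * exit_poll_margin + (1 - exit_weight) * pre_election_avg

# Illustrative only: an exit poll showing Obama +5 blended with
# pre-election telephone polls averaging Clinton +6 (i.e., Obama -6)
# produces a composite of Obama -0.5 -- a slight Clinton lead.
print(composite_estimate(5.0, [-7.0, -6.0, -5.0]))  # -> -0.5
```

Under a blend like this, any skew in the exit poll interviews carries only part of its weight into the final number, which is the hedging behavior described above.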

Did the skew to Obama indicate a "Bradley Effect"? Although the end-of-day exit polls have shown a consistent skew favoring Obama throughout the primary season, Robert Novak explained the Pennsylvania discrepancy as a return of "the dreaded 'Bradley Effect.'" Exit polls in 1982 showed Tom Bradley, the black mayor of Los Angeles, winning California's race for governor, even though he ultimately lost to Republican George Deukmejian. Novak quoted pollster John Zogby connecting the dots: "I think voters face to face are not willing to say they would oppose an African-American candidate."

The problem is that exit poll interviews are not conducted face to face. Interviewers recruit random respondents, hand them a "confidential" paper questionnaire, a pencil and a clipboard, and allow the voters to cast a "secret ballot" that they deposit into a "ballot box." The paper questionnaire mostly eliminates the presumed "social discomfort" that might lead some voters to be less than honest about their choices.

There is also another explanation for the exit polls' failure in 1982: Bradley received more votes from those who cast ballots at polling places but lost the election among absentee voters. The 1982 California exit polls made no effort to interview those early voters by telephone (something exit pollsters do now).

A better explanation for the Obama skew this year, as I summarized in more detail on Pollster.com, is a well-established problem: exit poll interviewers, who tend to be young, have difficulty completing interviews with older respondents. That problem, combined with the huge age gap in support between Obama and Clinton, most likely explains the persistent Obama skew in this year's exit polls.

Do these problems undermine the ability of exit polls to tell us who voted and why? Or to put it more bluntly, as Slate's Mickey Kaus did last week, "if the exit polls are this unreliable for press' result-predicting purposes, why aren't they also unreliable for all the scholarly purposes they are supposedly put to?"

The final exit poll tabulations we all scrutinize are statistically adjusted to match reality (or "weighted") on several levels: regionally, to match actual turnout, and demographically, to match the hand tallies interviewers keep of all the voters they attempted to interview (both those who filled out questionnaires and those who refused). Finally, the interviews are weighted so that the overall vote preference matches the official count.
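For readers who want to see how that kind of adjustment works mechanically, here is a minimal sketch of one standard weighting technique, raking (iterative proportional fitting). The variable names and target shares are hypothetical; the exit pollsters' actual procedure is more elaborate than this.

```python
# Minimal raking (iterative proportional fitting) sketch. The variables
# and target shares are hypothetical; real exit poll weighting uses more
# levels (region, demographics, official vote count) than shown here.

def rake(respondents, targets, n_iters=50):
    """Return weights so each variable's weighted shares match its targets.

    respondents: list of dicts, e.g. {"region": "Philadelphia", "age": "65+"}
    targets: {variable: {category: target_share}}, shares summing to 1.
    """
    weights = [1.0] * len(respondents)
    for _ in range(n_iters):
        for var, shares in targets.items():
            # Weighted total currently in each category of this variable.
            totals = {}
            for w, r in zip(weights, respondents):
                totals[r[var]] = totals.get(r[var], 0.0) + w
            grand = sum(totals.values())
            # Rescale each respondent toward the category's target share.
            for i, r in enumerate(respondents):
                weights[i] *= shares[r[var]] / (totals[r[var]] / grand)
    return weights

# Toy example: completed interviews over-represent older voters relative
# to the interviewers' hand tally of everyone they approached.
people = [{"age": "65+"}, {"age": "65+"}, {"age": "18-64"}]
print(rake(people, {"age": {"65+": 0.4, "18-64": 0.6}}))
# The lone 18-64 respondent gets a larger weight than the two 65+ ones.
```

The same logic, applied with real turnout and demographic targets, is how a survey that interviewed too few of one group can still produce tabulations that match the electorate.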

Does that procedure fix all problems? In some cases it may; in others it may not. But it helps to remember that all modern political surveys suffer from the same potential "non-response bias," and pollsters attempt to repair all surveys in essentially the same way.

In their raw, unweighted form, national telephone surveys of adults now routinely underestimate the percentages of younger adults, nonwhites, urban dwellers and those with lower incomes or without college educations. Pollsters use statistical weighting to correct such skews, and large-scale experiments continue to show that the results compare favorably to real-world outcomes.

All of which is an indirect way of saying that the final exit polls are no better and no worse than all other surveys on this score. All encounter non-response bias in some form that all try to "fix" with weighting. We should always be careful to understand the limitations of political surveys, but when it comes to explaining who voted and why, we still have no better resource than the exit poll.