
The '2008 turnout model'

The 2008 election turnout model isn't being used to skew 2012 polls. (Getty Images)

We talked the other day about poll denialism, and the notion, widely embraced among Republicans, that all major-outlet polls have been skewed as part of a conspiracy to boost President Obama over Mitt Romney. It's a pretty silly argument, but it's hard to avoid.

And while it's easy to mock those who feel the need to create their own reality, one aspect of conservatives' complaints seemed more plausible than the rest: the claim that pollsters are wrong to rely on the 2008 turnout model.

As the argument goes, the political landscape has changed quite a bit over the last four years, so there's no reason to assume this year's electorate will look nearly identical to 2008's. Relying on an outdated turnout model, then, necessarily leads to "skewed" results.

Sounds reasonable, right? The problem, though, is simple: as Nate Cohn explained, major pollsters aren't relying on the 2008 turnout model.

Most pollsters don't weight their polls to match a preconceived electorate. Instead, they take a demographically representative sample based on actual figures from the US census and then let respondents speak for themselves about whether they're voting for Obama or Romney. For illustrative purposes, consider the Bloomberg/Selzer poll. They started by taking a sample of all American adults, weighted to match the demographics of all adults in the US census, such as race, education, and marital status. To produce a likely voter sample, they would then have excluded adults who weren't registered to vote and asked a series of questions to help determine who was likely to vote.
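To make the distinction concrete, here is a minimal sketch of that kind of two-step procedure. It is not Selzer's actual method; the respondent records, census targets, field names, and likely-voter cutoff are all hypothetical. The point is simply that the weights come from census demographics, not from any assumed partisan turnout mix, and the horse-race number falls out of whatever the screened respondents say.

```python
# Illustrative sketch only -- not any pollster's actual procedure.
# Respondent data, census targets, and field names are hypothetical.
from collections import defaultdict

# Hypothetical raw sample of adults: demographics, registration, vote intent.
sample = [
    {"age_group": "18-34", "registered": True,  "likely_score": 3, "choice": "Obama"},
    {"age_group": "18-34", "registered": False, "likely_score": 1, "choice": "Obama"},
    {"age_group": "35-64", "registered": True,  "likely_score": 3, "choice": "Romney"},
    {"age_group": "35-64", "registered": True,  "likely_score": 2, "choice": "Obama"},
    {"age_group": "65+",   "registered": True,  "likely_score": 3, "choice": "Romney"},
]

# Step 1: weight the adult sample to census demographics (post-stratification).
# Targets are census shares of all adults -- NOT shares of any past electorate.
census_targets = {"18-34": 0.30, "35-64": 0.52, "65+": 0.18}

counts = defaultdict(int)
for r in sample:
    counts[r["age_group"]] += 1
for r in sample:
    share_in_sample = counts[r["age_group"]] / len(sample)
    r["weight"] = census_targets[r["age_group"]] / share_in_sample

# Step 2: screen down to likely voters (registered plus a turnout-intent score),
# then let the respondents themselves determine the partisan makeup of the result.
likely_voters = [r for r in sample if r["registered"] and r["likely_score"] >= 2]

tally = defaultdict(float)
for r in likely_voters:
    tally[r["choice"]] += r["weight"]
total = sum(tally.values())
for candidate, weight in sorted(tally.items()):
    print(f"{candidate}: {100 * weight / total:.0f}%")
```

Nowhere in a procedure like this does 2008 turnout enter the calculation; the party balance of the final sample is an output, not an input.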

Ultimately, Selzer's sample found Obama leading by six points, 49% to 43%. Whatever you think of the outcome, it wasn't the result of Selzer imposing her assumptions upon the sample; she let her sample speak for itself. Did she take a good sample? We'll find out on Election Day. But if she's wrong, it won't be because she used the "2008 turnout model."

And what about the notion that Democrats are being over-sampled? Cohn explained why this is wrong, too.

But the fight over the validity of polling is itself illustrative of a larger issue.

Kevin Drum had a sharp piece on this point this morning, noting the distinction between how the left and right have dealt with polling data they don't like.

On the left, Nate Silver "dug deep into the minutiae of how polls are put together and how they're conducted, writing lengthy, table-laden posts that often meandered through several thousand words." On the right, we saw the emergence of "UnSkewed Polls," which reweights to Rasmussen -- because the other polls are part of a partisan conspiracy -- and "doesn't even pretend" to apply any rigor.

This is, to put it bluntly, nuts. And it suggests a fundamental difference between left and right, one that Chris Mooney wrote about earlier this year in The Republican Brain. Neither side has a monopoly on sloppy number crunching or wishful thinking, but liberals, faced with a reality they didn't like, ended up accepting reality and deciding to learn more about it. That's the Nate Silver approach. Conservatives, faced with a reality they didn't like, invented a conspiracy theory to explain it and then produced an alternate reality more to their liking. It's a crude and transparently glib reality, but that's apparently what the true believers want.