Viewers of MSNBC's "Countdown with Keith Olbermann" have become familiar with a new piece of polling nomenclature: The "Keith Number."
What is that?
"The margin of error plus the percentage reported as 'undecided,'" he explained when he first debuted it in early January, hoping to put a caveat on the poll numbers that had been so misleading in New Hampshire just a few days before.
But the Keith Number was not a momentary whim. It now appears on screen whenever poll results are shown on Olbermann's broadcasts. You may think that a professional pollster would be horrified at this apparent departure from the "scientific" approach to survey research, but I am not horrified. In fact, aside from one small quibble, I like Olbermann's innovation.
One reason is that the "margin of error" (or more accurately, the margin of sampling error) has become as grossly overused as it is commonly misunderstood. Too many consumers of poll data assume that the familiar "plus or minus" number incorporates all possible sources of error and that any deviation between survey results and some aspect of reality amounts to a "statistical impossibility." It just isn't so.
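To see how narrow that "plus or minus" figure really is, here is a minimal sketch of where it comes from, assuming a simple random sample and the conventional 95 percent confidence level (the function name and sample sizes are illustrative):

```python
import math

def margin_of_sampling_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion p with sample size n.

    Assumes a simple random sample. Pollsters conventionally report the
    "worst case" at p = 0.5, which maximizes p * (1 - p). Note that this
    covers sampling error ONLY -- question wording, nonresponse, and
    coverage problems are not captured by this formula.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of about 1,000 respondents yields the familiar
# "plus or minus 3 points":
print(round(100 * margin_of_sampling_error(1000), 1))  # ~3.1
```

The point of the sketch is that the formula knows nothing about how the questions were worded or who refused to answer; it quantifies only the noise from drawing a finite random sample.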
In the words that The New York Times made famous, "the practical difficulties of conducting any survey of public opinion may introduce other sources of error into the poll." The wording of the questions, as well as voters who do not respond or are not covered by the sample, can introduce bias into results that the "margin of error" does not take into account.
Interpretation poses another problem. Voter preferences change over the course of a campaign, sometimes even in its final days, so treating a pre-election survey as a precise prediction of the outcome can be hazardous. The old cliché is correct -- a poll is just a snapshot -- and sometimes an entirely "accurate" snapshot of preferences on Saturday may not be so accurate by Tuesday.
That is where the Keith Number comes in. It attempts to quantify the same concept I wrote about here a few weeks ago: Indecision can make for highly volatile results.
My only quibble is with Olbermann's emphasis on the undecided category for political horserace numbers. The problem is that polls show a lot of variation in "undecided" that has more to do with how the pollsters ask their questions than with the true level of certainty among the voters.
Consider the South Carolina Democratic primary. Pollsters released eight surveys over the last week of the campaign, showing an "undecided" percentage that ranged from 1 percent to 36 percent. Why so much variation? Some pollsters pushed uncertain voters harder for an answer than others. Some offered "undecided" as an explicit option; some did not. Although the huge differences in South Carolina were unusual, considerable variation among pollsters on the undecided category is not uncommon. Many of my colleagues believe that the handling of "undecided" respondents is the primary source of so-called "house effect" differences among pollsters.
Pushing hard for a choice near an election is a good thing, as some voters are reluctant to disclose their choice to a stranger, and the way voters lean at that stage is usually predictive of the way they will ultimately vote. When pollsters push hard many months before an election, however, the results can give a false impression that minds are made up, especially when one candidate has an early advantage in name recognition.
A far better measure of uncertainty is the follow-up question that Gallup and others have been asking this primary season immediately after asking about vote preference: "Would you say you will definitely vote for [that candidate], or is it possible you would change your mind between now and the election?" Before the South Carolina primary, four pollsters asked that question (or some variant) and found that between 18 and 26 percent of Democrats were either undecided or said they might still change their minds.
Add the "might change" number to the margin of error and we would have a version 2.0 of the Keith Number that accomplishes exactly what its namesake intended: A measure that indicates when poll numbers are more or less subject to change or volatility.
The problem, of course, is that most media pollsters do not ask a "certainty" follow-up question, and those that do ask in different ways. Which brings me back to Olbermann's willingness to put his name on this number.
"I thought of it," he said with his trademark flair, "so I named it after myself. You think of a better caveat for polls from now on and we'll name it after you."
No need. Use your influence to convince pollsters to ask the "might change" question on their surveys in a consistent way, Mr. Olbermann, and the "Keith Number" may start appearing well beyond the confines of MSNBC.