One week in the middle of the Clinton-Lewinsky scandal, more than 200,000 people took part in an online Live Vote that asked whether President Clinton should leave office. Seventy-three percent said yes. That same week, an NBC News-Wall Street Journal poll found that only 34 percent of about 2,000 people who were surveyed thought so.

More recently, in an online survey conducted after a televised debate in the run-up to the New Hampshire primary, when asked “Who stood out from the pack?” 76 percent of the more than 55,000 people who responded chose Rep. Ron Paul of Texas.

This indicated strong support for Paul among the people who responded. But it revealed nothing about how voters in New Hampshire, or in other states, intended to vote in the primary elections.

In the week prior to the New Hampshire primary, polls indicated that anywhere between 5 percent and 14 percent of likely voters in the Republican primary intended to cast their ballots for Paul.

On Election Day, Paul got about eight percent of the votes cast in the New Hampshire GOP primary.

To explain the gap in the numbers in this and other similar cases, let’s consider the differences between the two kinds of surveys.

Journalists and political strategists use polls to gauge what the public is thinking. The most statistically accurate picture is captured by using a randomly selected sample of individuals within the group that is being targeted, such as those likely to vote in a presidential primary election.

While a poll of 100 people will be more accurate than a poll of 10, the gains diminish as samples grow: studies have shown that accuracy improves only modestly beyond about 500 people, and very little beyond 1,000.

So, in the case of that NBC-WSJ poll, only 2,005 adults were surveyed by the polling organizations of Peter D. Hart and Robert M. Teeter. The poll was conducted by telephone and had a margin of error of plus or minus 2.2 percentage points at the 95 percent confidence level. The confidence level means that if the same poll were conducted 100 times, each one randomly selecting the people polled, about five of the polls would be expected to yield results outside the margin of error.
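The arithmetic behind those numbers can be sketched with the textbook simple-random-sample formula, under which the margin of error at 95 percent confidence is about 1.96 × √(p(1−p)/n), widest when opinion splits 50-50. The short sketch below is our own illustration of that formula, not the pollsters’ actual methodology; it also shows why accuracy gains flatten out past a few hundred respondents.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error, in percentage points, for a
    simple random sample of n people at 95 percent confidence."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (10, 100, 500, 1000, 2000):
    print(f"n={n:>5}: +/- {margin_of_error(n):.1f} points")
```

Going from 10 respondents to 100 cuts the margin of error from roughly 31 points to roughly 10; going from 1,000 to 2,000 buys less than a single point. That is why a sample of about 2,000 adults is enough to produce the plus-or-minus 2.2 points quoted above.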

In the NBC-WSJ survey, pollsters first randomly selected a number of geographic areas, then generated telephone numbers in a way that gave every number in those areas (both listed and unlisted) an equal chance of being called. One adult in each household was then selected to answer the poll.

Online Surveys
In contrast, online surveys such as Live Votes may reflect the views of more individuals, but they are not necessarily representative of the general population. And they may be even less representative of those people who are registered to vote and who do in fact vote.

To begin with, the people who respond to online surveys choose to do so — they are not randomly selected and asked to participate, but instead make the choice to read a story about a certain topic and then vote on a related question.

They may be highly motivated supporters of a particular candidate who are determined to show their support for him or her in any way they can.
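The arithmetic of that self-selection can be illustrated with deliberately made-up numbers. Suppose a candidate has the true support of 8 percent of voters, but his or her supporters are 20 times more likely than everyone else to click through and vote in an online poll (both figures are hypothetical, chosen only for illustration):

```python
def self_selected_share(true_support, response_ratio):
    """Expected share of an opt-in online vote won by a candidate
    whose supporters respond response_ratio times as often as
    everyone else. Inputs are illustrative, not measured values."""
    supporters_voting = true_support * response_ratio
    others_voting = (1 - true_support) * 1.0
    return supporters_voting / (supporters_voting + others_voting)

# 8 percent real support, supporters 20x as likely to respond:
print(f"{self_selected_share(0.08, 20):.0%}")  # prints 63%
```

On those assumptions, a candidate with single-digit support in the electorate wins a commanding majority of the opt-in vote; no ballot-stuffing is required, only enthusiasm.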

And while Live Votes are designed to allow only one vote per user, someone who wants to vote more than once can use another computer or another Internet account.

Live Votes are not intended to be a scientific sample of opinion. Instead, they are part of the same dialogue that takes place in our online chat sessions: a way to share your views on the news with your fellow users and with writers and editors.

Let us know what you think.

