The day after Monica Lewinsky's March 3 interview with Barbara Walters on ABC, USA Today reported a Gallup Poll showing "the public wasn't impressed" with her performance, and two-thirds of Americans felt no sympathy for her. The New York Post, however, reported an ABC poll showing that "public sympathy for Monica Lewinsky shot up" after the interview. What can we make of poll results, or interpretations of them, that are, or appear to be, contradictory?

Pollsters such as Frank Newport, editor in chief of the Gallup Poll, insist that a survey of a genuinely random sample of 1,000 people, the industry standard, will accurately reflect the opinions of all 187 million American adults. The results, he notes, will be very close to what they would be if every member of the population had been asked the same question, provided (and this is a big if) that every member of the population had an equal chance of being chosen.

Here's how the major pollsters try to guarantee that equality of opportunity: For a national survey of adult Americans, Gallup starts with a computerized list of all U.S. telephone exchanges. The computer then generates a list of residential numbers from those exchanges, including unlisted numbers. Then the calling begins. If there is no answer or a busy signal on the first call, Gallup will call back repeatedly. (If pollsters gave up after one attempt, they would miss people who are often out, such as young single adults, and those who are often on the phone, thereby introducing a possible bias into the procedure.) Once the household has been reached, Gallup uses one of several procedures to select one adult within the household at random, such as asking for the oldest or youngest adult living there.

Here's what is key: Up to 40 percent of those reached will refuse to participate, depending on the type of poll and other factors. "Non-response doesn't affect the results as much as you might expect," contends David Kinnaman, research director for Barna Research Group, Ltd.
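The two steps described above, dialing randomly generated digits so unlisted households are reachable, then randomizing which adult in the household answers, can be illustrated with a toy sketch. The exchange prefixes and household lists here are made-up examples, not real Gallup data, and real polling systems are far more elaborate.

```python
import random

def rdd_sample(exchanges, n, rng):
    """Toy random-digit-dial sample: append four random digits to a
    randomly chosen exchange, so unlisted numbers can be reached too."""
    return [f"{rng.choice(exchanges)}-{rng.randrange(10000):04d}"
            for _ in range(n)]

def select_adult(adults, rng):
    """Toy within-household randomization: pick one adult uniformly at
    random (real pollsters might ask for the youngest or oldest adult)."""
    return rng.choice(adults)

rng = random.Random(42)
numbers = rdd_sample(["212-555", "312-555"], 5, rng)   # hypothetical exchanges
respondent = select_adult(["adult A", "adult B", "adult C"], rng)
```

The point of generating the last four digits at random, rather than dialing from a directory, is exactly the one the article makes: unlisted numbers are just as likely to be drawn as listed ones.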
He notes that the demographic profile of those surveyed is usually "very close" to national census data when compared by age, income, region, ethnicity, political preference, education level, and other factors. Religion, however, is not one of those factors, and it is a strong predictor of attitudes on many issues. This raises a possibility: If those most committed to the Bible have, for whatever reason (perhaps less emphasis on politics), a lower response rate than others, polls could be skewed by a few percentage points. The pollsters don't think that happens but, unless religion becomes a carefully weighted factor, there is no way of knowing.

Mr. Kinnaman concedes that polls are open to varying degrees of bias as they are conducted, interpreted, and eventually reported in the media. "Some organizations are trying to get information out there that corresponds with their personal perspectives, or they're trying to get media attention," he warns. When assessing a given poll, he says, look at the credibility of both the polling firm and the group that sponsored it.

The wording and order of questions can also affect results. For example, says Mr. Newport, when Gallup asked during the Gulf War whether the United States should go to war "with its allies," more people agreed than when the question was simply whether the United States should go to war. He adds that poll results must be interpreted in context, keeping in mind responses to the same and similar questions in the past.

"There's a lot of bad information that people accept at face value," continues Mr. Kinnaman, "and there are others who are so skeptical that they've dismissed all polls." To avoid both extremes, here are seven questions to ask about a poll:
- Who conducted it and are they reputable?
- Did they use appropriate methods to get a truly random sample? (Data collected from specialized groups, like the readers of a particular magazine, are not random.)
- Were the questions worded fairly and in a reasonable order?
- How large was the sample? (The industry standard is 1,000; below 600 is considered suspect.)
- Was there a significant non-response rate and was it taken into account when the data was analyzed?
- Who sponsored it and what is their agenda?
- Were the results interpreted in context, taking into account previous poll results and trend data?
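The sample-size rule of thumb in the list above follows from the standard margin-of-error formula for a simple random sample. A minimal sketch, using the conventional worst case of a 50/50 split and a 95 percent confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample of size n, at
    proportion p, with z = 1.96 for 95 percent confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# The industry-standard sample of 1,000 gives roughly plus-or-minus
# 3.1 percentage points; at 600 it widens to about 4 points.
print(round(margin_of_error(1000) * 100, 1))  # 3.1
print(round(margin_of_error(600) * 100, 1))   # 4.0
```

This is why two honestly conducted polls can differ by several points on the same question without either being "wrong": each carries its own sampling error, before any question-wording or non-response effects are even considered.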