We are Poll Skeptics, Not Poll Cynics

Each of us has been involved in the polling enterprise for over thirty-five years, so our skepticism about media polling is neither fanciful nor fleeting.

Our skepticism focuses primarily on the results of major media polls, which shape our collective vision of what the public thinks and wants.

At the same time, we both believe that well-designed polls, with scientifically chosen samples, can produce useful insights into the public’s thinking. Too often, they do not.

We believe that in most cases, the fault lies not with the limitations of polls themselves, but rather with the pollsters and the journalists who design and interpret them.

Our criticisms of media polls arise not out of any partisan considerations, but out of a conviction that such polls frequently distort, if not completely mangle, what more valid measures of public opinion – polls conducted more rigorously – would show.

In our PollSkeptics Report column, we will illustrate these concerns using media poll results as prime examples.

There is a foundation for the many critiques to come: Bishop’s The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls and Moore’s The Opinion Makers: An Insider Exposes the Truth Behind the Polls. Our column for iMediaEthics will extend the arguments in those books to current polling results.


Skepticism Justified

That our skepticism is justified is all too evident in the spate of recent polls dealing with health care legislation.

As Gary Langer of ABC recently noted, polls leading up to the passage of the legislation provided wildly contradictory findings about the American public – from a CNN poll that showed a 20-point margin against passing the health care bill, to a Kaiser Family Foundation poll showing a 4-point margin in favor. Several other polls produced varying results somewhere in between.

Such conflicting poll results are not unusual.

On “cap and trade” legislation, for example, CNN recently found a large majority of the public in favor, by a margin of 23 percentage points.

An earlier Zogby poll reported a large majority opposed, by a 27-point margin – a swing of 50 points between the two polls (23 points in favor in one, 27 points opposed in the other). Other polls on this subject have generated results between these extremes.


Faulty Polling Practices

We hold pollsters accountable for these conflicting results because of faulty survey practices that have become the norm in much of current media polling.

A realistic public comprises people who have developed solid opinions as well as people who have not thought about a subject and therefore have not yet formed one. But instead of providing that accurate and comprehensive picture of the public, many pollsters prefer to portray the public as fully engaged and opinionated, with all but a small percentage holding a position on every conceivable subject.

Zogby, for example, reported that only 13 percent of Americans had “no opinion” on “cap and trade” legislation; CNN reported just 3 percent.

The truth is, as a Pew poll revealed, the vast majority of Americans don’t even know what “cap and trade” means, much less whether it would be an effective way to reduce greenhouse gases.

Pollsters typically produce the illusion of an opinionated public by a variety of techniques that, wittingly or unwittingly, manipulate respondents into coming up with opinions, even when they don’t have them.

In the “cap and trade” case, for example, pollsters first read respondents their explanations of this legislation and then immediately asked them to give an opinion on it in a forced-choice question, which offered no explicit opportunity to say “don’t know” or “no opinion.”

This type of question pressures respondents to choose one of the alternatives presented in the question, even when they may prefer to indicate “no opinion.”

Not knowing anything about the issue, but pressed to come up with an answer, respondents will generally base their choices on the information they have just received from the interviewer, or on a vague disposition invoked by a word or phrase in the question itself (e.g., something about “the government”).

Such responses are hardly valid indicators of what the larger public is really thinking about the issue at hand.

This point is reinforced by the results of the “cap and trade” polls mentioned earlier. Most respondents knew nothing about the issue and thus relied on the “explanations” provided in the questionnaire as the basis of their choices. Obviously, CNN’s explanation was quite different from Zogby’s – leading respondents in the CNN sample to come up with “opinions” diametrically opposed to those of respondents in the Zogby sample.

The two polls could not both be right. In fact, neither reflected the public’s lack of engagement with the issue.

Each polling organization essentially manufactured its own version of “public opinion,” based on its unique question wording.

This process of influencing respondents, and thus distorting what valid measures of public opinion would show, is all too commonplace.


Pollsters Claim: Accurate Election Results Validate Their Public Policy Polls

In defense of their practices, most pollsters argue that, because their final pre-election polls can (more or less) accurately predict the outcome, we consumers should trust their poll findings on policy issues as well.

But, in fact, the polls often do a lousy job of tracking elections. The last presidential election is a case in point.

We acknowledge that most of the major media polls usually come reasonably close to predicting the outcome of presidential elections in their final surveys, as they did in 2008. During the campaign itself, however, the pollsters provided wildly different estimates of which candidate the electorate preferred.

Let’s look at the polls’ collective performance during the last month of the presidential campaign. The accompanying chart shows the weekly average of the ten most frequently released polls in October 2008.

Note that even though the poll results tend to converge in the final predictions, the trend lines reported by the polls during the last month of the campaign vary substantially.

Yes, they all show Obama, not McCain, in the lead, but some suggest the lead was so minimal that McCain might well win.

Others show a fairly comfortable lead for Obama throughout the month.

And a couple show major fluctuations, with Obama’s lead surging or declining by 9 to 10 points from one week to the next.


Not Trusting the Polls

If the major media polls don’t give us a coherent story about the election campaign, they are even worse at describing public opinion on complex policy issues.

At least with elections, the object being measured in the polls is simple and concrete – which candidate do the respondents prefer?

On public policy issues, the preference being measured is generally much more abstract and complex.

Furthermore, there are many people who don’t pay much attention to policy issues, just as there are many voters who don’t pay attention to an election early in the campaign.

But the public’s lack of engagement with, and lack of understanding of, many issues do not deter media pollsters, who pressure their respondents to provide answers – answers that, in too many circumstances, reflect nothing at all like the public’s actual awareness, knowledge, and judgment.

So, as a general rule, we do not trust results from poorly designed media polls that pressure respondents to produce opinions they don’t have, or that influence respondents by giving them (often biased) information most members of the general public don’t have.

That doesn’t mean we think all such polls are useless. Sometimes, despite their faults, they can provide clues about the nature of public opinion.

They just can’t (and shouldn’t) be interpreted literally. Unfortunately, a literal reading of media polls is the norm for the vast majority of news reports and media outlets today.

Everyone who relies on poll results for insights into reality needs to heed the warning: Caveat Emptor! And it is not just the two of us – no one can reasonably avoid being a Poll Skeptic.


George Bishop is Professor of Political Science and Director of the Graduate Certificate Program in Public Opinion & Survey Research at the University of Cincinnati. His most recent book, The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, 2005), was included in Choice magazine’s list of outstanding academic titles for 2005 (January 2006 issue).

David W. Moore is Senior Fellow with the Carsey Institute at the University of New Hampshire. He is a former Vice President of the Gallup Organization and was a senior editor with the Gallup Poll for thirteen years. He is the author of The Opinion Makers: An Insider Exposes the Truth Behind the Polls (Beacon, 2008; trade paperback edition, 2009). Publishers Weekly called it a “succinct and damning critique…Keen and witty throughout.”