Polling experts don’t know what to make of wildly contradictory polls on NSA snooping. While some polls, like one from ABC/Washington Post, find the American public in support of the snooping, others, like the CBS poll, say the opposite. And, even more confusing, Pew has produced two polls that have conflicting results!
For background: Pew’s initial poll after the revelation of the program found a large majority of Americans in favor, while the CBS poll and the Gallup poll each found a large majority against. Pew’s second poll, four days later, found the public evenly divided, while an ABC/Washington Post poll found a large majority in favor and CNN found the public either split down the middle (on one question) or hugely in favor (on a separate question).
You can read a variety of comments by pollsters and pundits: Mike Mokrzycki on Huffington Post, Gallup’s Frank Newport, Morgan Little of the L.A. Times, Mark Blumenthal of Pollster on Huffington Post, Democratic pollster Mark Mellman, and yours truly on iMediaEthics.
Aside from my comments, most of the remarks seem intended to reassure the reader that polls are still meaningful measures of public opinion, despite the wide variance in what the polls say that opinion is. Newport even opines that the 31-point swing in results between what Pew reported (a 15-point margin in favor) and what Gallup reported (a 16-point margin against) “is not a bit surprising.” Why? As he notes:
“The key in all of this is that people don’t usually have fixed, ironclad attitudes toward many issues stored in some mental filing cabinet ready to be accessed by those who inquire. This is particularly true for something that they don’t think a lot about, something new, and something that has ambiguities and strengths and weaknesses.”
Mokrzycki reinforces that view when he cites academic studies suggesting that “attitudes about privacy are not well formed,” which means that the “opinions” measured by polls are highly influenced by question wording and question context (the questions that are asked before the policy question).
Okay, that sounds reasonable – many people simply haven’t made up their minds about the issue, and thus don’t have meaningful opinions. But if that’s the case, how did all the pollsters report that virtually all Americans (90% to 98%) have opinions on the issue? Shouldn’t they have told us that, say, 40% to 50% of Americans haven’t thought about the issue or don’t have clearly formed opinions, and then reported how the rest divide?
But pollsters don’t do that. Instead, they insist on using forced-choice questions that pressure people, even those who “don’t have fixed, ironclad attitudes,” to come up with some attitude anyway. So, the poll results make it appear as though virtually all Americans have such attitudes, because respondents in the sample are forced to give one. But, in fact, a substantial portion of the real public (people who are not polled) do not have meaningful opinions.
I’m not saying, of course, that poll respondents are not “real” people. Rather, once they participate in a poll and are given information that the rest of the public does not have, the respondents in the sample no longer represent the larger American population.
That is the conundrum pollsters and journalists face: They want to give us the illusion that virtually all Americans are engaged in all public policy issues. So, they manipulate their respondents to come up with opinions, then claim those views represent the public at large.
But when their question format produces results that are so contradictory and unbelievable, pollsters fall back on the excuse that many people don’t have “fixed, ironclad attitudes,” and therefore it is “not a bit surprising” that the polls can simultaneously show a substantial majority on both sides of the issue.
The problem is that polls today, by ignoring non-opinion and intensity of opinion, are more likely to reflect the biases of the survey questions than the will of the public.