Pew's Misleading Trans Fat Poll: David Moore responds to Pew Director


Michael Dimock, director of the Pew Research Center, spoke earlier this year on PBS' News Hour about polls on a strike against Syria. (Credit: YouTube, "PBS NewsHour," screenshot)

In his response to my article critical of a Pew poll, Michael Dimock, Director of the Pew Research Center, acknowledges that I raise a “legitimate point”: that “one major challenge to measuring public opinion is that people may not always know what you are asking about.”

I raised this point because Pew reported that a majority of Americans opposed “prohibiting restaurants from using trans fats in foods.” Pew didn’t report how many people even knew the difference between fats and trans fats. I suggested that many people who were opposed to getting rid of trans fats may, in fact, have been thinking of just “fat,” not realizing that trans fats are artificial and dangerously unhealthy.

 

Finding Out What People Know and Don’t Know

While Dimock concedes my point was a legitimate criticism, he responds that it’s simply impractical in general to find out whether people know what they’re talking about.

“Practically speaking, any given survey asks respondents about multiple people, issues and policies that they may or may not be familiar with – preceding each with a knowledge test would put a substantial burden on the interview and the person being surveyed.”

Let’s think about that comment for a moment.

Dimock is arguing that pollsters have so many questions in a survey that they don’t have time to find out whether people know anything about any of the issues included in that survey. According to this reasoning, reporting people’s opinions on subjects about which they have little or no knowledge is perfectly fine. And it’s not necessary to warn the reader, or (in this case) The News Hour, that public ignorance may be widespread.

Moreover, by using forced-choice questions (which do not offer the respondent an explicit “don’t know” option, and thus pressure respondents to give an opinion even if they don’t have one), the pollster can give the illusion that virtually everyone has an informed and meaningful opinion about almost any issue. (Pew reported 96% of Americans with an opinion on trans fats, though Pew had no idea how many people know what trans fats are.)

As Dimock indicates, Pew had so many other questions in its poll, about so many other issues, that it just didn’t have time to find out how many people knew the difference between fats and trans fats.

But the obvious point is that if many people were really confused about fats and trans fats, the poll finding that a majority of Americans were opposed to getting rid of trans fats is likely to be wrong. Clearly, what people know about the issue should be a part of measuring public opinion.

At the very least, Pew had the obligation to explore not just how people responded to the question, but also what the people understood that question to mean, before passing on the results to The News Hour, as though the poll were an accurate reflection of what the public is thinking.

 

Skewing Opinion

Dimock also suggests that finding out if people even know anything about the subject being surveyed could “lead to a skew in the balance of opinion reported.” As he writes in his response, “For example, if we presented views of Ted Cruz only among people who could prove their knowledge about him, it could produce much more one-sided assessments.”

This argument is a red herring.

We’re not talking about well-known politicians. We’re talking about trans fats. Dimock’s argument suggests that if we present the views only of those people who know what trans fats are, it would be a one-sided assessment. But if so, it would be one-sided and slanted toward people who know what the issue is. I fail to see the problem here.

In any case, I never argued for presenting the views only of people who know what trans fats are. I only suggested that it’s important to know how many of the people who say they oppose prohibiting trans fats actually know what trans fats are. And they don’t have to “prove” their knowledge, as Dimock writes.

Here is one approach that would not bias results, but would provide insight into what people are thinking and how much they know about the issue:

Now, on another subject.

Q1    Have you heard of a food substance referred to as “trans fat”?

Q2    As far as you know, are trans fats and fats the same substance, or are they different – or are you unsure?

Q3    Would you favor or oppose prohibiting restaurants from using trans fats in foods – or are you unsure? (Do you feel strongly or not strongly about that?)

Q4    Would you favor or oppose prohibiting restaurants from using fats in foods – or are you unsure? (Do you feel strongly or not strongly about that?)

The last question could be asked only of those people who said they thought the two types of fats are similar.

What this series of questions allows Pew to do is compare the results among people who know there is a difference between the two substances with the results among those who think they are the same or who don’t know. If we find that most of the opposition to prohibiting trans fats comes from people who don’t know the difference, that’s important to know in evaluating public opinion on this issue.

If we also find that about as many people want to preserve fats as want to preserve trans fats, that provides another important factor in understanding public opinion.
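To make this concrete, here is a minimal sketch in Python, using invented responses (nothing below is Pew data), of how the answers to Q2 and Q3 could be cross-tabulated so that views on a trans fat ban are reported separately for people who know the two substances are different and for those who think they are the same or are unsure:

    # Minimal sketch: cross-tabulating hypothetical answers to Q2 and Q3.
    # The responses below are invented for illustration; they are not Pew data.
    import pandas as pd

    responses = pd.DataFrame({
        # Q2: are trans fats and fats the same substance, different, or unsure?
        "q2_knowledge": ["different", "different", "same", "unsure", "same",
                         "different", "unsure", "different", "same", "different"],
        # Q3: favor or oppose prohibiting restaurants from using trans fats?
        "q3_ban": ["favor", "favor", "oppose", "oppose", "oppose",
                   "favor", "unsure", "oppose", "unsure", "favor"],
    })

    # Share of each Q3 answer within each knowledge group (each row sums to 1.0).
    print(pd.crosstab(responses["q2_knowledge"], responses["q3_ban"],
                      normalize="index").round(2))

If most of the opposition turns up in the “same” and “unsure” rows, the headline finding that a majority opposes the ban rests largely on people who do not know what trans fats are, which is exactly the distinction this question series is designed to surface.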

The fact that Dimock would argue that it’s “impractical” to clarify opinion on this subject as I’ve suggested points to a serious problem – what I characterize as Pew’s sloppy approach (in this particular case) to measuring public opinion.

 

Pressuring Respondents to Express an Opinion

Dimock also takes issue with my suggestion that pollsters try to measure “non-opinion.” He objects to offering respondents an explicit “don’t know” response because it “may prompt some people to opt out of expressing opinions they actually have.”

This is probably the most salient red herring that Pew, and unfortunately most of the media polling community in general, use to justify their attempts to maximize the number of respondents who offer an opinion – even if they don’t have one.

The refusal of media pollsters to measure non-opinion in most cases is the large elephant in the room that media pollsters try to ignore. In one of the most widely used texts for public opinion classes, Herbert Asher’s Polling and the Public: What Every Citizen Should Know (CQ Press, 7th Edition, 2007), the author makes these observations (pp. 32-33):

“The presence of non-attitudes is one of the simplest yet most perplexing problems in public opinion polling. Too often in a survey context people respond to questions about which they have no genuine attitudes or opinions.” (emphasis added)

“Even worse, the analyst or the poll sponsor or the news organization treats the non-attitude responses as if they represented actual public opinions.” (emphasis added)

“Under these circumstances, a misleading portrait of public opinion can emerge because no distinction is made between people with real views on an issue and those whose responses simply reflect their desire to appear in an interview as informed citizens or those whose responses are simply artifacts of being asked a question in the first place.” (emphasis added)

“Unfortunately, pollsters often find it difficult to differentiate between genuine attitude holders and persons merely expressing non-attitudes.” (emphasis added)

In previous posts, iMediaEthics demonstrated the problem of treating non-attitudes as though they represent public opinion – in polls by Pew and Gallup on the government’s aid to the auto industry, and by Marist and Quinnipiac on New Yorkers’ attitudes about bike lanes. When non-opinion is measured, the polls show very different conclusions about public opinion than when non-opinion is ignored.

In actual practice, media pollsters do sometimes try to measure non-opinion, when they want their story to focus on how many people are unaware of an issue. Despite Dimock’s criticism of offering a “don’t know” option, even Pew will sometimes try to measure non-opinion, as in its surveys conducted last January and February, when each poll asked this question:

From what you know, do you agree or disagree with the Tea Party movement, or don’t you have an opinion either way? (italics added)

                                 Feb 13-18, 2013    Jan 9-13, 2013
Agree                                  36                 35
Disagree                                9                 10
No opinion either way                  52                 51
Haven’t heard (volunteered)             1                  2
Refused (volunteered)                   3                  2

Note that despite widespread news coverage of the Tea Party for several years, more than half of all Americans – when given the option – admit they don’t know enough about the Tea Party to have an opinion.

By contrast, when Pew did not provide a “no opinion” option, but just asked if people felt favorable or unfavorable about the Tea Party in their October and June 2013 polls, they found only 21% and 18% of the public respectively who volunteered they either hadn’t heard of the Tea Party or didn’t know enough to rate it.

Is your overall opinion of the Tea Party movement very favorable, mostly favorable, mostly unfavorable, or very unfavorable?

                                     Oct 9-13, 2013    Jun 12-16, 2013
Total favorable                            30                 37
Total unfavorable                          49                 45
Haven’t heard (volunteered)                 6                  7
Can’t rate/refused (volunteered)           15                 11
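The 21% and 18% figures cited above simply combine the two volunteered categories in this table. A minimal arithmetic check, using only the published percentages quoted in this article:

    # Arithmetic behind the non-opinion comparison, using the percentages
    # quoted above in this article (not recomputed from raw data).

    # With an explicit "no opinion either way" option (early 2013 polls):
    no_opinion_offered = {"Feb 13-18, 2013": 52, "Jan 9-13, 2013": 51}

    # Forced-choice favorability format (later 2013 polls), where non-opinion
    # had to be volunteered as "haven't heard" or "can't rate/refused":
    volunteered_nonopinion = {"Oct 9-13, 2013": 6 + 15, "Jun 12-16, 2013": 7 + 11}

    print(no_opinion_offered)      # {'Feb 13-18, 2013': 52, 'Jan 9-13, 2013': 51}
    print(volunteered_nonopinion)  # {'Oct 9-13, 2013': 21, 'Jun 12-16, 2013': 18}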

In the case of trans fats, Dimock argues that measuring non-opinion could have skewed the results. But in two polls on the Tea Party, apparently it was okay to measure non-opinion.

Pew’s inconsistency in dealing with non-opinion is found among other media pollsters as well, and it reveals a more serious problem in the polling industry.

When I worked at the Gallup Poll as senior editor (1993-2006), the general modus operandi (as with all major media pollsters) was to ask forced-choice questions, because doing so maximized the percentage of people who would give a response. But occasionally Gallup would want to emphasize that the public was unaware of an issue. That unawareness was the news story, and so Gallup would explicitly measure non-opinion.

In October 2007, after I had left, Gallup asked about U.S. sanctions against Iran and explicitly measured non-opinion. More than half of the public had no opinion.

For this particular case, Gallup highlighted in its headline that “Most Americans [are] not familiar with new economic sanctions against Iran.”

That approach, where the pollster decides ahead of time whether to measure non-opinion or not, depending on what the focus of the news story might be, shows how manipulative the whole process is. Pollsters know how to measure non-opinion; they just don’t want to do it most of the time.

Why?

Because neither Gallup nor Pew nor any of the other major media pollsters wants a large number, perhaps even a majority, of its stories to carry the kind of headline above that highlights public ignorance. “Majority of Americans don’t know anything about the Tea Party.” “Most Americans unsure whether U.S. should aid Syrian rebels.” “Plurality of Americans confused about economic stimulus.”

After a while, news reporters might not be so enamored of highlighting poll results, because the results would reveal what every attentive person knows anyway but pollsters try to hide: large segments of the public are unengaged in most national policy issues.

To pretend otherwise is to engage in self-delusion.

Dimock’s only real justification for releasing the trans fat poll results, knowing that much of the public didn’t know anything about trans fats, is that Pew didn’t want to spend the resources to do a more complete job.

Finding out whether people knew anything about trans fats and measuring non-opinion wouldn’t have “skewed” the results – it would have made the results more meaningful. But it would have taken a little more time.

A little more time that Pew didn’t want to spend, because it had a whole bunch of other issues it wanted to cover, but didn’t want to explore in depth either.

 
