The Myth of Voter Volatility – Caused More by Polls than Voters


(Credit: Flickr/League of Women Voters)

In a recent appearance on PBS NewsHour, David Brooks predicted that this year’s election campaign would be “more volatile than the last couple presidential elections.” The implication is that many voters will keep changing their minds, preferring first, say, Obama, then switching to Romney, and back and forth over the next several months. Or, as the April poll numbers below suggest, switching back and forth even over a 10-day period.

Don’t believe it!

Voter volatility is more a function of the polls than of the voters. Consider the results pollsters posted over the first half of April.

Voter volatility looks substantial: Obama’s margin went from +8 on April 5 to -2 only a few days later, then surged to +4, dropped to -3, and jumped to +9 over the next few days.

But, of course, that volatility is in the polls, not the voters.

Why the Polls Are “Volatile”

While the polls cited above all report fewer than 10% undecided, that’s only an artifact of their “forced-choice” question. Chances are that in reality about 30% to 40% of voters have not yet made up their minds.

Pollsters get such a low undecided vote because they ask, “If the election were held today…”, and then they don’t provide an explicit “unsure” option. In effect, the pollsters are telling the respondents to come up with a decision, long before many have even taken the time to think about whom they might prefer.

Of course, not all respondents are undecided. From past experience, we can reasonably predict that the vast majority of voters who identify as Democrats and Republicans will eventually vote their party line. Still, typically about 10% to 15% of voters from each party will defect. And, of course, independents are up for grabs. That’s why the undecided figures this early in the race are so large.
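A back-of-envelope calculation shows how these pieces can add up to the 30% to 40% estimate. The party split, defection rate, and unsure share below are illustrative assumptions chosen for the sketch, not figures from any survey:

```python
# Back-of-envelope sketch. All figures below are illustrative
# assumptions, not measured values from any poll.
dem_share, rep_share = 0.32, 0.28        # assumed party identification
ind_share = 1.0 - dem_share - rep_share  # independents: 0.40

defection_rate = 0.12   # ~10-15% of each party's identifiers defect
ind_unsure = 0.60       # assume most independents start out unsure

# Early in the race, potential defectors and unsure independents
# are all effectively undecided.
undecided = (dem_share + rep_share) * defection_rate + ind_share * ind_unsure
print(f"Genuinely undecided: about {undecided:.0%}")  # about 31%
```

Shift those assumptions within plausible ranges and the figure lands anywhere from roughly 30% to 40%, consistent with the estimate above.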

But when undecided respondents are included in a poll, they are pressured to come up with an opinion anyway. Whatever these respondents say is at best a “top-of-mind” view, not one they feel anchored to, and thus not predictive of what they’ll actually do come election time, or even of what they’d say a week later if re-interviewed.

That’s the basis of the large early fluctuations in the candidates’ relative standings: undecided voters give top-of-mind responses to satisfy pollsters, who then report the results as though they reflected settled public opinion.
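A small simulation illustrates how much apparent movement this mechanism can generate on its own. Assume, purely for illustration, that 65% of respondents have settled preferences (34% for one candidate, 31% for the other) and that the remaining 35%, pressed for an answer, respond essentially at random:

```python
import random

random.seed(1)
N = 1000               # respondents per poll (assumed)
P_A, P_B = 0.34, 0.31  # assumed settled supporters; the rest are undecided

def forced_choice_poll():
    """One poll where undecideds, pressed for an answer, flip a coin."""
    a = b = 0
    for _ in range(N):
        r = random.random()
        if r < P_A:
            a += 1
        elif r < P_A + P_B:
            b += 1
        elif random.random() < 0.5:   # top-of-mind answer
            a += 1
        else:
            b += 1
    return 100.0 * (a - b) / N        # margin in percentage points

# Ten "polls" of an electorate whose real opinions never change:
print([round(forced_choice_poll(), 1) for _ in range(10)])
```

Under these assumptions the margin’s sampling error alone is roughly three points, so back-to-back polls can easily show swings like those in the April numbers above, with not a single voter changing sides.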

But that’s not the only reason for the early differences among polls. Much of the variation among the polls reported above is caused by differences in questionnaire design (at what point in the interview the vote-choice question is asked: at the beginning, or after other issues have been raised) and in mode of interviewing (live phone calls, automated phone calls, online interviews). Because many voters haven’t made up their minds, they are influenced by the questionnaire itself, constructing a choice from what they’ve just heard in the survey.

Can Pollsters Do Better?

Pollsters could measure indecision among voters, either by asking up front whether they have actually made up their minds, or at least by asking respondents who have succumbed to the forced-choice question whether they might still change their minds.

Some pollsters do ask the follow-up question and find, early in the race, a large percentage of voters saying they have not made a final decision. But these measures are hard to find: the websites that track all the polls do not routinely record them, nor is indecision regularly reported as part of the news stories.

The news industry has apparently decided that it’s much more interesting to give the illusion of a volatile race than to portray a race in which many voters are undecided and continue to mull over their choices until close to Election Day.

What’s the solution for us readers?

My advice: Don’t pay attention to any single media poll, no matter how prestigious its sponsor. Most media attention goes to Pew, CBS/New York Times, ABC/Washington Post, and Gallup, but they, like the rest of the pollsters, all ask the standard forced-choice vote-preference question, and they frequently produce results that contradict one another.

Instead, consult the several sites that provide averages of all the polls. The averages tend to screen out (as well as can be done) the poll variability due to “house effects”: the pollsters’ differing questionnaire designs and modes of interviewing. The averages still underestimate the level of indecision among voters, but the average margin between the two presidential candidates is probably the best measure we can hope for.
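To see why averaging helps, the simulation above can be extended with hypothetical fixed “house effects,” one offset per pollster, layered on top of sampling noise. The pollster names, offsets, and true margin below are assumptions for illustration only:

```python
import random

random.seed(2)
TRUE_MARGIN = 3.0    # assumed true margin, in points
SAMPLING_SD = 3.2    # approx. sampling error for n=1000 (see sketch above)

# Hypothetical fixed house effects (points) from question order, mode, etc.
house_effects = {"Pollster A": +1.5, "Pollster B": -2.0,
                 "Pollster C": +0.5, "Pollster D": -1.0}

polls = {name: TRUE_MARGIN + bias + random.gauss(0, SAMPLING_SD)
         for name, bias in house_effects.items()}

for name, margin in polls.items():
    print(f"{name}: {margin:+.1f}")
print(f"Average:    {sum(polls.values()) / len(polls):+.1f}")
# Individual polls scatter widely; the average lands near the true margin.
# A bias shared by every pollster, though, would not cancel out.
```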

The sites I check most frequently are:

As you’ll see, even the averages shown by these sites do not all agree, but there is much less variation among them than there is among the many individual polls.

David W. Moore is a Senior Fellow with the Carsey Institute at the University of New Hampshire. He is a former Vice President of the Gallup Organization and was a senior editor with the Gallup Poll for thirteen years. He is the author of The Opinion Makers: An Insider Exposes the Truth Behind the Polls (Beacon, 2008; trade paperback edition, 2009), which Publishers Weekly calls a “succinct and damning critique…Keen and witty throughout.”