On June 20, Bloomberg released a new poll showing President Barack Obama leading Mitt Romney by 13 points, 53% to 40%, among likely voters. The very next day, AP released its new poll – conducted over virtually the same time period – showing Obama with a lead of just three points, 47% to 44%.
The Bloomberg poll's results were startling, given the political consensus these days that the presidential contest is a close one. That put J. Ann Selzer, who conducted the poll for Bloomberg, under pressure to explain why her results differed so greatly from the norm.
Her explanation was published in the Washington Post, where she examined all the possible reasons why her poll might be an outlier.
But after examining whether her results could have been affected by 1) a change in methodology, 2) the screening criteria for likely voters, 3) the design of the questionnaire (loading up the front with questions that might have primed a favorable reaction to Obama), or 4) the composition of her sample (too many young people, too many blacks, too many Democrats, or too many highly educated voters), she concluded that none of these factors was out of the ordinary – and thus none could explain her poll's high support for Obama.
What’s a pollster to do when all the ingredients of her poll are exactly what would be expected from a very carefully designed, scientific survey – and the results seem askew?
Second, suggest that if public opinion is now different from what your poll showed, it is because public opinion has changed: “Potentially, this poll caught the electorate when the wind was at Barack Obama’s back for a brief moment in time.”
It must have been brief indeed. Polls conducted at the same time didn’t detect the breeze.
I’m actually quite sympathetic to Selzer’s position. Over the years – at Gallup, and before that at the UNH Survey Center – I occasionally found myself at the center of a poll whose results were out of line with what other polls showed. It happens.
And given the even more difficult problems pollsters face these days – response rates in the single digits, and questions about matters many people haven’t even begun to consider – such mysteriously conflicting results are bound to occur.
I suspect that outliers like this would be less likely to occur if pollsters routinely included measures of non-opinion in their questions. The notion that today more than 90% of the voters know for certain how they will vote in November, or how they would vote “if the election were held today,” is a conceit that underlies most polls these days – and is hardly plausible.