The Gallup Poll has come under attack for recent changes in its likely voter model. While they were apparently designed to overcome some of the shortcomings noted by Mark Blumenthal at Huffington Post, the results produced by Gallup’s new methodology have provoked criticisms from political science professor Alan Abramowitz, who debunks the findings that Romney leads Obama by four percentage points in the swing states; and from Obama’s top pollster, Joel Benenson, who characterizes Gallup’s national poll, also showing Romney ahead by four points, as “an extreme outlier.”
Here are a few points to keep in mind:
First, whatever changes Gallup made were clearly an attempt to improve its election accuracy. You can ignore any comments that suggest anyone at Gallup wants to manipulate results to favor one candidate or another. Having worked at Gallup for 13 years, as Managing Editor and then Senior Editor of the Gallup Poll, I can assure you that being accurate is the only motive for modifying the model.
Second, it’s important to recognize that likely voter models are as much art as science, with each polling organization adopting different criteria to distinguish people who will vote from those who will not. The problem is that almost everyone in a poll says they will vote, yet we know that actual turnout is only about 55% of eligible adults. The only way to know whether a model has succeeded in identifying actual voters is to compare the final pre-election poll with the election results.
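To make the idea concrete, here is a minimal sketch of one common family of likely voter screens, a “cut-off”-style approach that ranks respondents by a screening score and keeps the top share matching an assumed turnout rate. The scoring scheme, field names, and respondent data below are invented for illustration; this is not Gallup’s actual model or parameters.

```python
# Illustrative only: a generic "cut-off"-style likely voter screen.
# The score, respondents, and turnout rate are invented assumptions,
# not Gallup's methodology.

def screen_likely_voters(respondents, turnout_rate=0.55):
    """Rank respondents by a screening score and keep the top share
    corresponding to an assumed turnout rate."""
    ranked = sorted(respondents, key=lambda r: r["score"], reverse=True)
    n_keep = round(len(ranked) * turnout_rate)
    return ranked[:n_keep]

# Hypothetical respondents: "score" might summarize answers to questions
# about past voting, interest in the election, and intention to vote.
sample = [
    {"id": 1, "score": 7},
    {"id": 2, "score": 2},
    {"id": 3, "score": 6},
    {"id": 4, "score": 5},
]

likely = screen_likely_voters(sample, turnout_rate=0.5)
print(sorted(r["id"] for r in likely))  # the two highest-scoring respondents
```

A “probability”-style model, by contrast, keeps every respondent but weights each one by an estimated chance of voting; the two approaches can produce noticeably different results.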
Third, it’s possible that the changes Gallup made will not improve the accuracy of its predictions. That happened during the first presidential election cycle I worked at Gallup. In 1996, the editors and methodologists decided to adopt a new likely voter model, what we called a “probability model,” rather than the “cut-off model” that had been used previously. (The details of the two models are not important for this account.)
For two months leading up to the election between Bill Clinton and Bob Dole, we used the probability model. As the election neared, however, it appeared that our results were not in the mainstream of the other polls, but were disproportionately in favor of Clinton. So, in our very last pre-election poll we switched back to the old likely voter model.
Yes, it’s true that for weeks we had been showing Clinton with a significant double-digit lead, but then in the last poll we showed him ahead by just eight points – his actual victory margin. Had we stuck with the new model used during the previous two months, our final pre-election poll would have shown Clinton ahead of Dole by an additional five percentage points – outside the poll’s margin of error.*
Despite what we eventually saw as exaggerating Clinton’s lead for the two months leading up to the election, the mistake was a technical one, based on what we thought was methodologically correct, not on any effort to mislead the public about Dole’s chances of winning.
So, what are the lessons here?
As I wrote about the controversy over “Rasmussen and the Allegedly Skewed Polls,” relying on any single poll is not recommended. It may be accurate. Or not.
Moreover, even if a given poll is accurate in its last pre-election prediction, that doesn’t mean it was accurate during the months leading up to the election. That was certainly true of Gallup in 1996, but also of many of the polls in 2008, and probably also of many polls in 2012. (We'll soon see.)
The least dangerous way of reading the polls is to look at the averages of many polls. Even then, don’t bet your life savings on the predictive accuracy of those results.
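As a trivial illustration of what averaging the polls means, here is a short sketch. The poll names and leads below are made-up numbers, not real 2012 results:

```python
# Hypothetical leads (in percentage points) for one candidate across
# several polls; all numbers here are invented for illustration.
polls = {"Poll A": 4.0, "Poll B": -1.0, "Poll C": 1.5, "Poll D": 2.5}

average_lead = sum(polls.values()) / len(polls)
print(f"Average lead: {average_lead:+.2f} points")  # +1.75
```

The averaging smooths out house effects like the one described above, but it cannot rescue the estimate if many polls share the same systematic error.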
*Gallup’s final 1996 presidential election prediction was for Clinton to win by 11 percentage points, because we decided that among the 6% of undecided voters in our poll, Clinton would get 4%, Dole 1%, and Ross Perot 1%. Our final prediction was 52% Clinton, 41% Dole, 7% Perot/other. The results showed Clinton getting 49.2%, Dole 40.7%, and Perot 8.4%.
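The footnote’s allocation can be checked arithmetically. The pre-allocation shares below are a reconstruction, inferred by subtracting the stated allocation from the published prediction; only the 4/1/1 allocation and the 52/41/7 prediction come from the text:

```python
# Reconstruction of the footnote's undecided-voter allocation.
# Pre-allocation shares are inferred (prediction minus allocation);
# only the allocation and the final prediction appear in the text.
pre_allocation = {"Clinton": 48, "Dole": 40, "Perot/other": 6}       # inferred
undecided_allocation = {"Clinton": 4, "Dole": 1, "Perot/other": 1}   # from footnote

prediction = {c: pre_allocation[c] + undecided_allocation[c]
              for c in pre_allocation}
print(prediction)  # {'Clinton': 52, 'Dole': 41, 'Perot/other': 7}
```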
UPDATE: 10/18/2012 4:22 PM EST: Fixed date stamp -- this blog post was published Oct. 18.