It’s widely accepted that in general the midterm election polls stunk. Yes, they alerted us to the possibility that the GOP would regain majority control of the Senate and gain seats in the House, but as HuffPollster notes, they also missed the margins of victory “by a mile.” And in all cases of large missed margins, the polls consistently underestimated how well the Republican candidates would fare.
Overall, HuffPollster claims it was the “worst collective Senate polling miss since 1998” (the year Republicans in Congress were attempting to impeach President Clinton). The “misses” included 11-point errors in the margins the GOP candidates received in Arkansas and Virginia, a 10-point error in Kentucky, and a 9-point error in Kansas. (In Virginia, the Democrat won by a point, though the poll average had him up by 12 points right before the election.)
The gubernatorial polls were no more impressive. The most stunning error occurred in Maryland, where the GOP candidate won by 9 points, though the poll average showed him losing by 9 – an 18-point error. Another double-digit error occurred in Vermont, where the Democrat won by only a point after the polls showed him with a 14-point lead. Four other gubernatorial races produced polling errors of more than five points.
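The arithmetic behind these error figures is simple: the error is the absolute gap between the poll-average margin and the actual margin. A minimal sketch, using the margins quoted above (signed toward the eventual winner):

```python
def margin_error(polled, actual):
    """Absolute difference between the poll-average margin and the
    actual result, in points."""
    return abs(actual - polled)

# Margins from the races cited above, signed toward the eventual winner:
# a negative polled margin means the polls had the winner losing.
races = {
    "Maryland governor": (-9, 9),  # polls: GOP winner down 9; result: won by 9
    "Vermont governor": (14, 1),   # polls: Dem up 14; result: won by 1
    "Virginia Senate": (12, 1),    # polls: Dem up 12; result: won by 1
}

errors = {race: margin_error(p, a) for race, (p, a) in races.items()}
# errors: Maryland 18, Vermont 13, Virginia 11 points
```

The Maryland case is the striking one: a 9-point polled deficit flipping to a 9-point win compounds into an 18-point miss.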
Naturally, the polls’ collective poor performance caused great consternation among the polling cognoscenti. The search to find the causes began immediately and has led to some intriguing explanations.
- HuffPollster (in an email dated Nov. 14, 2014) explored three theories for the polls’ collective inability to predict the elections – a late surge favored the GOP, undecided voters broke disproportionately for the GOP, and the likely-voter models were incorrect. That’s all another way of saying the pollsters didn’t know what they were doing.
- Nate Silver of Fivethirtyeight.com suggests a more troublesome conclusion: Some pollsters are essentially cheating, slanting their results to correspond to the average of other polls. He calls it “herding.” The problem occurs when pollsters “herd” in the wrong direction, based on previous polls that by chance are several points off the “true” value. This conclusion also does not reflect well on either the ethics or competence of the industry.
- Another explanation is that the polls were essentially on target, but they were measuring what voters thought they were doing, rather than what the machines actually recorded. In short, according to this view, there is widespread election fraud – including this year – with GOP supporters stealing votes from Democrats. Several news stories on Election Day this year, which reported that some voting machines in North Carolina were flipping votes cast for the Democrat Kay Hagan to the Republican Thom Tillis, lend credence to this view.
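Silver’s “herding” mechanism can be illustrated with a toy simulation. Everything here is an illustrative assumption (the true margin, the noise level, the herd weight), not an estimate from real polls: each simulated pollster blends its raw result toward the running average of earlier polls, which makes the polls look more consistent while anchoring them to wherever the earliest polls happened to land.

```python
import random
import statistics

random.seed(3)  # reproducible illustration

TRUE_MARGIN = -9.0  # hypothetical "true" margin, in points (an assumption)
NOISE_SD = 3.0      # hypothetical sampling noise per poll (an assumption)

def run_polls(n_polls, herd_weight=0.0):
    """Simulate a sequence of polls. Each new result is blended toward
    the running average of earlier polls by herd_weight (0 = fully
    independent polling, 1 = pure copying of the average)."""
    polls = []
    for _ in range(n_polls):
        raw = random.gauss(TRUE_MARGIN, NOISE_SD)
        if polls and herd_weight > 0:
            running_avg = sum(polls) / len(polls)
            raw = (1 - herd_weight) * raw + herd_weight * running_avg
        polls.append(raw)
    return polls

independent = run_polls(20, herd_weight=0.0)
herded = run_polls(20, herd_weight=0.8)

# Herded polls show less spread (they look more "consistent"), but their
# average is chained to the chance result of the earliest polls.
spread_indep = statistics.pstdev(independent)
spread_herded = statistics.pstdev(herded)
```

The point of the sketch is that low poll-to-poll variation is not evidence of accuracy: if the first few polls land several points off the true value, herding locks the whole sequence near that error.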
Thus, we’re left with diametrically opposed reasons for the polls’ poor performance. On one hand, pollsters are to blame – either for incompetence (failing to measure late changes or to construct good estimates of voter turnout), or for deliberately fudging their results by herding toward the mean. On the other hand, the polls mostly got it right, but massive vote stealing gave the GOP unexpected victories.
Of course, it’s possible that all three explanations provide some insight into the problem. The evidence suggests that vote stealing is fairly easy to do, that herding does occur and could contribute to poll error, and that pollsters badly estimated voter turnout.
How much each item contributed to the discrepancies between the polls and election results has yet to be determined.