Earlier this month, the Pew Research Center announced that in October and November leading up to the election, “the Center will not be producing likely-voter estimates of the state of the race or making a final projection of the national vote total.”
Then just two and a half weeks later, Pew reported on a new pre-election poll, at the same time confirming that it wouldn’t be “making a final projection of the popular vote.”
Pew: Limited Election Polling, But No Comparison With Actual Election Results
OK, folks, just to be clear: Pew doesn’t mind telling us the state of the race two weeks ahead of the election; it just doesn’t want its accuracy checked by polling right before the election.
If its results two weeks out deviate significantly from the election results, no problem. Much can change in two weeks. Not Pew’s fault. But a poll conducted right before the election that deviates from the outcome – that is a problem.
What explains Pew’s refusal to test its results against reality? Does Pew think that polls in general are no longer accurate enough to tell us, one or two days before Election Day, who will win?
Definitely not, says Pew’s president, Michael Dimock. “Perhaps most importantly, this does not mean we believe that polling is no longer accurate enough to be used to predict elections.”
Got that. So, is Pew afraid more specifically of its own methodology?
Absolutely not, says Dimock. “This is not a methodological decision for the Center.”
So, what’s the problem?
“It’s a matter of research focus,” says Dimock. “We’ll be watching the final weeks of this election as closely as anyone – we’ll simply be putting our energy where we feel we can do the most to further public understanding of the attitudes and dynamics shaping the 2016 election and where the United States goes next.”
Codswallop! (as my father would say).
And there’s more such lame rationalization, if you care to read it.
But the bottom line is this: In the weeks leading up to the election, Pew wants us to trust its numbers as they describe the “attitudes and dynamics shaping the 2016 election,” but Pew doesn’t trust its own numbers enough to have them checked against actual election results.
No Election Polling At All for Gallup
Pew is not alone in this timorous position. A year ago this month, Gallup announced via Politico that “it isn’t planning any polls for the primary horse race this cycle.”
At the time, misinterpreting the announcement to apply only to primary polling, I commended Gallup for not wasting its resources on national primary polls, since primaries are in fact a series of statewide contests. I assumed, however, that Gallup would continue its 76-year tradition of polling in the general election.
But I was wrong. The article went on to say “Gallup won’t commit to tracking the general election next year.” And, from the lack of Gallup polls on the general election so far, it appears the polling firm does not intend to publish any such polls between now and Election Day.
What was Gallup’s reasoning? Gallup Poll Editor-in-Chief Frank Newport explained to Politico: “We believe to put our time and money and brainpower into understanding the issues and priorities is where we can most have an impact.”
To be clear: Unlike Pew, Gallup has done no horserace polling the whole year, at least that we know of. Newport did suggest to FiveThirtyEight, however, “that Gallup might conduct horse-race election polls without publishing the results.”
Given Gallup’s recent election polling disasters, and the fact that it might – as Newport suggests – be conducting horserace polls without publishing the results, we might infer that perhaps Gallup has given up election polling just for 2016 as it tries to work out its problems.
But Newport doesn’t admit to that possibility. Instead, there is the flimsy excuse that Gallup wants to refocus its “brainpower” on understanding the “issues and priorities” of the election – as though the latter is mutually exclusive with measuring voters’ candidate preferences.
I worked at Gallup for 13 years (1993-2006) as a senior editor of the Gallup Poll, which included three presidential elections, and in each of those election cycles, we tracked both voter preferences for candidates and voter preferences on issues.
I agree we probably overdid it. We didn’t need to do daily tracking of any of the measures – once a week would have been more than sufficient. So, it’s understandable if, this time around, Gallup doesn’t want to conduct a daily tracking poll of the horserace. But not to conduct any horserace polls at all? Gallup is clearly not lacking the brainpower or resources to do that.
Currently, instead of horserace polling, Gallup is doing daily tracking of voters’ favorability ratings of the candidates. It’s a convenient substitute – it allows Gallup to make news with updates of these measures, routinely picked up by the media (such as the Huffington Post and Politico, among others), while avoiding any chance that Gallup’s accuracy might be tested against the actual election.
Problems with Polls More Generally
In a recent New York Times op-ed, “What’s the Matter With Polling?”, Cliff Zukin, former president of the American Association for Public Opinion Research, argued that “election polling is in near crisis.” Extremely low response rates, combined with new technology, have made it difficult and very costly to obtain representative samples.
And, as Zukin notes, there isn’t any clear solution to these problems. “Our old paradigm has broken down, and we haven’t figured out how to replace it. Political polling has gotten less accurate as a result, and it’s not going to be fixed in time for 2016. We’ll have to go through a period of experimentation to see what works, and how to better hit a moving target.”
In these troubling times for polls, when it’s important to test new ways of measuring opinion against a clear standard, both Pew and Gallup – the two most recognizable and arguably most prestigious names in the polling industry – have decided to step away from the problem, rather than engage it.
They both acknowledge that their decisions mean the polls they do conduct won’t have to meet the traditional acid test of the election outcome.
As Politico noted last year, “Newport concedes that, by skipping the horse-race polls, observers won’t be able to judge Gallup’s surveys against an objective result: the election.” Newport admitted, “That is certainly one of the advantages that an election provides, and that is an external standard.”
Dimock also acknowledged that their decision not to poll the election “takes away an opportunity for us to test our survey accuracy against an objective external reality: the national vote total.”
He added, “However, elections are only one way – and not necessarily the best way – to gauge polling accuracy. We are doubling down on other research investments to compare our methodologies, and those of others, against known benchmarks.”
Whatever those other benchmarks are, they are not ones that the public would easily recognize. Elections are a clear, public way for pollsters to demonstrate that polls can be used to measure accurately what people say and do.
As Harry Enten wrote on FiveThirtyEight last year, in an article entitled “Gallup Gave Up. Here’s Why That Sucks.”:
“Gallup says it will still conduct issue polling, but here’s the problem: Elections are one of the few ways to judge a pollster’s accuracy. And that accuracy is important: We use polls for all kinds of things beyond elections.
“How do Americans feel about the economy? Do elected leaders have the trust of the public? Is there support for striking a deal with Iran? By forgoing horse-race polls, Gallup has taken away a tool to judge its results publicly.”
His comments apply equally well to Pew.
Zukin noted in his piece that “The difficulty in doing [election polling] well has caused major players to not participate. That means there’s even less legitimacy because people who know how to do this right aren’t doing it.”
Gallup and Pew, the industry giants: Too afraid to risk failure.