Did Nate Silver Help Elect Trump? Maybe…

(Credit: Voice of America)

A recent article in the Washington Post suggested that President Donald Trump’s election may have been helped by probability forecasts of an almost-certain victory by Hillary Clinton. Such forecasts were popularized by Nate Silver after he used the technique in 2008 to correctly predict the presidential winner in 49 of the 50 states and nearly all of the winners of the U.S. Senate races.

But as reported by the Post, a study by three scholars (“Projecting confidence: How the probabilistic horserace confuses and demobilizes the public”) finds that probabilistic forecasts – such as the New York Times’ final 2016 projection that Hillary Clinton had an 85% chance of winning the election, and the estimate from Nate Silver’s FiveThirtyEight that she had a 71.4% chance – likely discouraged pro-Clinton voters from turning out to vote.

The theory, according to that study (hereafter referred to as the “Confidence study”), is that probabilistic forecasting confuses voters into thinking the race is more lopsided than it actually is. With Clinton apparently leading by such a wide margin, many of her supporters may have decided they didn’t need to vote.

By contrast, the authors claim, Trump supporters were not as convinced of Clinton’s large lead, and thus were probably still motivated to vote.

The report went on to suggest that simple poll results do not have the same effect as probabilistic forecasts. Telling the public that Clinton was ahead in the polls by several percentage points would not have made her supporters overconfident and thus less likely to vote.

According to the report, probabilistic forecasting seems far more definitive than poll reports of candidate standings. For example, the final poll results (as shown on RealClearPolitics) had Clinton leading Trump 46.8% to 43.6%, just a 3.2 percentage point difference. Most people would consider that a fairly close race. But the probabilistic forecasts from Nate Silver, the New York Times, and others said she had better than a 70% chance of winning. To the public, such a forecast sounds much more certain than a 3.2 percentage point lead.
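To see how a modest lead becomes a number like “70%,” consider a minimal sketch that treats the error of the final poll average as roughly normal. This is an illustration only: the 6-point standard deviation is a hypothetical chosen for round numbers, not a parameter from Silver’s or anyone else’s model.

    # How a 3.2-point poll lead can read as a ~70% win probability, assuming
    # the poll-average error is roughly normal. error_sd is hypothetical.
    from statistics import NormalDist

    lead = 46.8 - 43.6   # Clinton's final lead in the RealClearPolitics average
    error_sd = 6.0       # assumed std. dev. of the poll-average error

    # Probability that the true margin is above zero, i.e., that Clinton wins
    p_win = 1.0 - NormalDist(mu=lead, sigma=error_sd).cdf(0.0)
    print(f"Chance of winning: {p_win:.0%}")   # about 70%

The same 3.2-point lead would read as an 80% or 90% chance under a smaller assumed error, which is part of why the probability framing sounds so much more certain than the underlying margin warrants.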

The authors of the Confidence study suggest that if there had been no probabilistic forecasts, and only the poll results had been published, Clinton supporters would not have been deluded into thinking the race was in the bag.

Instead of losing Wisconsin, Michigan, and Pennsylvania by less than one percentage point each (0.8%, 0.2%, and 0.7%, respectively), she might have won those states. And had she done so, she would also have won the electoral vote.

Ergo, according to the Confidence study, Nate Silver (who popularized probabilistic forecasting of elections) may be at least partly responsible for Clinton’s defeat and Trump’s victory.

(iMediaEthics has written to Silver for his response.)

Evidence from Study 1 of the Report 

It’s important to note that the Confidence study is not based on actual 2016 voter behavior. Instead, it uses experiments to test the impact of media poll coverage and projections on hypothetical voter responses.

The one piece of evidence in the study that is directly relevant to 2016, however, is a question that asked respondents whether they thought the leading candidate would win by “quite a bit” or whether the race was “close.”

Polls conducted in the period 1952-2016 showed that on average, “those who have stated that they expect one candidate to win by quite a bit are about two and a half percent less likely to vote than those who believe a race to be close.”

In 2016, Democrats (35%) were more likely than Republicans (20%) to say the leading candidate (Clinton) was ahead “quite a bit.” Applying the turnout penalty to that 15 percentage point gap implies, according to the study, that overall Democratic turnout in 2016 might have been about 0.4% lower than it would have been had Democrats and Republicans been equally convinced about the status of the race.
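That 0.4% figure appears to follow from simple multiplication of the study’s two numbers. Here is a minimal sketch of the implied arithmetic – our reconstruction, not a calculation the study spells out:

    # Back-of-envelope reconstruction of the study's ~0.4% estimate.
    # (An assumption about its arithmetic, not a calculation shown in the study.)
    dem_expect_blowout = 0.35   # Democrats saying Clinton led "quite a bit"
    rep_expect_blowout = 0.20   # Republicans saying the same
    turnout_penalty = 2.5       # the "two and a half percent" drop, 1952-2016 average

    belief_gap = dem_expect_blowout - rep_expect_blowout   # 0.15
    dem_turnout_drop = belief_gap * turnout_penalty        # 0.375

    print(f"Implied Democratic turnout drop: {dem_turnout_drop:.2f} points")
    # -> about 0.4 percentage points, the study's figure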

That 0.4% lower turnout is greater than the margin by which Trump won Michigan (0.2%), but only about half the size of his margins in Wisconsin (0.8%) and Pennsylvania (0.7%).

It’s important to remember that the “quite a bit” percentages come from a national sample in 2016, not individual state samples. Also, the two-and-a-half-point turnout-drop figure appears to be based on polling over a 64-year period, not 2016 alone, and certainly not on specific states. We don’t know what voters in Michigan, Wisconsin, and Pennsylvania specifically were thinking about the 2016 presidential election.

That said, does the evidence suggest that probabilistic forecasting caused Trump to win in Michigan? The evidence is not direct, but under a number of assumptions that cannot be verified (e.g., that voters in each of the three states were similar to the respondents in the Confidence study’s national sample), it is possible that probabilistic forecasting affected the outcome of the Michigan election.

Evidence from Study 2 of the Report

The second part of the Confidence study employed a simulation to test how people would vote if each vote they cast cost them some money. The notion was that in the real world, there are costs to voting (getting to the polls, waiting in line, etc.).  How would people react when presented with different scenarios – some that suggested a close election, others that suggested one of the two candidates was far ahead or behind?

According to the rules of the simulation, it cost the respondents $1 to vote, and another $2 if their candidate lost. They won $2 if their candidate won, even if they didn’t vote. The study concluded that “when faced with the costs and benefits of voting in a behavioral game, more extreme probability estimates decrease voting.” 
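Under rules like these, a vote pays off only if it changes the outcome, so the incentive to vote should shrink as the forecast grows lopsided. The sketch below illustrates that logic. The dollar amounts follow the rules described above; the pivot_probability function (one vote matters most in a dead heat) is our illustrative assumption, since the game’s actual mechanism for deciding the winner is not described here.

    # Payoff rules from the simulation, plus a hypothetical pivotal-voter model.
    VOTE_COST = 1.0      # $1 to cast a vote
    WIN_PAYOFF = 2.0     # $2 if your candidate wins, even if you didn't vote
    LOSS_PENALTY = 2.0   # an additional $2 lost if your candidate loses

    def pivot_probability(p_win: float, base: float = 0.4) -> float:
        """Hypothetical chance one vote decides the outcome: highest in a
        dead heat, falling off linearly as the race grows lopsided."""
        return base * (1.0 - 2.0 * abs(p_win - 0.5))

    def net_gain_from_voting(p_win: float) -> float:
        """Expected dollars gained by voting rather than staying home."""
        swing = WIN_PAYOFF + LOSS_PENALTY   # $4 separates winning from losing
        return pivot_probability(p_win) * swing - VOTE_COST

    for p in (0.50, 0.60, 0.70, 0.90):
        gain = net_gain_from_voting(p)
        print(f"forecast {p:.0%}: net gain {gain:+.2f} dollars"
              f" -> {'vote' if gain > 0 else 'stay home'}")
    # forecast 50%: net gain +0.60 dollars -> vote
    # forecast 60%: net gain +0.28 dollars -> vote
    # forecast 70%: net gain -0.04 dollars -> stay home
    # forecast 90%: net gain -0.68 dollars -> stay home

In this toy model the net gain from voting turns negative at around a 70% forecast, loosely echoing the tipping point discussed below.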

The “extreme” probability estimates measured in the study ranged from a 70% chance of winning up to a maximum of 90%. When told a candidate had that high a probability of winning, respondents’ voting declined by 3.4%.

However, the study found no decline in voting if the respondents were presented only with poll results, even if the poll results showed a candidate leading by as much as 60% to 40% — the maximum gap mentioned in the study.

As a general observation, it makes sense that if you think your candidate has a “great chance” of winning, you wouldn’t waste $1 to vote. But if you see your candidate with a similarly great chance of losing, you also wouldn’t waste $1 to vote. The two effects could be offsetting.

The difficulty is identifying the tipping point – the level at which the “great chance” effect kicks in. Apparently, a 60-40 split in vote share was not a large enough gap to dissuade respondents from voting, but a 70% or greater probability of winning was enough to discourage some voters.

How do all of these results relate to the 2016 election? In Michigan, Wisconsin and Pennsylvania, Nate Silver gave Clinton a 70% or greater chance of winning. That would suggest the projections depressed Democratic turnout in those states. However, it’s not at all clear that Democratic voters knew these numbers for their own states, and the probabilities can only affect people who know them.

On the other hand, many voters may have heard about the 70% or greater probability that the Times, Huffington Post, and FiveThirtyEight each put out about Clinton’s chances of winning the national election. Voters may have been considering those projections when deciding whether to vote.

In that case, if the simulation has any real-world validity, the findings suggest that Democratic voter turnout could have been affected negatively – depending on how many voters actually heard of the projections. And in Michigan, Pennsylvania and Wisconsin, lower voter turnout among Democrats, caused by the erroneous probability projections, could have been the deciding factor.

Still, there are several objections:

  1. We don’t know how many voters in the real world actually knew what the probability projections were. The study did not survey actual 2016 voters on this point, so direct evidence of what people knew is lacking. Voters who didn’t know the projections couldn’t have been affected by them.
  2. Republicans and Democrats who knew the projections should both have been affected. By the study’s own findings, Republicans who knew of projections giving Clinton a high probability of winning should have been discouraged from voting, just as Democrats were.
  3. While the study suggests that Democrats were affected more than Republicans because more of them knew about the projections, the study’s evidence for this contention is weak. There is no direct evidence of what voters did and did not know.
  4. The simulation’s rules, charging $1 to vote and paying or docking $2 depending on whether one’s candidate won or lost, do not seem especially analogous to real-world voting. I would not treat the simulation’s specific percentage findings (such as the 3.4% decline in voting) as applicable to actual elections. At best, the findings are suggestive; they are certainly not definitive descriptions of the 2016 election.

Despite these objections, it’s worth citing an article published by CNBC in mid-October 2016, when Clinton enjoyed a double-digit lead: “A new worry for Clinton: Trump’s struggles may depress Democratic voter turnout.”

The gist of the story was exactly the theme of the Confidence study: that “if Donald Trump continues to trail in opinion polls, many Democrats may simply stay home on Election Day.” The article seemed to assume that only Trump’s voters, not Clinton’s, were so passionate that they could not be dissuaded from voting for their candidate regardless of the polls, an asymmetry that is also part of the Confidence study.

In 2020, if Joe Biden is the nominee and probabilistic forecasts show him with a commanding position, will many Democrats feel complacent and fail to vote? It doesn’t seem likely, given the current passion among Democratic voters, but that’s the message of the Confidence study. 

What would Nate Silver say to that?
