Clarifying Polling Error in 2016

by Alexander Agadjanian

In the aftermath of the 2016 election, Donald Trump’s victory came as a shock to the many people who had trusted polls that showed Hillary Clinton with a (relatively) safe lead. Polling data and pollsters drew criticism for their purported inaccuracy, including from this publication. While there is some truth to my colleague’s claims about polling in 2016, parts of her piece – particularly those that exaggerate and misrepresent polling failure – need clarification, which I hope to provide here.

 

I. Polling and Forecasts: They’re Not Interchangeable

The first point worth clearing up is a simple one: polls and forecasts are separate entities. Both had their problems in 2016, but they are not interchangeable. Unlike polls, which gauge the level of support for a candidate, forecasts directly predict the election winner by assessing the probability of an event occurring. Polls factor into these probabilities, but are used differently, and are often paired with other data such as national economic conditions.

Among the six major national forecasts, the median probability of Clinton winning was 90.5 percent. Numerically, that means Trump would win roughly one out of every ten times, and given his close battleground state victories, it’s not implausible that this one-in-ten outcome is exactly what materialized. Moreover, FiveThirtyEight – which interpreted the polls more cautiously by introducing greater uncertainty into its forecast to account for a historically high number of undecided voters – gave Trump a 29 percent chance of winning. By definition, low-probability events don’t happen often, but they must occur sometimes; we just tend to forget this fact, treating less-likely occurrences as shocking while taking probable outcomes in stride.
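To see what a 29 percent chance means in practice, here is a minimal simulation sketch; the 29 percent figure is FiveThirtyEight’s, but everything else – the use of Python, the number of trials – is purely illustrative:

```python
import random

# Simulate many hypothetical "elections" in which one side has a fixed
# 29 percent chance of winning, to show how often such an event occurs.
random.seed(2016)

win_probability = 0.29
n_trials = 100_000

wins = sum(random.random() < win_probability for _ in range(n_trials))
print(f"The 29-percent side wins {wins / n_trials:.1%} of the time")
# Roughly 29% of simulated elections -- unlikely in any single election,
# but hardly shocking when it happens.
```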

Putting aside the importance of interpreting probabilities correctly, an article cited by my colleague indicated that “almost all polls were wrong about the outcome of the election.” However, most of these were Electoral College predictions, which, while both poll-driven and incorrect, are not a direct reflection on the quality of polling.

 

II. How Did National-Level Polls Do?

If we truly want to evaluate 2016 polling accuracy, we can do so at the national or the state level. The original piece didn’t distinguish between these levels, but as the final votes have been tallied, one thing has become clear: the national polls were not all “wrong.” In fact, Clinton’s popular vote lead ended up at 2.1 percentage points – a margin close to the 3-4 point lead national polls estimated. Research I’ve worked on shows that the 2016 national polls will go down as the fourth most accurate in the last ten elections: national polling was just 1.1 percentage points off the mark. Thus, at the national level, many polls were in fact quite accurate.
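As a rough illustration of the accuracy metric implied above – the gap between the polled Clinton margin and her actual popular-vote margin – consider this sketch; the individual poll numbers are made up (only the 2.1-point actual margin comes from the results), and the real calculation drew on many more polls:

```python
# Hypothetical final national polls, expressed as Clinton's margin in points.
polled_margins = [3.0, 3.2, 3.4]   # made-up values in the 3-4 point range
actual_margin = 2.1                # Clinton's actual popular-vote lead

average_polled = sum(polled_margins) / len(polled_margins)
error = average_polled - actual_margin
print(f"Average polled margin: {average_polled:.1f} points")
print(f"Error vs. actual result: {error:.1f} points")  # ~1.1 points here
```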

It’s worth noting that many point to the USC Dornsife/LA Times poll as the only one that got it right. However, that poll showed Trump ahead in the national popular vote – which he ultimately lost – not a higher chance of winning the election. In showing a three-point Trump lead, it turned out to be one of the least accurate national polls of 2016.

 

III. How Did State-Level Polls Do?

National polls estimate national popular vote shares, and while those numbers historically tend to correlate well with election outcomes, the winner is ultimately decided by state-level popular votes. State polls, much more than national ones, underestimated Trump’s support throughout the country, leading to more serious systematic error in 2016.

When evaluating polling data, one should never rely on just a single poll. That approach invites cherry-picking, and, more importantly, every individual poll carries a margin of error, among other sources of error. Aggregating many polls – either by averaging them or by deriving trend lines from them – is the best remedy, because it allows those errors to more or less cancel out. In this election, however, state-level polls almost uniformly underestimated Trump’s margin. I analyzed these data to see what was going on, gathering Clinton’s margin in each state both from the actual election results and from final-week polling estimates.
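As a sketch of the aggregation idea – a simple average, not any particular aggregator’s method – with made-up poll numbers:

```python
# Made-up final-week polls for one state: (Clinton share, Trump share).
polls = [
    (47.0, 43.0),
    (45.5, 44.5),
    (48.0, 42.0),
    (44.0, 45.0),   # an outlier showing a Trump lead
]

# Average the Clinton-minus-Trump margin so that the random sampling
# error in any single poll partly cancels against the others.
margins = [clinton - trump for clinton, trump in polls]
average_margin = sum(margins) / len(margins)
print(f"Aggregated Clinton margin: {average_margin:+.1f} points")
```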

State-level error is represented below:

[Figure: Clinton’s final-week polling margin vs. her actual margin in each state]

This graph plots Clinton’s polling lead against her actual margin in each state. If a point falls below the dashed line, Clinton’s margin was overestimated in that state; if it falls above, her margin was underestimated. In 12 of the 49 states represented here, Clinton’s margin was underestimated; in the other 37 it was overestimated, which is to say that Trump over-performed relative to state-level polls. The error was also larger in the 37 states where Clinton was overestimated (6.3 points on average) than in the 12 states where she was underestimated (2.8 points). If we extend the scope of the polls plotted here to two weeks before Election Day, Wyoming becomes another state in which Clinton’s lead was overestimated, making it 38 of 50 states in which this occurred – a pattern indicative of systematic polling error.
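The counts and averages above come from a comparison along these lines; the sketch below uses a handful of hypothetical states in place of the full 49-state dataset:

```python
# Hypothetical data: state -> (final-week polled Clinton margin, actual margin).
states = {
    "State A": (4.0, -0.5),    # Clinton overestimated (Trump over-performed)
    "State B": (6.5, 0.5),     # Clinton overestimated
    "State C": (1.0, 3.0),     # Clinton underestimated
    "State D": (-8.0, -17.0),  # Clinton overestimated
}

over  = [poll - actual for poll, actual in states.values() if poll > actual]
under = [actual - poll for poll, actual in states.values() if actual > poll]

print(f"Clinton overestimated in {len(over)} states, "
      f"average error {sum(over) / len(over):.1f} points")
print(f"Clinton underestimated in {len(under)} states, "
      f"average error {sum(under) / len(under):.1f} points")
```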

We can consider the 11 states where the margin was less than five points to be battleground states. In these states, Clinton’s lead was overestimated by three points on average – not a large error in absolute terms. It’s thus crucial to note that error in key states was far from extraordinary, but in many cases it was just large enough to swing those states to Trump. In the electoral-vote-rich states that mattered most – Michigan, Pennsylvania, and Wisconsin – Trump won by mere tens of thousands of votes.

 

IV. What Drove State-level Polling Error?

This topic is much more complex, but here’s a quick overview. My colleague’s original article made a good point: the error was likely a mix of different factors. One of the more important ones is probably inaccurate voter turnout models among pollsters. Polls overestimated turnout among Democrats and their key bases, while possibly underestimating turnout among key groups that favored Trump. Another potential explanation is a “shy Trump vote” – due to social desirability bias, some Trump voters might not have revealed their preferences to pollsters. Tested but not validated before the election, this theory has gained traction in some post-election analyses. In addition, the failure to adjust for late changes in support due to the “Comey effect” – the fallout from a highly publicized letter sent by FBI Director James Comey to Congress regarding Clinton’s emails – could have figured in as well.
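To make the turnout-model point concrete, here is a stylized sketch of how turnout assumptions alone can shift a poll’s topline margin; the group labels, margins, and turnout shares are invented for illustration and are not any pollster’s actual weights:

```python
# Clinton's margin within each (made-up) demographic group, in points.
clinton_margin_by_group = {
    "group favoring Clinton": +20.0,
    "group favoring Trump":   -10.0,
}

def topline_margin(turnout_shares):
    """Weight each group's margin by its assumed share of the electorate."""
    return sum(share * clinton_margin_by_group[group]
               for group, share in turnout_shares.items())

# The pollster's assumed electorate vs. one in which the Trump-leaning
# group turns out at a higher rate than expected.
assumed_turnout = {"group favoring Clinton": 0.45, "group favoring Trump": 0.55}
shifted_turnout = {"group favoring Clinton": 0.40, "group favoring Trump": 0.60}

print(f"Margin under assumed turnout: {topline_margin(assumed_turnout):+.1f}")
print(f"Margin under shifted turnout: {topline_margin(shifted_turnout):+.1f}")
# The same underlying preferences yield a Clinton lead 1.5 points smaller
# when the Trump-leaning group's turnout share rises by five points.
```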

 

V. Concluding Thoughts

Polls are never supposed to give a perfect picture. In 2016, they proved informative but were also hampered by systematic error. To declare that “data is dead” is an excessive and misguided reaction, one that reveals a widespread misinterpretation of forecasts and how they differ from polls. Regardless, when gauging the value of polls, it’s worth keeping in mind this paraphrased refrain of Nate Silver’s: polls are the worst way to understand an election, except for all the other ways. There remain issues to address in polling quality, but polls are still one of the most valuable tools – if not the single most valuable tool – available for understanding an election.

 

 

 

You can contact the author at Alexander.Agadjanian.18@dartmouth.edu or on Twitter as @A_agadjanian, and find more of his work here
