Building Trust Through More Realistic Expectations… And a More Accurate Margin of Error

08/29/2024

Courtney Kennedy, Pew Research Center

Public trust and confidence in polling are low. Over the years, AAPOR has made laudable trust-building efforts, including the Transparency Initiative, Standard Definitions, and journalist education.

But they aren’t enough. The public is distrustful of polling and easily disappointed when polls don’t precisely predict election results. One of the root problems is that people often expect polls to deliver greater accuracy than they’re capable of. In an era of partisan parity and a succession of close presidential elections, expecting polls to forecast the outcome may simply be beyond their capabilities.

Part of the public’s misaligned expectations stems from people forgetting, or never really understanding, that polls have a margin of error. But another problem is that the margin of error itself sets us up for failure.

For decades, pollsters have been reporting the margin of sampling error as the guide for how accurate people should expect the result to be. In truth, the margin of error is usually too small – giving readers the impression that the poll is more precise than it actually is.
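For readers who have not seen the calculation, here is a minimal sketch of how that conventional figure is typically produced, assuming a simple random sample, a 95% confidence level, and the worst-case proportion of 50%; the sample size of 1,000 is illustrative only:

```python
import math

def margin_of_sampling_error(n, p=0.5, z=1.96):
    """Conventional 95% margin of sampling error for a single proportion,
    assuming a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents is typically reported as "plus or minus 3.1 points."
print(round(100 * margin_of_sampling_error(1000), 1))  # 3.1
```

The p = 0.5 default is the worst case for a proportion, which is why pollsters commonly report a single figure for an entire survey.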

AAPOR leaders have known for decades that the margin of sampling error reported with polls is a deeply flawed metric. After the 1948 “Dewey Defeats Truman” debacle, Merton and Hatt observed that the margin of error needs attention because the “expectation of extreme precision invites subsequent disillusionment.” In Burns Roper’s charmingly titled AAPOR Presidential address “Some Things That Concern Me,” he described the margin of error as “meaningless—in fact, highly misleading” because “most people think that the margin of error is a measure of total error, not just sampling error.” Indeed, the margin of error accounts only for sampling error, implicitly telling readers that noncoverage, nonresponse, and measurement errors do not exist; omitting those error sources is why it tends to be too low.

As a field, we know better. Scores of books have been written about nonsampling errors. And then there is the data. Numerous studies have found that the margin of error reported by pollsters is too low. Jennings and Wlezien (2018) analyzed over 30,000 polls fielded between 1942 and 2017 in 45 countries. The errors they documented were “considerably greater than we would expect based on sampling error variance alone.” Domestically, the 2020 AAPOR Election Task Force reported that their analysis “clearly reveals the inadequacy of using the margin of error to describe the error on the margin. In that case, 55% of the polls had an absolute error that was larger than the reported margin of error.” Benchmarking studies also find error levels far higher than suggested by the traditional margin of error.
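To make concrete what an absolute error larger than the reported margin of error means, here is a rough sketch of that comparison on the candidate margin; the function name and all numbers below are hypothetical and are not drawn from any of the studies cited above:

```python
def error_on_the_margin(poll_dem, poll_rep, actual_dem, actual_rep):
    """Absolute error on the Democratic-minus-Republican margin, in percentage points."""
    return abs((poll_dem - poll_rep) - (actual_dem - actual_rep))

# Hypothetical poll showing 52% vs. 46% with a reported +/-3-point margin of error,
# compared against a hypothetical certified result of 49% vs. 51%.
reported_moe = 3.0
err = error_on_the_margin(52, 46, 49, 51)
print(err, err > reported_moe)  # 8 points of error on the margin, well above the reported 3
```

Part of the mismatch is structural: the reported margin of error describes a single candidate’s share, while the error on the margin of victory is roughly twice as large even under pure sampling error.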

Industry Solutions

How can we replace the margin of sampling error with something better? Shirani-Mehr, Rothschild, Goel and Gelman proposed doubling the margin of error, based on an analysis of over 4,000 state-level polls conducted between 1998 and 2014. This approach has several advantages including simplicity and grounding in actual pre-election poll data.
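As a back-of-the-envelope illustration of what that proposal implies for a typical poll (the sample size and function below are illustrative, not taken from their paper):

```python
import math

def conventional_moe(n, p=0.5, z=1.96):
    """Conventional 95% margin of sampling error for a proportion (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # illustrative sample size
reported = 100 * conventional_moe(n)   # about 3.1 points
doubled = 2 * reported                 # about 6.2 points under the doubling proposal
print(round(reported, 1), round(doubled, 1))
```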

Sharon Lohr, Mike Brick, Andrew Mercer and I are exploring a similar-in-spirit approach designed to accommodate a broader range of question types and account for a given poll’s design. We greatly appreciated feedback from colleagues when we first presented this idea at the 2024 AAPOR Conference in Atlanta. Our hope is that collectively our profession can provide audiences with a more accurate summary of a poll’s precision. A more accurate margin of (total) error should, in turn, set more realistic expectations for our work. And the more we meet those expectations, the more trust we can build. Ideally, this effort would be but one in a larger suite of activities to repair public trust in polling.